NOTES FROM THE AI FRONTIER
MODELING THE IMPACT OF AI ON THE WORLD ECONOMY

DISCUSSION PAPER
SEPTEMBER 2018

Jacques Bughin | Brussels
Jeongmin Seong | Shanghai
James Manyika | San Francisco
Michael Chui | San Francisco
Raoul Joshi | Stockholm

Since its founding in 1990, the McKinsey Global Institute (MGI) has sought to develop a deeper understanding of the evolving global economy. As the business and economics research arm of McKinsey & Company, MGI aims to provide leaders in the commercial, public, and social sectors with the facts and insights on which to base management and policy decisions.

MGI research combines the disciplines of economics and management, employing the analytical tools of economics with the insights of business leaders. Our “micro-to-macro” methodology examines microeconomic industry trends to better understand the broad macroeconomic forces affecting business strategy and public policy. MGI’s in-depth reports have covered more than 20 countries and 30 industries. Current research focuses on six themes: productivity and growth, natural resources, labor markets, the evolution of global financial markets, the economic impact of technology and innovation, and urbanization. Recent reports have assessed the digital economy, the impact of AI and automation on employment, income inequality, the productivity puzzle, the economic benefits of tackling gender inequality, a new era of global competition, Chinese innovation, and digital and financial globalization.

MGI is led by three McKinsey & Company senior partners: Jacques Bughin, Jonathan Woetzel, and James Manyika, who also serves as the chairman of MGI. Michael Chui, Susan Lund, Anu Madgavkar, Jan Mischke, Sree Ramaswamy, and Jaana Remes are MGI partners, and Mekala Krishnan and Jeongmin Seong are MGI senior fellows.

Project teams are led by the MGI partners and a group of senior fellows, and include consultants from McKinsey offices around the world.
These teams draw on McKinsey’s global network of partners and industry and management experts. Advice and input to MGI research are provided by the MGI Council, members of which are also involved in MGI’s research. MGI Council members are drawn from around the world and from various sectors and include Andrés Cadena, Sandrine Devillard, Richard Dobbs, Tarek Elmasry, Katy George, Rajat Gupta, Eric Hazan, Eric Labaye, Acha Leke, Scott Nyquist, Gary Pinkus, Sven Smit, Oliver Tonby, and Eckart Windhagen. In addition, leading economists, including Nobel laureates, act as advisers to MGI research. The partners of McKinsey fund MGI’s research; it is not commissioned by any business, government, or other institution. For further information about MGI and to download reports, please visit www.mckinsey.com/mgi.

Copyright © McKinsey & Company 2018

IN BRIEF
NOTES FROM THE AI FRONTIER: MODELING THE IMPACT OF AI ON THE WORLD ECONOMY

WHAT’S INSIDE?
In brief, page 1
Introduction, page 2
1. An approach to assessing the economic impact of AI, page 9
2. AI has the potential to be a significant driver of economic growth, page 12
3. Along with large economic gains, AI may bring wider gaps, page 30
4. Considering key questions can help economic entities decide how to optimize for AI, page 46
Technical appendix, page 49
Acknowledgments, page 61

Continuing the McKinsey Global Institute’s ongoing exploration of artificial intelligence (AI) and its broader implications, this discussion paper focuses on modeling AI’s potential impact on the economy. We take a micro-to-macro and simulation-based approach in which the adoption of AI by firms arises from economic and competition-related incentives, and macro factors have an influence. We consider not only the possible benefits but also the costs related to implementation and disruption.

AI has large potential to contribute to global economic activity. Looking at several broad categories of AI technologies, we model trends in adoption, using early adopters and their performance as a leading indicator of how businesses across the board may (want to) absorb AI. Based on early evidence, our average simulation shows around 70 percent of companies adopting at least one of these types of AI technologies by 2030, and less than half of large companies may be using the full range of AI technologies across their organizations. In the aggregate, and netting out competition effects and transition costs, AI could potentially deliver additional economic output of around $13 trillion by 2030, boosting global GDP by about 1.2 percent a year.

The economic impact may emerge gradually and be visible only over time. Our simulation suggests that the adoption of AI by firms may follow an S-curve pattern—a slow start given the investment associated with learning and deploying the technology, and then acceleration driven by competition and improvements in complementary capabilities. As a result, AI’s contribution to growth may be three or more times higher by 2030 than it is over the next five years. Initial investment, ongoing refinement of techniques and applications, and significant transition costs might limit adoption by smaller firms.

A key challenge is that adoption of AI could widen gaps between countries, companies, and workers. AI may widen performance gaps between countries. Those that establish themselves as AI leaders (mostly developed economies) could capture an additional 20 to 25 percent in economic benefits compared with today, while emerging economies may capture only half their upside. There could also be a widening gap between companies, with front-runners potentially doubling their returns by 2030 and companies that delay adoption falling behind. For individual workers, too, demand—and wages—may grow for those with digital and cognitive skills and with expertise in tasks that are hard to automate, but shrink for workers performing repetitive tasks.
How companies and countries choose to embrace AI will likely impact outcomes. The pace of AI adoption and the extent to which companies choose to use AI for innovation rather than efficiency gains alone are likely to have a large impact on economic outcomes. Similarly, how countries choose to embrace these technologies (or not) will likely impact the extent to which their businesses, economies, and societies can benefit. The race is already on among companies and countries. In all cases, there are trade-offs that need to be understood and managed appropriately in order to capture the potential of AI for the world economy.

The results of this modeling build upon, and are generally consistent with, our previous research, but add new results that deepen our understanding of how AI may touch off a competitive race with major implications for firms, labor markets, and broader economies, and reinforce our perception of the imperative for businesses, government, and society to address the challenges that lie ahead for skills and the world of work.

INTRODUCTION

The role of artificial intelligence tools and techniques in business and the global economy is a hot topic. This is not surprising given recent progress, breakthrough results, and demonstrations of AI, as well as the increasingly pervasive products and services already in wide use. All of this has led to speculation that AI may usher in radical—arguably unprecedented—changes in the way people live and work. This discussion paper is part of MGI’s ongoing effort to understand AI, the future of work, and the impact of automation on skills. It largely focuses on the impact of AI on economic growth.1 Our hope is that this effort helps us to broaden our understanding of how AI may impact economic activity, and potentially touch off a competitive race with major implications for firms, labor markets, and economies. Three key findings emerge:

AI has large potential to contribute to global economic activity.
AI is not a single technology but a family of technologies. In this paper, we look at five broad categories of AI technologies: computer vision, natural language, virtual assistants, robotic process automation, and advanced machine learning. Companies will likely use these tools to varying degrees. Some will take an opportunistic approach, testing only one technology and piloting it in a specific function. Others may be bolder, adopting all five and then absorbing them across their entire organization. For the sake of our modeling, we define the first approach as adoption and the second as full absorption.2 Between these two poles will be many companies at different stages of adoption; the model captures partial impact, too. By 2030, our average simulation shows, some 70 percent of companies may have adopted at least one type of AI technology, but less than half may have fully absorbed the five categories.3 The pattern of adoption and full absorption may be relatively rapid—at the high end of what has been observed with other technologies. However, several barriers may hinder rapid adoption. For instance, late adopters may find it difficult to generate impact from AI because AI opportunities have already been captured by front-runners, and they lag behind in developing capabilities and attracting talent.4 Nevertheless, at the average level of adoption implied by our simulation, and netting out competition effects and transition costs, AI could potentially deliver additional global economic activity of around $13 trillion by 2030, or about 16 percent higher cumulative GDP compared with today. This amounts to about 1.2 percent additional GDP growth per year. If delivered, this impact would compare well with that of other general-purpose technologies through history.5 Consider, for instance, that the introduction of steam engines during the 1800s boosted labor productivity by an estimated 0.3 percent a year, the impact from robots during the 1990s around 0.4 percent, and the spread of IT during the 2000s 0.6 percent.6

The economic impact may emerge gradually and be visible only over time. The impact of AI may not be linear, but may build up at an accelerating pace over time. AI’s contribution to growth may be three or more times higher by 2030 than it is over the next five years. An S-curve pattern of AI adoption is likely—a slow start due to substantial costs and investment associated with learning and deploying these technologies, but then an acceleration driven by the cumulative effect of competition and an improvement in complementary capabilities. The fact that it takes time for productivity to unfold may be reminiscent of the Solow Paradox.7 Complementary management and process innovations will likely be necessary to take full advantage of AI innovations.8 It would be a misjudgment to interpret this “slow-burn” pattern of impact as proof that the effect of AI will be limited. The size of benefits for those who move into these technologies early will build up in later years at the expense of firms with limited or no adoption.

1 A version of this discussion paper is published in a forthcoming white paper on AI published by the International Telecommunication Union but, as with all MGI research, is independent and has not been commissioned or sponsored in any way. MGI research on the future of work, automation, skills, and AI can be read and downloaded at mckinsey.com/mgi/our-research/technology-and-innovation. Key publications relevant to this paper include A future that works: Automation, employment, and productivity, McKinsey Global Institute, January 2017; Jobs lost, jobs gained: Workforce transitions in a time of automation, McKinsey Global Institute, December 2017; Notes from the AI frontier: Insights from hundreds of use cases, McKinsey Global Institute, April 2018; and Skill shift: Automation and the future of the workforce, McKinsey Global Institute, May 2018. For a data visualization of AI and other analytics, see Visualizing the uses and potential impact of AI and other analytics, McKinsey Global Institute, April 2018 (mckinsey.com/featured-insights/artificial-intelligence/visualizing-the-uses-and-potential-impact-of-ai-and-other-analytics).
2 In this paper, we use the terms “adoption,” “diffusion,” and “absorption.” We define adoption as investment in a technology, diffusion as how adoption spreads—the process by which an innovation is communicated over time among the participants in a social system—and absorption as how technology is used within a firm. “Full absorption” is when a company uses the adopted technology for all operational purposes across its broad workflow system. These definitions align with those in academic literature. See, for instance, Tomaž Turk and Peter Trkman, “Bass model estimates for broadband diffusion in European countries,” Technological Forecasting and Social Change, 2012, Volume 79, Issue 1; David H. Wong et al., “Predicting the diffusion pattern of internet-based communication applications using bass model parameter estimates for email,” Journal of Internet Business, 2011, Issue 9; and Kenneth L. Kraemer, Sean Xu, and Kevin Zhu, “The process of innovation assimilation by firms in different countries: A technology diffusion perspective on e-business,” Management Science, October 1, 2006.
3 These percentages need to be understood not in terms of numbers of firms per se, but in terms of their share of economic activity.
4 These industry dynamics between front-runners and followers are called the “rank effect” in the literature on technology adoption. See Paul Stoneman and John Vickers, “The assessment: The economics of technology policy,” Oxford Review of Economic Policy, 1988, Volume 4, Issue 4.
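The S-curve dynamics and growth arithmetic described above can be sketched numerically. The snippet below uses the Bass diffusion model cited in the footnotes on adoption and diffusion; the coefficients p and q are illustrative assumptions chosen to land near roughly 70 percent adoption 13 years out, not estimates from our simulation.

```python
import math

def bass_cumulative(t, p, q):
    """Bass diffusion: cumulative share of eventual adopters at time t.
    p = coefficient of innovation, q = coefficient of imitation."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

# Hypothetical coefficients (assumptions for illustration only), chosen so
# that roughly 70-75% of firms have adopted 13 years out (about 2017-2030).
p, q = 0.015, 0.30

for year in range(0, 14):
    share = bass_cumulative(year, p, q)
    bar = "#" * int(share * 40)
    print(f"year {year:2d}: {share:6.1%} {bar}")

# The growth arithmetic in the text: ~1.2% additional GDP growth per year,
# compounded over 13 years, is consistent with ~16% higher cumulative GDP.
print(f"cumulative uplift at 1.2%/yr over 13 years: {1.012 ** 13 - 1:.1%}")
```

Run as-is, the curve shows the slow start (well under a fifth of firms adopting in the first five years) followed by the competition-driven acceleration discussed above.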
A key challenge is that adoption of AI could widen gaps between countries, companies, and workers. AI could deliver a boost to economic activity, but the distribution of benefits is likely to be uneven:

— Countries. AI may widen gaps between countries, reinforcing the current digital divide.9 Countries may need different strategies and responses because AI adoption levels vary. AI leaders (mostly in developed countries) could increase their lead in AI adoption over developing countries. Leading countries could capture an additional 20 to 25 percent in net economic benefits compared with today, while developing countries may capture only about 5 to 15 percent. Many developed countries may have no choice but to push AI to capture higher productivity growth as their GDP growth momentum slows, in many cases partly reflecting the challenges related to aging populations. Moreover, wage rates in these economies are high, which means that there is more incentive than in low-wage, developing countries to substitute labor with machines. Developing countries tend to have other ways to improve their productivity, including catching up with best practices and restructuring their industries, and may therefore have less incentive to push for AI (which, in any case, may offer them a smaller economic benefit than advanced economies). This does not mean that developed economies are set to make the best use of AI and that developing economies are destined to lose the AI race. Countries can choose to strengthen the foundations, enablers, and capabilities needed to reap the potential of AI, and be proactive in accelerating adoption. Some developing countries are already being ambitious in pushing AI. For instance, China, as we have noted, has a national strategy in place to become a global leader in the AI supply chain, and is investing heavily.10

5 We acknowledge that direct comparison of the impact of AI with that of past technological innovations may not realistically be possible as our quantification of the impact of AI includes a family of technologies. Such comparisons are mainly to indicate a broad sense of magnitude.
6 A future that works: Automation, employment, and productivity, McKinsey Global Institute, January 2017.
7 The Solow Paradox is a phenomenon in which increased investment in IT is not visible in productivity statistics. For an in-depth debate, see Mekala Krishnan, Jan Mischke, and Jaana Remes, “Is the Solow Paradox back?” McKinsey Quarterly, June 2018.
8 Solving the productivity puzzle: The role of demand and the promise of digitization, McKinsey Global Institute, February 2018.
9 Jan A.G.M. van Dijk, “The evolution of the digital divide: The digital divide turns to inequality of skills and usage,” in Jacques Bus et al., eds., Digital Enlightenment Yearbook 2012, Amsterdam, Netherlands: IOS Press, 2012.

— Companies. AI technologies could lead to a performance gap between front-runners on one side and slow adopters and nonadopters on the other. At one end of the spectrum, front-runners (companies that fully absorb AI tools across their enterprises over the next five to seven years) are likely to benefit disproportionately. By 2030, they could potentially double their cash flow (economic benefit captured minus associated investment and transition costs), which implies additional annual net cash flow growth of about 6 percent for more than the next decade.11 Front-runners tend to have a strong starting digital base, a higher propensity to invest in AI, and positive views of the business case for AI. Although our simulation treats front-runners as one group, in reality, this category is not homogeneous. Some current AI innovators and creators have big starting endowments of data, computing power, and specialized talent.
Other early adopters may not engage in creating these technologies but may be innovative in how they deploy them. At the other end of the spectrum is a long tail of laggards that do not adopt AI technologies at all or that have not fully absorbed them in their enterprises by 2030. This group may experience around a 20 percent decline in their cash flow from today’s levels, assuming the same cost and revenue model as today. One important driver of this profit pressure is the existence of strong competitive dynamics among firms, which could shift market share from laggards to front-runners and may prompt debate on the unequal distribution of the benefits of AI.

— Workers. A widening gap may also unfold at the level of individual workers. Demand for jobs could shift away from repetitive tasks toward those that are socially and cognitively driven and others that involve activities that are hard to automate and require more digital skills.12 Job profiles characterized by repetitive tasks and activities that require low digital skills may experience the largest decline as a share of total employment, from some 40 percent to near 30 percent by 2030. The largest gain in share may be in nonrepetitive activities and those that require high digital skills, rising from some 40 percent to more than 50 percent. These shifts in employment would have an impact on wages. We simulate that around 13 percent of the total wage bill could shift to categories requiring nonrepetitive and high digital skills, where incomes could rise, while workers in the repetitive and low digital skills categories may potentially experience stagnation or even a cut in their wages.
The share of the total wage bill of the latter group could decline from 33 to 20 percent.13 Direct consequences of this widening gap in employment and wages would be an intensifying war for people, particularly those skilled in developing and utilizing AI tools, and structural excess supply for a still relatively high portion of people lacking the digital and cognitive skills necessary to work with machines.

10 Artificial intelligence: Implications for China, McKinsey Global Institute, April 2017.
11 Large firms have a competitive advantage in adopting and absorbing AI ahead of industry peers. MGI’s econometric simulation suggests that they have adoption rates around ten percentage points higher than the average. Similarly, organizations that have a more established digital culture (including elements such as institutionalized user experience design thinking, a nimble organization, scaled and integrated agile ways of working across business and IT domains, and a leadership culture that fosters and enables a network of empowered teams to reach defined goals) are in a better position to accelerate the adoption of AI: their adoption rates are around eight percentage points higher than those of companies that do not have a digital culture. In fact, early corporate adopters will benefit from the exponential impact, potentially gaining three times more economic benefits from AI than followers.
12 Assessment of the impact of AI on individuals in this research mainly focuses on workers. A more complete view of the impact on individuals would include discussion of the effect of AI on users, citizens, and consumers.
13 For more detail on the impact of automation on wages, see Daron Acemoglu and Pascual Restrepo, Low-skill and high-skill automation, NBER working paper number 24119, December 2017.
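The compounding behind the front-runner and wage-share figures above can be checked with a few lines of arithmetic. The 12-year horizon below is an assumption standing in for “more than the next decade”; the share figures are the ones quoted in the text.

```python
# Front-runners: ~6% additional annual net cash-flow growth, sustained for
# roughly a dozen years (an assumed horizon), compounds to about a doubling.
growth = 1.06 ** 12
print(f"cash-flow multiple after 12 years at 6%/yr: {growth:.2f}x")

# Wage-bill shares (percent of total) for repetitive, low-digital-skill work,
# as quoted in the text. The ~13-point decline mirrors the ~13% of the total
# wage bill simulated to shift toward nonrepetitive, high-digital categories.
repetitive_low_digital = {"today": 33, "2030": 20}
shift = repetitive_low_digital["today"] - repetitive_low_digital["2030"]
print(f"wage-bill share shift: {shift} percentage points")
```

The point of the check is simply that modest-sounding annual rates, held for a decade or more, produce the large cumulative gaps described above.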
Gaps may be widening among firms, workers, and countries, but measures can be taken to manage the transition and steer economies toward higher productivity and job growth. The disruption that comes with AI may lead to some firms leaving the market and some workers losing jobs. There will be major challenges for individuals transitioning to new jobs. However, if they are given the support they need to develop and refresh their skills and return to the labor market, then resources can be redeployed to more productive parts of the economy.

THE AI REVOLUTION IS NOT IN ITS INFANCY, BUT THE MAJORITY OF THE ECONOMIC IMPACT OF AI IS YET TO COME

Substantial progress in many areas has accelerated the development of AI, which has the potential to reshape the competitive landscape of companies, jobs, and the economic development of countries. Over the past few years, there have been many breakthrough results and announcements in natural language processing, machine vision, and games like Go.14 In addition, many products and services already in wide use employ advances in AI such as personal assistants and facial recognition systems. Much of this progress has been the result of advances in three areas:

1. Step-change improvements in computing power and capacity. At the silicon level, there has been continuous progress from central processing units to graphics processing units (GPUs). Today’s GPUs can be 40 to 80 times faster than the quickest versions available in 2013. Silicon-level development may put early movers (front-runners in this analysis) at an advantage because they have the resources to drive breakthroughs. Companies such as Google are pushing further with tensor processing units. Many more silicon-level developments are underway. At the cluster level, cloud solutions offer much cheaper computing and storage services on demand.
Microsoft offers a hybrid solution combining the public and private cloud that helps companies rapidly ramp up their computing resources and handle spikes in need without large capital outlays.

2. Explosion of data. The world creates an unprecedented amount of data every day, feeding algorithms the raw material needed to produce new insights. International Data Corporation estimates that there may be 163 zettabytes (a zettabyte is one trillion gigabytes) of data by 2025, or ten times the data generated in 2016.15 Enormous diversity in the data being generated means that organizing and analyzing these data are extremely challenging, but also that there is an unprecedented opportunity to extract value from data that were not available in the past.16

3. Progress in algorithms. The techniques and algorithms underlying AI have continued to be developed. Recent advances in deep learning techniques are delivering step changes in the accuracy of classification and prediction.17 Deep learning uses large-scale neural networks (the most common are convolutional and recurrent neural networks) that learn through the use of training data and backpropagation algorithms. Also emerging are meta-learning techniques that attempt to automate the design of machine-learning models and neural networks, for instance for classifying images in large-scale data sets. Also notable is the development of reinforcement learning, a technique in which algorithms learn tasks by trial and error, improving their performance through repetition and, in many cases—the game of Go being one example—surpassing human capabilities.18 In one-shot learning, an AI model can learn about a topic even where there is only a small number—even one—of real-world examples, reducing the need for large data sets.19

While many of the most public breakthroughs have largely been associated with a relatively small group of individuals, companies, and institutions and have mainly, but not exclusively, been US-led, this is changing fast. An increasing number of countries are now starting to put more emphasis on AI initiatives, with China in particular making huge strides (see Box 1, “More countries are taking measures to advance AI”).

14 For instance, Eric Horvitz and colleagues at Microsoft Research have demonstrated in-stream supervision in which data can be labeled in the course of natural usage. See Eric Horvitz, “Machine learning, reasoning, and intelligence in daily life: Directions and challenges,” Proceedings of Artificial Intelligence Techniques for Ambient Intelligence, Hyderabad, India, January 2007. AlphaGo Zero used a new form of reinforcement learning to defeat its predecessor AlphaGo after learning to play the game Go from scratch. See Demis Hassabis et al., AlphaGo Zero: Learning from scratch, deepmind.com. DeepMind researchers have had promising results with transfer learning where training is simulated and then transferred to physical robots. See Andrei A. Rusu et al., Sim-to-real robot learning from pixels with progressive nets, arxiv.org, October 2016. For more, see Michael Chui, James Manyika, and Mehdi Miremadi, “What AI can and can’t do (yet) for your business,” McKinsey Quarterly, January 2018.
15 John Gantz, David Reinsel, and John Rydning, Data age 2025: The evolution of data to life-critical, IDC white paper, April 2017.
16 Behavioral, transactional, environmental, and geospatial data are available from sources including the web, social media, industrial sensors, payment systems, cameras, wearable devices, and human entry, for example. See The age of analytics: Competing in a data-driven world, McKinsey Global Institute, December 2016.
17 Yoshua Bengio, Aaron Courville, and Ian Goodfellow, Deep Learning, Cambridge, MA: MIT Press, 2016.
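As a minimal illustration of the training loop described above, the sketch below trains a tiny neural network on the XOR function, using plain gradient descent and backpropagation. The architecture, learning rate, and epoch count are arbitrary choices for demonstration, not a recipe from this paper.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: XOR, a classic function a single linear unit cannot fit.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 3  # hidden units (arbitrary)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

losses = []
for epoch in range(2000):
    total = 0.0
    for x, y in data:
        # Forward pass: hidden activations, then output.
        h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
        out = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
        total += (out - y) ** 2
        # Backward pass (chain rule) for squared-error loss with sigmoids.
        d_out = 2 * (out - y) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])  # uses w2 before update
            w2[j] -= lr * d_out * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out
    losses.append(total)

print(f"loss over training: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The same forward/backward structure, scaled up by many orders of magnitude and run on the GPU and TPU hardware discussed above, is what the deep learning systems in the text rely on.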
Most companies currently face significant limitations on their ability to leverage AI (see Box 2, “Current technical limitations to leveraging AI, and some early progress”). However, the AI revolution is certainly no longer in its infancy. These technologies are already widely used in business. MGI analysis of more than 400 cases in which companies and organizations could potentially use AI found that AI is already relatively applicable to real business problems and can have significant impact in areas including marketing and sales, supply chain management, and manufacturing. The research found that three deep learning techniques—feed forward neural networks, recurrent neural networks, and convolutional neural networks—together could enable the creation of between $3.5 trillion and $5.8 trillion in value each year in nine business functions in 19 countries. This is the equivalent of 1 to 9 percent of 2016 sector revenue.20

A broad range of companies already use AI tools in a wide variety of ways and functions. As of early 2018, AI was used in supply chains (for instance, Amazon’s Kiva robot automation in retail logistics); fixed assets (for example, preventive maintenance of assets by companies such as Neuron Soundware, which uses artificial auditory cortexes to simulate human sound interpretation, and can therefore automate the detection and identification of causes of potential breakdown of equipment); R&D (for instance, QuantumBlack’s use of AI to streamline R&D in Formula 1 racing); and sales and marketing (for example, AI-powered search by Baidu, and Digiday’s AI-based predictive sales targeting for business-to-business salespeople).21 As funding becomes more widely available, the skills to deploy and manage AI are likely to spread to a broader swath of companies and take hold across economies.

18 Matt Burgess, “DeepMind’s latest AI breakthrough is its most significant yet,” Wired, October 18, 2017.
19 Yan Duan et al., One-shot imitation learning, arxiv.org, December 2017.
20 Notes from the AI frontier: Insights from hundreds of use cases, McKinsey Global Institute, April 2018.
21 Jeremy Hsu, “Deep learning AI listens to machines for signs of trouble,” IEEE Spectrum, December 27, 2016.

Box 1. More countries are taking measures to advance AI

A number of countries have announced initiatives and plans to drive the use of AI in their economies. Here we give just a few examples as of mid-2018:

China. The government is prioritizing AI, including its promotion in, for instance, its 13th Five-Year Plan (which runs from 2016 to 2020), its Internet Plus and AI plans from 2016 to 2018, and a “new generation AI plan.” China has stated that it aims to create a domestic AI market of 1 trillion renminbi ($150 billion) by 2020 and become a world-leading AI center by 2030.1 The private sector is pushing actively for AI, too. Three of China’s internet giants—Alibaba, Baidu, and Tencent—as well as iFlytek, a voice recognition specialist, have joined a “national team” to develop AI in areas such as autonomous vehicles, smart cities, and medical imaging.

Europe. European Union (EU) member states have announced their intention to collaborate on AI more actively across borders to ensure that Europe is competitive in these technologies and that they can tackle their social, economic, ethical, and legal ramifications together.2 The EU has called for $24 billion to be invested in AI research by 2020.3 A number of European countries have also been driving national initiatives.
The French government has announced an initiative to double the number of people studying and researching AI projects, set new boundaries for data sharing, and invest $1.85 billion to fund research and startups.4 The United Kingdom has published a comprehensive plan to strengthen the core foundation of AI in an “artificial intelligence sector deal” and has stated its aim to lead in the field of AI ethics.5

Asia (outside China). The government of South Korea set up a Presidential Fourth Industrial Revolution Committee in 2017 and announced that it would invest $2 billion by 2022 to strengthen its capabilities in AI R&D.6 Singapore has launched an AI Singapore national initiative to enhance AI capabilities by forming a partnership of government institutions.7

Canada. International research institute CIFAR is leading the government’s Pan-Canadian Artificial Intelligence Strategy with three new AI institutes: the Alberta Machine Intelligence Institute in Edmonton, the Vector Institute in Toronto, and MILA in Montreal; these cities are Canada’s three major AI centers.8

1 China to publish guideline on AI development: Minister, The State Council of the People’s Republic of China, March 11, 2018.
2 EU member states sign up to cooperate on artificial intelligence, European Commission, April 10, 2018.
3 Aoife White, “EU calls for $24 billion in AI to keep with China, U.S.,” Bloomberg News, May 1, 2018.
4 Romain Dillet, “France wants to become an artificial intelligence hub,” TechCrunch, March 29, 2018.
5 Artificial Intelligence Sector Deal, HM Government, 2018; and UK can lead the way on ethical AI, says Lords Committee, UK Parliament, April 16, 2018.
6 “South Korea aims high on AI, pumps $2 billion into R&D,” SyncedReview, May 16, 2018.
7 AI Singapore (aisingapore.org/about-ai-singapore/).
8 Pan-Canadian Artificial Intelligence Strategy, CIFAR (cifar.ca/ai/pan-canadian-artificial-intelligence-strategy).
Box 2. Current technical limitations to leveraging AI, and some early progress

Businesses have made much progress in applying AI. However, the following five technical factors are arguably limiting its application:1

Labeled training data. In supervised learning, machines do not learn by themselves but need to be taught, which means that humans must label and categorize the underlying training data. However, promising new techniques are emerging to reduce the time spent on such efforts, including reinforcement learning, in-stream supervision, and generative adversarial networks, in which two networks compete with each other to improve their understanding of a concept.2

Obtaining sufficiently large data sets. In many business use cases, it can be difficult to create or obtain data sets large enough to train algorithms. One example is the limited pool of clinical-trial data necessary to predict healthcare treatment outcomes more accurately. Players with access to vast quantities of data may have an advantage. At present, the availability of labeled data is critical, since most current AI models are trained through supervised learning, and categorizing data correctly requires a huge amount of human time. This may change as technologies and algorithms develop. One technique that could reduce the need for large data sets is one-shot learning, in which an AI model is pretrained on a set of related data and can then learn even from a small number of real-world examples.

Difficulty explaining results. It is often difficult to explain the results of large, complex neural-network-based systems.
One development—still at an early stage—that could improve the explainability or transparency of models is local interpretable model-agnostic explanations (LIME), which attempt to identify which parts of the input data a trained model relies on most to make predictions. Another technique that is becoming relatively well established is the application of generalized additive models. They use single-feature models, which limit interactions between features and enable users to interpret each one more easily.

Difficulty generalizing. AI models still have difficulty carrying their experiences from one set of circumstances to another, which leaves companies having to commit resources to training new models even when use cases are relatively similar to previous ones. Transfer learning, in which an AI model is trained to apply what it has learned from one task to another, is showing promise.3

Risk of bias. The first four limitations may be solved as technology advances, but bias—in data in particular, but also in algorithms—has raised broad social concerns and may be challenging to resolve.4 A great deal of academic, nonprofit, and private-sector research is now underway on this issue.

1 "What AI can and can't do (yet) for your business," McKinsey Quarterly, January 2018.
2 Eric Horvitz, "Machine learning, reasoning, and intelligence in daily life: Directions and challenges," Proceedings of Artificial Intelligence Techniques for Ambient Intelligence, Hyderabad, India, January 2007.
3 See John Guttag, Eric Horvitz, and Jenna Wiens, "A study in transfer learning: Leveraging data from multiple hospitals to enhance hospital-specific predictions," Journal of the American Medical Informatics Association, 2014, Volume 21, Number 4.
4 For more, see Notes from the AI frontier: Insights from hundreds of use cases, McKinsey Global Institute, April 2018.
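The local-explanation idea discussed in Box 2 can be illustrated in a few lines: perturb an input, query the model as a black box, and fit a proximity-weighted linear surrogate whose coefficients indicate which features the model relies on near that input. The sketch below is illustrative only, using NumPy and a hypothetical `black_box` function; it is a minimal LIME-style demonstration, not the actual LIME library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: we can query predictions but not internals.
# It secretly depends mostly on feature 0 and a little on feature 1.
def black_box(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def explain_locally(model, x, n_samples=500, scale=0.1):
    """Fit a proximity-weighted linear surrogate around x (LIME-style sketch)."""
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))  # perturb x
    y = model(X)                                              # query the model
    # Weight perturbed points by closeness to x.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    A = np.hstack([X, np.ones((n_samples, 1))])               # add an intercept
    # Weighted least squares: solve (A^T W A) beta = A^T W y.
    Aw = A * w[:, None]
    beta = np.linalg.solve(Aw.T @ A, Aw.T @ y)
    return beta[:-1]  # per-feature local importances

x0 = np.array([1.0, 1.0, 1.0])
importances = explain_locally(black_box, x0)
# Feature 0 should dominate the local explanation.
```

Because the toy model here is linear, the surrogate recovers its behavior exactly; for a real nonlinear model the coefficients would approximate its local slope around `x0`.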
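Transfer learning, mentioned in Box 2 under "Difficulty generalizing," can also be shown in miniature: fit a model on a data-rich source task, then adapt it to a related target task with only a handful of examples by learning a small, regularized correction rather than relearning everything. The NumPy sketch below uses synthetic linear-regression data; every name and number is illustrative, not from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Source task: plentiful data for a linear relationship y = w . x.
w_source_true = np.array([4.0, -3.0, 2.0])
X_src = rng.normal(size=(1000, 3))
y_src = X_src @ w_source_true + rng.normal(0.0, 0.1, size=1000)

# Step 1: learn the source task with ordinary least squares.
w_src, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# Target task: closely related (slightly different weights), but only
# five labeled examples -- too few to learn reliably from scratch.
w_target_true = w_source_true + np.array([0.2, 0.0, -0.1])
X_tgt = rng.normal(size=(5, 3))
y_tgt = X_tgt @ w_target_true

# Step 2: transfer -- keep the source weights and learn only a small
# ridge-regularized correction from the target residuals.
lam = 1.0
A = X_tgt.T @ X_tgt + lam * np.eye(3)
delta = np.linalg.solve(A, X_tgt.T @ (y_tgt - X_tgt @ w_src))
w_transfer = w_src + delta

# Baseline: ridge regression on the tiny target data set alone.
w_scratch = np.linalg.solve(A, X_tgt.T @ y_tgt)

err_transfer = np.linalg.norm(w_transfer - w_target_true)
err_scratch = np.linalg.norm(w_scratch - w_target_true)
# Transferring typically beats learning from scratch on tiny data.
```

The same pattern appears at larger scale when a pretrained neural network's layers are reused and only a small head is fine-tuned on the new task.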