Economics of Artificial Intelligence


An NBER conference on the Economics of Artificial Intelligence took place in Toronto on September 26-27. Research Associates Ajay K. Agrawal, Joshua S. Gans, and Avi Goldfarb, all of the University of Toronto, and Catherine Tucker of MIT organized the meeting, which was sponsored by the Alfred P. Sloan Foundation and the Creative Destruction Lab. These researchers' papers were presented and discussed:


Julian Tszkin Chan, Bates White Economic Consulting, and Weifeng Zhong, Mercatus Center at George Mason University

Reading China: Predicting Policy Change with Machine Learning

For the first time in the literature, Chan and Zhong develop a quantitative indicator of the Chinese government's policy priorities over a long period of time, which they call the Policy Change Index (PCI) for China. The PCI is a leading indicator of policy changes that covers the period from 1951 to the first quarter of 2019, and it can be updated in the future. It is built from two building blocks: the full text of the People's Daily -- the official newspaper of the Communist Party of China -- as input data, and a set of machine learning techniques to detect changes in how this newspaper prioritizes policy issues. Because of the unique role of the People's Daily in China's propaganda system, detecting changes in this newspaper allows the researcher to predict changes in China's policies. The construction of the PCI does not require an understanding of the Chinese text, which suggests a wide range of applications in other contexts.
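
To make the detection step concrete, the sketch below trains a front-page classifier on one window of articles and scores its accuracy on a later window, reading a sharp drop as a shift in the outlet's priorities; it is a simplified illustration on toy data, not the authors' implementation, and the function and variable names are invented for the example.

# Illustrative sketch only, not the authors' code: predict whether an article runs
# on the front page from its text, then check how well that classifier explains
# placement in a later window; lower out-of-sample accuracy suggests a shift in
# how the newspaper prioritizes content.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def priority_shift_signal(train_texts, train_frontpage, test_texts, test_frontpage):
    """Out-of-sample accuracy of a front-page classifier trained on an earlier window."""
    vectorizer = TfidfVectorizer(max_features=20000)
    X_train = vectorizer.fit_transform(train_texts)
    X_test = vectorizer.transform(test_texts)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, train_frontpage)          # 1 = front page, 0 = other pages
    return accuracy_score(test_frontpage, clf.predict(X_test))

# Toy usage with made-up snippets; real input would be large batches of articles.
signal = priority_shift_signal(
    ["market reform pilot zones", "steel output quotas",
     "opening to foreign trade", "grain procurement plan"],
    [1, 0, 1, 0],
    ["special economic zones expand", "fertilizer production targets"],
    [1, 0])
print(signal)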


Joel M. Klinger, Juan C. Mateos-Garcia, and Konstantinos M. Stathoulopoulos, Nesta

Deep Learning, Deep Change? Mapping the Development of the Artificial Intelligence General Purpose Technology

General Purpose Technologies (GPTs) that can be applied in many industries are an important driver of economic growth and of national and regional competitiveness. In spite of this, the geography of their development and diffusion has not received significant attention in the literature. Klinger, Mateos-Garcia, and Stathoulopoulos address this with an analysis of Deep Learning (DL), a core technique in Artificial Intelligence (AI) increasingly recognized as the latest GPT. They identify DL papers in a novel dataset from arXiv, a popular preprints website, and use CrunchBase, a technology business directory, to measure related industrial capabilities. After showing that DL conforms with the definition of a GPT, having experienced rapid growth and diffusion into new fields where it has generated an impact, the researchers describe changes in its geography. Their analysis shows China's rise in the AI rankings and a relative decline in several European countries. Klinger, Mateos-Garcia, and Stathoulopoulos also find that initial volatility in the geography of DL has been followed by consolidation, suggesting that the window of opportunity for new entrants might be closing as new DL research hubs become dominant. Finally, the researchers study the regional drivers of DL clustering. They find that competitive DL clusters tend to be based in regions combining research and industrial activities related to the technology. This could be because GPT developers and adopters located close to each other can collaborate and share knowledge more easily, thus overcoming coordination failures in GPT deployment. Their analysis also reveals a Chinese comparative advantage in DL after controlling for other explanatory factors, perhaps underscoring the importance of access to data and supportive policies for the successful development of this complex, 'omni-use' technology.


David Autor, MIT and NBER, and Anna M. Salomons, Utrecht University

New Frontiers: The Evolving Content and Geography of New Work in the 20th Century


James Bessen, Boston University; Maarten Goos, London School of Economics; Anna M. Salomons; and Wiljan van den Berge, CPB Netherlands Bureau for Economic Policy Analysis

Automatic Reaction – What Happens to Workers at Firms that Automate?

Bessen, Goos, Salomons, and van den Berge provide the first estimates of the impacts of automation on individual workers by combining Dutch micro-data with a direct measure of automation expenditures covering firms in all private non-financial industries over 2000-2016. Using a differences-in-differences design exploiting the timing of automation events, the researchers find that automation at the firm increases the probability of workers separating from their employers and decreases days worked, leading to a 5-year cumulative wage income loss of around 11 percent of one year's earnings. These losses are only partially offset by benefits systems and are quite pervasive across worker types, firm sizes, and sectors.
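
The regression behind such estimates can be illustrated on synthetic data as follows; the worker panel, the assumed 10 percent income loss, and the variable names are all invented for the sketch and are not the authors' Dutch micro-data or exact specification.

# Synthetic-data sketch of a differences-in-differences design around an automation event.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_workers, n_years = 2000, 8
df = pd.DataFrame({
    "worker": np.repeat(np.arange(n_workers), n_years),
    "year": np.tile(np.arange(n_years), n_workers),
})
df["treated"] = (df["worker"] % 2 == 0).astype(int)   # employed at an automating firm
df["post"] = (df["year"] >= 4).astype(int)            # years after the automation event
# Assumed data-generating process: treated workers lose roughly 10% of income post-event.
df["log_income"] = (10.0 + 0.10 * df["treated"] + 0.02 * df["year"]
                    - 0.10 * df["treated"] * df["post"]
                    + rng.normal(0, 0.2, len(df)))

# The coefficient on treated:post is the average post-event income change for workers
# at automating firms relative to the comparison group.
model = smf.ols("log_income ~ treated + post + treated:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["worker"]})
print(model.params["treated:post"])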


Mathieu Aubry, École des Ponts ParisTech; Roman Kräussl, University of Luxembourg; Gustavo Manso, University of California, Berkeley; and Christophe Spaenjers, HEC Paris

Machine Learning, Human Experts, and the Valuation of Real Assets

Aubry, Kräussl, Manso, and Spaenjers study the accuracy and usefulness of automated (i.e., machine-generated) valuations for illiquid and heterogeneous real assets. They assemble a database of 1.1 million paintings auctioned between 2008 and 2015. The researchers use a popular machine-learning technique--neural networks--to develop a pricing algorithm based on both non-visual and visual artwork characteristics. Their out-of-sample valuations predict auction prices dramatically better than valuations based on a standard hedonic pricing model. Moreover, they help explain price levels and sale probabilities even after conditioning on auctioneers' pre-sale estimates. Machine learning is particularly helpful for assets that are associated with high price uncertainty. It can also correct human experts' systematic biases in expectations formation--and identify ex ante situations in which such biases are likely to arise.
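
The comparison the researchers run, a hedonic regression against a neural network judged out of sample, can be sketched on synthetic data as follows; the artwork characteristics and price process are assumed for the illustration and are not drawn from the auction database.

# Synthetic-data sketch: compare out-of-sample fit of a linear hedonic model and a
# small neural network; not the authors' 1.1-million-painting database or model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 5000
size = rng.uniform(0.1, 5.0, n)          # canvas area (square meters), assumed feature
artist_rank = rng.integers(1, 100, n)    # assumed popularity rank of the artist
signed = rng.integers(0, 2, n)           # whether the work is signed
# Assumed nonlinear price process that a linear hedonic model can only approximate.
log_price = (8 + 0.5 * np.log(size) - 0.02 * artist_rank + 0.3 * signed
             + 0.4 * np.sin(3 * size) + rng.normal(0, 0.3, n))
X = np.column_stack([size, artist_rank, signed])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_price, random_state=0)

hedonic = LinearRegression().fit(X_tr, y_tr)
network = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                                     random_state=0)).fit(X_tr, y_tr)
print("hedonic out-of-sample R2:", r2_score(y_te, hedonic.predict(X_te)))
print("network out-of-sample R2:", r2_score(y_te, network.predict(X_te)))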


Ajay K. Agrawal; John McHale, National University of Ireland; and Alexander Oettl, Georgia Institute of Technology and NBER

Artificial Intelligence, Scientific Discovery, and Commercial Innovation

Motivated by examples of machine learning use in genomics, drug discovery, and materials science, Agrawal, McHale, and Oettl develop a multi-stage combinatorial model of artificial intelligence (AI)-aided innovation. The innovator can utilize AI as a tool for drawing on existing knowledge in the form of data on past successes and failures to produce a prediction model - or map - of the combinatorial search space. Modeling innovation as a multi-stage search process, the researchers explore how improvements in AI could affect the productivity of the discovery pipeline in combinatorial-type research problems by allowing improved prioritization of the leads that flow through that pipeline. Furthermore, they show how enhanced prediction can increase or decrease the demand for downstream testing, depending on the type of innovation. Finally, Agrawal, McHale, and Oettl examine the role of data generation as an alternative source of spillovers in sustaining economic growth.
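
The prioritization idea can be sketched as prediction-guided search over a toy combinatorial space, as below; the success process, component counts, and model choice are assumptions made for the illustration rather than the authors' model.

# Hedged sketch of prediction-guided search: rank candidate combinations with a model
# trained on past successes and failures, then "test" only the most promising leads.
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n_components = 12
candidates = np.array([[1 if i in combo else 0 for i in range(n_components)]
                       for combo in combinations(range(n_components), 3)])
# Assumed ground truth: a combination succeeds when it contains component 0 together
# with component 1 or component 2.
truth = (candidates[:, 0] & (candidates[:, 1] | candidates[:, 2])).astype(int)

# Past experiments: a random subset of combinations with observed outcomes.
past = rng.choice(len(candidates), size=80, replace=False)
model = RandomForestClassifier(random_state=0).fit(candidates[past], truth[past])

# Prioritize untested leads by predicted success probability and test the top 10.
untested = np.setdiff1d(np.arange(len(candidates)), past)
scores = model.predict_proba(candidates[untested])[:, 1]
top_leads = untested[np.argsort(scores)[::-1][:10]]
print("hit rate among prioritized leads:", truth[top_leads].mean())
print("baseline hit rate among all untested leads:", truth[untested].mean())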


Daniel Rock, MIT

Engineering Value: The Returns to Technological Talent and Investments in Artificial Intelligence

Engineers, as implementers of technology, are highly complementary to the intangible knowledge assets that firms accumulate. Rock seeks to address whether technical talent is a source of rents for corporate employers, both in general and in the specific case of the surprising open-source launch of TensorFlow, a deep learning software package, by Google. First, he presents a simple model of how employers can use job design as a tool to exercise monopsony power by partially allocating employee time to firm-specific tasks. Then, using over 180 million position records and over 52 million skill records from LinkedIn, he builds a panel of firm-level investment in technological human capital (information technology, research, and engineering talent quantities) to measure the market value of technological talent. He finds that, on average, an additional engineer at a firm is correlated with approximately $854,000 more market value. Firm fixed effects and instrumental variables analyses using land-grant colleges and state-level changes in covenant-not-to-compete enforceability eliminate the statistical significance of this positive association, suggesting that engineering talent is correlated with the presence of complementary firm-specific intangible assets. Consistent with that hypothesis, AI-intensive companies rapidly gained market value following the launch of TensorFlow, while companies with opportunities to automate relatively larger quantities of labor with machine learning did not. Using a difference-in-differences approach, Rock shows that the launch of TensorFlow is associated with an approximate increase of $2.7 million in firm market value per unit of Artificial Intelligence skills captured by LinkedIn.
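
A stylized version of that pattern (a large pooled association between engineers and market value that disappears once firm fixed effects absorb unobserved intangibles) is sketched below on synthetic data; the magnitudes and variable names are assumptions, and the paper's LinkedIn panel and instruments are not reproduced.

# Synthetic-data sketch of the pooled versus firm fixed-effects comparison.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_firms, n_years = 500, 10
df = pd.DataFrame({"firm": np.repeat(np.arange(n_firms), n_years),
                   "year": np.tile(np.arange(n_years), n_firms)})
intangible = rng.normal(0, 1, n_firms)                  # unobserved firm-specific asset
df["engineers"] = 50 + 20 * intangible[df["firm"]] + rng.normal(0, 5, len(df))
# Assumed process: market value is driven by the intangible asset rather than by
# engineer headcount itself, so the pooled correlation reflects omitted intangibles.
df["market_value"] = 1000 + 500 * intangible[df["firm"]] + rng.normal(0, 50, len(df))

pooled = sm.OLS(df["market_value"], sm.add_constant(df["engineers"])).fit()

demeaned = df.groupby("firm")[["market_value", "engineers"]].transform(lambda s: s - s.mean())
within = sm.OLS(demeaned["market_value"], demeaned["engineers"]).fit()

print("pooled coefficient:     ", pooled.params["engineers"])   # large and positive
print("within-firm coefficient:", within.params["engineers"])   # collapses toward zero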


Daniel Bjorkegren, Brown University, and Joshua Blumenstock, University of California, Berkeley

Manipulation-Proof Machine Learning

An increasing number of decisions are being guided by machine learning algorithms. In most cases, an individual's historical behavior is used as input to an estimator that determines future decisions. But when an estimator is used to allocate resources, it may cease to be a good estimator: individuals may strategically alter their behavior to achieve a desired outcome. Bjorkegren and Blumenstock develop a new class of estimators that are stable under manipulation, even when the decision rule is fully transparent. They explicitly model the costs of manipulating different behaviors and identify decision rules that are stable in equilibrium. Through a large field experiment in Kenya, they show that decision rules estimated with their strategy-robust method outperform those based on standard supervised learning approaches.
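
One simple way to convey the intuition, though it is not the authors' equilibrium estimator, is a linear rule that shrinks weights on features that are cheap to game; the feature names and manipulation costs below are assumptions for the sketch.

# Minimal sketch of the intuition only; the paper's estimator models agents'
# manipulation costs explicitly and solves for an equilibrium decision rule.
import numpy as np

def manipulation_aware_ridge(X, y, manipulation_cost, lam=1.0):
    """Closed-form ridge with penalty lam / cost_j on feature j, so features that
    are cheap to manipulate (low cost) receive heavily shrunk weights."""
    penalty = np.diag(lam / np.asarray(manipulation_cost, dtype=float))
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

rng = np.random.default_rng(3)
n = 1000
hard_to_fake = rng.normal(size=n)        # e.g., a long repayment history
easy_to_fake = rng.normal(size=n)        # e.g., self-reported behavior
y = 1.0 * hard_to_fake + 1.0 * easy_to_fake + rng.normal(0, 0.5, n)
X = np.column_stack([hard_to_fake, easy_to_fake])

# Assumed costs: the second feature is ten times cheaper for applicants to game.
weights = manipulation_aware_ridge(X, y, manipulation_cost=[10.0, 1.0], lam=1000.0)
print(weights)   # the weight on the easy-to-fake feature is shrunk far more than the other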


Mariano-Florentino Cuéllar, California Supreme Court and Stanford University; Benjamin Larsen, Copenhagen Business School; and Yong Suk Lee and Michael Webb, Stanford University

How Would AI Regulation Change Firms' Behavior? Evidence from Thousands of Managers

Lee, Larsen, Webb, and Cuéllar examine the impacts of different proposed AI regulations on managers’ intentions to adopt AI technologies and on their AI-related business strategies. The researchers conduct a randomized online survey experiment on more than a thousand managers in the U.S. They randomly present managers with different proposed AI regulations and ask them to make decisions about AI adoption, budget allocation, hiring, and other issues. They have four main findings: (1) AI regulation generally reduces the rate of adoption of AI technologies; however, industry- and agency-specific AI regulation has a smaller impact than general AI regulation. (2) Regulation induces firms to plan: they spend more on developing AI strategy and hire more managers, at the cost of hiring technical or lower-skilled workers. (3) The impact of AI regulation on innovation differs by industry and firm size. AI regulation increases intent to file patents in the healthcare and pharmaceutical sectors, but reduces it in the retail sector. Moreover, AI regulation reduces AI adoption in small firms and is more likely to reduce their innovative activity. (4) AI regulation increases firms’ perceptions of the importance of safety and transparency issues related to AI.


Ansgar Walther and Tarun Ramadorai, Imperial College London; Paul Goldsmith-Pinkham, Yale University; and Andreas Fuster, Swiss National Bank

Predictably Unequal? The Effects of Machine Learning on Credit Markets

Innovations in statistical technology, including in predicting creditworthiness, have sparked concerns about differential impacts across categories such as race. Theoretically, distributional consequences from better statistical technology can come from greater flexibility to uncover structural relationships, or from triangulation of otherwise excluded characteristics. Using data on US mortgages, Fuster, Goldsmith-Pinkham, Ramadorai, and Walther predict default using traditional and machine learning models. They find that Black and Hispanic borrowers are disproportionately less likely to gain from the introduction of machine learning. In a simple equilibrium credit market model, machine learning increases disparity in rates between and within groups; these changes are primarily attributable to greater flexibility.
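
A minimal version of that comparison, a traditional logit against a more flexible machine-learning model of default, is sketched below; the borrower features and default process are synthetic assumptions, not the paper's US mortgage data.

# Synthetic-data sketch of comparing a traditional and a machine-learning default model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 20000
ltv = rng.uniform(0.3, 1.0, n)           # loan-to-value ratio
dti = rng.uniform(0.1, 0.6, n)           # debt-to-income ratio
# Assumed nonlinear process: default risk rises sharply only at very high LTV.
logit_index = -4 + 60 * np.maximum(ltv - 0.8, 0) + 3 * dti
default = rng.binomial(1, 1 / (1 + np.exp(-logit_index)))
X = np.column_stack([ltv, dti])
X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)

traditional = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
flexible = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("logit AUC:", roc_auc_score(y_te, traditional.predict_proba(X_te)[:, 1]))
print("ML AUC:   ", roc_auc_score(y_te, flexible.predict_proba(X_te)[:, 1]))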


Seth G. Benzell, Boston University; Laurence J. Kotlikoff, Boston University and NBER; Guillermo LaGarda, Inter-American Development Bank; and Jeffrey D. Sachs, Columbia University and NBER

Robots Are Us: Some Economics of Human Replacement

Will smart machines do to humans what the internal combustion engine did to horses - render them obsolete? If so, can putting people out of work, or at least out of good work, leave them unable to buy what smart machines produce? The model developed by Benzell, Kotlikoff, LaGarda, and Sachs answers yes. Over time and under the right conditions, today's supply reduces tomorrow's demand, leaving everyone worse off in the long run. Carefully crafted redistribution policies can prevent such immiserating growth. But blunt policies, such as limiting intellectual property rights or restricting labor supply, can make matters worse.


Matthew Jackson, Stanford University, and Zafer Kanik, MIT

How Automation that Substitutes for Labor Affects Production Networks, Growth, and Income Inequality

Jackson and Kanik study the impact of technological change on GDP growth, income inequality, and the interconnectedness of the economy. Technological advances in goods that complement labor increase productivity but do not change the interdependencies across sectors or the relative wages between high-skilled and low-skilled labor. In contrast, technological advances that (directly or indirectly) affect goods that substitute for labor (e.g., robots, AI) have impacts that depend on the state of the economy. An improvement in a good that substitutes for labor pushes that labor into other, less productive processes, but wages also adjust and slow that displacement. The resulting growth in overall productivity is attenuated, and income inequality between low- and high-skilled workers grows. The less productive labor's alternative opportunities are, the greater the decrease in wages and the lower the productivity growth that results from the technological improvement. At the same time, the production network becomes denser and interconnectedness grows with automation, changing the centralities of different sectors and enhancing the impact of some future technological changes. Once automation has fully substituted for labor in some process, further technological advances translate directly into productivity gains. Jackson and Kanik's findings imply that (i) the growth effects of recent developments in automation technologies should emerge gradually, and at an initial cost of increased income inequality; (ii) technological advances that displace labor propagate both downstream and upstream via wage changes; and (iii) the reliance on different skill levels of labor in various production processes determines the alternative uses of labor in the economy, and thus the reallocation of labor and the macroeconomic impacts of technological advances.


Marcus Dillender, University of Illinois at Chicago, and Eliza Forsythe, University of Illinois, Urbana-Champaign

White Collar Technological Change: Evidence from Job Posting Data

Dillender and Forsythe investigate the impact of computerization of white collar jobs on wages and employment. Using online job postings from 2007 and 2010-2016 for office and administrative support (OAS) jobs, the researchers show that when firms adopt new software at the job-title-level they increase the skills required of job applicants. Further, firms change the task content of such jobs, broadening them to include tasks associated with higher skill office functions. They aggregate these patterns to the local labor market level, instrumenting for local technology adoption with national measures. Dillender and Forsythe find that a one standard deviation increase in OAS technology usage reduces employment in OAS occupations by about one percentage point and increases wages for college graduates in OAS jobs by over three percent. The researchers find negative wage spillovers, with wages falling for both workers with no college experience and college graduates. These losses are in part driven by high-skill office occupations. These results are consistent with technological adoption inducing a realignment in task assignment across occupations, leading office support occupations to become higher-skill and hence less at risk from further automation. In addition, Dillender and Forsythe find total employment increases with computerization, despite the direct job losses in OAS employment.
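
The instrumenting step, which uses a national adoption measure interacted with a local market's baseline exposure to isolate variation in local technology adoption, can be sketched as follows; the data-generating process and magnitudes are assumptions made for illustration, and the manual two-stage regression omits the standard-error correction a proper IV estimator would apply.

# Synthetic-data sketch of instrumenting local adoption with a national measure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
markets = 800
baseline_share = rng.uniform(0, 1, markets)    # local exposure to OAS-heavy industries
national_adoption = 1.0                        # common national technology shock
local_shock = rng.normal(0, 1, markets)        # local demand shock (confounder)
adoption = (0.8 * baseline_share * national_adoption + 0.5 * local_shock
            + rng.normal(0, 0.3, markets))
oas_employment = -1.0 * adoption + 1.0 * local_shock + rng.normal(0, 0.3, markets)

instrument = baseline_share * national_adoption
# First stage: predicted adoption from the instrument.
first = sm.OLS(adoption, sm.add_constant(instrument)).fit()
adoption_hat = first.fittedvalues
# Second stage: the coefficient on predicted adoption recovers the assumed causal
# effect (-1.0 here), unlike naive OLS, which is biased by the local demand shock.
second = sm.OLS(oas_employment, sm.add_constant(adoption_hat)).fit()
print(second.params[1])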


Edward L. Glaeser and Michael Luca, Harvard University and NBER, and Andrew Hillis, Hyunjin Kim, and Scott Duke Kominers, Harvard University

How Does Compliance Affect the Returns to Algorithms? Evidence from Boston's Restaurant Inspectors


Jill Grennan, Duke University, and Roni Michaely, Cornell Tech

Artificial Intelligence and the Future of Work: Evidence from Analysts

Artificial intelligence (AI) can enhance prediction, a common task for highly skilled workers. Grennan and Michaely examine the implications of this technological change for incumbent workers in the context of security analysts. As evidence of substitution, they find the most talented analysts quit the profession while others shift their coverage toward low-AI stocks. Analysts' access to management gives them a soft-information advantage, and they focus these meetings on low-AI stocks, suggesting some complementarity. The quality of analysts' predictions also changes: analysts covering high-AI stocks exhibit increased bias and forecast errors. Additional tests suggest the change in reporting quality stems from a less talented pool of analysts rather than strategically biased predictions.


Jack A. Clark, OpenAI, and Gillian Hadfield, University of Toronto

Regulatory Markets for AI Safety

Clark and Hadfield propose a new model for regulation to achieve AI safety: global regulatory markets. They first sketch the model in general terms and provide an overview of the costs and benefits of this approach. They then demonstrate how the model might work in practice: responding to the risk of adversarial attacks on AI models employed in commercial drones.


Bo Cowgill and Fabrizio Dell'Acqua, Columbia University

Biased Programmers? Or Biased Data? A Field Experiment about Algorithmic Bias

Why does "algorithmic bias'' occur? What programming practices mitigate it? Cowgill and Dell'Acqua analyze a field experiment on a diverse group of 260+ AI practitioners. In their experiment, machine learning programmers develop algorithms to predict math ability for a representative sample of over 10K OECD residents. The engineers are exposed to three randomly-assigned interventions: 1) Reminders about algorithmic bias, 2) Technical advice for reducing algorithmic bias, and 3) better (and more unbiased) training data. In preliminary results, the researchers quantify the effects of these interventions. Using the code and predictions produced by the engineers, they compare the effectiveness of each intervention at reducing algorithmic bias. Cowgill and Dell'Acqua also compare their interventions to shifts in hiring policies that would alter the backgrounds of engineers.


Prasanna Tambe and Lorin Hitt, University of Pennsylvania; Erik Brynjolfsson, MIT and NBER; and Daniel Rock, MIT

IT, AI and the Growth of Intangible Capital

Investments in the skills and practices that accompany IT and, most recently, AI investment, may account for a significant fraction of value in technology-intensive firms. However, the stock of this "IT intangible capital" (ITIC) as well as how the accumulation of this capital has contributed to economic growth has remained elusive, even though it is generally a precursor to deriving productive benefits from technology investments such as investments in AI. To study this phenomenon, Tambe, Hitt, Brynjolfsson, and Rock a) use a new extended thirty-year firm-level panel on IT labor along with Hall's Quantity Revelation Theorem to trace the development of ITIC over the last thirty years, and then b) examine implications for the current wave of AI investment. Their estimates suggest that ITIC formed about 25% of firms' assets by 2016 and that most of this capital is owned by larger firms. For the recent wave of AI investment, a striking result is that high market values are associated with AI before they are associated with increased revenues, suggesting that investors anticipate significant future returns to AI related intangible assets that are otherwise unmeasured. While AI measures are strongly correlated with market value, the researchers find little evidence that AI is already driving revenue or productivity.


Susan Athey, Stanford University and NBER; Juan Camilo Castillo, Stanford University; and Bharat Chandar, Stanford University

Service Quality in the Gig Economy: Empirical Evidence about Driving Quality at Uber

The rise of marketplaces for goods and services has led to changes in the mechanisms used to ensure high quality. Athey, Castillo, and Chandar analyze this phenomenon in the Uber market, where the system of pre-screening that prevailed in the taxi industry has been diminished in favor of (automated) quality measurement, reviews, and incentives. This shift allows greater flexibility in the workforce but its net effect on quality is unclear. Using telematics data as an objective quality outcome, the researchers show that UberX drivers provide better quality than UberTaxi drivers, controlling for all observables of the ride. They then explore whether this difference is driven by incentives, nudges, and information. Athey, Castillo, and Chandar show that riders’ preferences shape driving behavior. They also find that drivers respond to both user preferences and nudges, such as notifications when ratings fall below a threshold. Finally, the researchers show that informing drivers about their past behavior increases quality, especially for low-performing drivers.


Adair Morse, University of California, Berkeley and NBER, and Robert P. Bartlett III, Richard Stanton, and Nancy Wallace, University of California, Berkeley

Consumer-Lending Discrimination in the FinTech Era

Discrimination in lending can occur either in face-to-face decisions or in algorithmic scoring. Bartlett, Morse, Stanton, and Wallace provide a workable interpretation of the courts' legitimate-business-necessity defense of statistical discrimination. They then estimate the extent of racial/ethnic discrimination in the largest consumer-lending market using an identification strategy afforded by the pricing of mortgage credit risk by Fannie Mae and Freddie Mac. The researchers find that lenders charge Latinx/African-American borrowers 7.9 and 3.6 basis points more for purchase and refinance mortgages, respectively, costing them $765 million in aggregate per year in extra interest. FinTech algorithms also discriminate, but 40% less than face-to-face lenders. These results are consistent with both FinTech and non-FinTech lenders extracting monopoly rents in weaker competitive environments or profiling borrowers on low shopping behavior. Such strategic pricing is not illegal per se, but under the law it cannot result in discrimination. The lower level of price discrimination by algorithms suggests that removing face-to-face interactions can reduce discrimination. Further silver linings emerge in the FinTech era: (1) Discrimination is declining; algorithmic lending may have increased competition or encouraged more shopping with the ease of platform applications. (2) Bartlett, Morse, Stanton, and Wallace find that 0.74-1.3 million minority applications were rejected between 2009 and 2015 due to discrimination; however, FinTechs do not discriminate in loan approval.
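
The kind of pricing regression behind a figure like the 7.9-basis-point gap can be illustrated on synthetic loans as below; the risk controls are assumptions for the sketch, and the paper's headline gap is used only as the built-in "true" effect to be recovered, not as an estimate from these data.

# Illustrative regression on synthetic loans, not the Fannie Mae / Freddie Mac data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 50000
df = pd.DataFrame({
    "minority": rng.binomial(1, 0.2, n),
    "fico": rng.normal(720, 40, n),
    "ltv": rng.uniform(0.5, 0.95, n),
})
# Assumed pricing: rates depend on credit-risk variables plus a small unexplained
# gap for minority borrowers (in basis points), the quantity the paper estimates.
df["rate_bps"] = (400 - 0.5 * (df["fico"] - 700) + 60 * df["ltv"]
                  + 7.9 * df["minority"] + rng.normal(0, 20, n))

fit = smf.ols("rate_bps ~ minority + fico + ltv", data=df).fit()
print(fit.params["minority"])   # recovers the assumed ~7.9 bps gap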


Benjamin R. Handel and Jonathan T. Kolstad, University of California, Berkeley and NBER; and Jonathan Gruber, MIT and NBER

Managing Intelligence: Skilled Experts and AI in Markets for Complex Products