Hi! I'm an economist at the University of Pittsburgh studying knowledge, innovation, and economic growth. These days, I'm quite interested in Wikipedia and other such open platforms. I'm also working on a number of open-source software projects.
This paper introduces a general equilibrium model of endogenous technical change through basic and applied research. Basic research differs from applied research in the nature and the magnitude of the generated spillovers. We propose a novel way of empirically identifying these spillovers and embed them in a framework with private firms and a public research sector. After characterizing the equilibrium, we estimate our model using micro-level data on research expenditures by French firms. Our key finding is that standard innovation policies (e.g., uniform R&D tax credits) can accentuate the dynamic misallocation in the economy by oversubsidizing applied research. Policies geared towards public basic research and its transmission to the private sector are significantly welfare improving.
We assess the rate of replication for empirical papers in the 2010 American Economic Review. Across 70 empirical papers, we find that 29 percent have one or more citations that partially replicate the original result. While only a minority of papers have a published replication, a majority (60 percent) have either a replication, a robustness test, or an extension. Surveying authors within the literature, we find substantial uncertainty over the number of extant replications.
We develop an endogenous growth model where clean and dirty technologies compete in production. Research can be directed to either technology. If dirty technologies are more advanced, the transition to clean technology can be difficult. Carbon taxes and research subsidies may encourage production and innovation in clean technologies, though the transition will typically be slow. We estimate the model using microdata from the US energy sector. We then characterize the optimal policy path which heavily relies on both subsidies and taxes. Finally we evaluate various alternative policies. Relying only on carbon taxes or delaying intervention have significant welfare costs.
Economic analysis of effective policies for managing epidemics requires an integrated economic and epidemiological approach. We develop and estimate a spatial, micro-founded model of the joint evolution of economic variables and the spread of an epidemic. We empirically discipline the model using new U.S. county-level data on health, mobility, employment outcomes, and non-pharmaceutical interventions (NPIs) at a daily frequency. Absent policy or medical interventions, the model predicts an initial period of exponential growth in new cases, followed by a protracted period of roughly constant case levels and reduced economic activity. If vaccine development proves impossible and suppression cannot entirely eradicate the disease, a utilitarian policymaker cannot significantly improve over the laissez-faire equilibrium by using lockdowns. Conversely, if a vaccine will arrive within two years, NPIs can improve upon the laissez-faire outcome by dramatically decreasing the number of infectious agents and keeping infections low until the vaccine arrives. Mitigation measures that reduce viral transmission (e.g., mask-wearing) both reduce the virus's spread and increase economic activity.
How do patents relate to product innovation? To study this question, we construct a new patent-to-product dataset combining patent data with detailed product- and firm-level data for the consumer goods sector. Using textual analysis of patent documents together with product descriptions, we link specific patents to finely defined product categories within firms and time periods. Our findings indicate that a substantial amount of product innovation comes from firms that have never patented. Nevertheless, for patenting firms, standard patent-based metrics of innovation are correlated with product innovation, defined on the basis of both the quantity and quality of new products. We find that market leaders use patents differently than followers. In particular, patents of large firms have a weaker association with the quality and quantity of product innovations. At the same time, consistent with the notion that patents are used to limit competition, we find that patents of larger firms are associated with higher future revenues even after accounting for the introduction of new products associated with those patents. Motivated by these empirical patterns, we develop a theoretical framework and use it to decompose the value of a patent. We show that the private value of a patent increases as firms become market leaders. This increase is mostly driven by an increasing value derived from protective patenting as opposed to productive patenting.
As the largest encyclopedia in the world, it is not surprising that Wikipedia reflects the state of scientific knowledge. However, Wikipedia is also one of the most accessed websites in the world, including by scientists, which suggests that it also has the potential to shape science. This paper shows that it does.
Incorporating ideas into Wikipedia leads to those ideas being used more in the scientific literature. We provide correlational evidence of this across thousands of Wikipedia articles and causal evidence of it through a randomized controlled trial in which we add new scientific content to Wikipedia. We find that the causal impact is strong, with Wikipedia influencing roughly one in every 830 words in related scientific journal articles. We also find causal evidence that the scientific articles referenced in Wikipedia receive more citations, suggesting that Wikipedia complements the traditional journal system by pointing researchers to key underlying scientific articles.
Our findings speak not only to the influence of Wikipedia, but more broadly to the influence of repositories of scientific knowledge and the role that they play in the creation of scientific knowledge.
We study the optimal design of R&D policies and corporate taxation when the outputs of innovation are not appropriable in the absence of intellectual property rights policies and there are non-internalized technology spillovers across firms. Firms are heterogeneous in their research productivity, i.e., in the efficiency with which they convert a given set of R&D inputs into successful innovations. There is asymmetric information about firm productivity and about its stochastic evolution over time that prevents the first best solution to the technology spillover. The problem is thus posed as one of dynamic mechanism design with externalities. We characterize the optimal constrained efficient allocations over firms' life cycles and for firms of different productivities. We show that the constrained efficient allocations can be implemented either by a patent system plus a price subsidy for the monopolists' products, together with a parsimonious R&D subsidy function or, equivalently, by a prize mechanism. We estimate our model using firm-level data matched to patent data and quantify the optimal policies. Simpler innovation policies, such as linear R&D subsidies and linear profit taxes, lead to large revenue losses relative to the optimal mechanism.
There is substantial heterogeneity across industries in the level of interdependence between new and old technologies. I propose a measure of this interdependence—an index of sequentiality in innovation—which is the transfer rate of patents in a particular industry. I find that highly sequential industries have higher profitability, higher variance of firm growth, lower exit rates, and lower rates of patent expiry. To better understand these trends, I construct a model of firm dynamics where the productivity of firms evolves endogenously through innovations. New innovators either replace existing technologies or must purchase the rights to existing technologies from incumbents in order to produce, depending on the level of sequentiality in the industry. Estimating the model using data on US firms and recent data on US patent transfers, I can account for a large fraction of the cross-industry trends described above. Because innovation results in larger monopoly distortions in more sequential industries, there is an overinvestment of research inputs into these industries. This misallocation, which amounts to 2.5% in consumption equivalent terms, can be partially remedied using a patent policy featuring weaker protection in more sequential industries, producing welfare gains of 1.7%.
First off, check out my GitHub page! There you can find code relating to the above papers, the projects listed below, and other side projects.
Stuck in LaTeX/PDF hell? There may be a way out. Nowadays, academics are relying less and less on the printed page. At the same time, there have been major advances in the speed and functionality of web technology, particularly in the mobile space. Check out the live demo above!
I aim to produce a framework that can supplant LaTeX as the major tool for the promulgation of academic research. The benefits of such a framework will come both in the form of ease of use (for both the producer and consumer) and in an increased ability to integrate with existing web technologies.
Due to a preternaturally poor memory, I am an avid note taker. There are many, many note-taking apps out there, but not all focus on the other side of the equation: note-getting. Fuzzy is optimized for rapidly inputting and searching through notes, which comprise a title, a body of text, and a list of tags. The interface is web-based, but can be operated using only a keyboard.
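The note model described above can be sketched in a few lines. This is a minimal illustration using case-insensitive substring matching over title, body, and tags; the actual search and ranking in Fuzzy may well be more sophisticated.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """A note as described above: a title, a body of text, and a list of tags."""
    title: str
    body: str
    tags: list = field(default_factory=list)

def search(notes, query):
    """Return notes whose title, body, or any tag contains the query (case-insensitive)."""
    q = query.lower()
    return [n for n in notes
            if q in n.title.lower()
            or q in n.body.lower()
            or any(q in t.lower() for t in n.tags)]
```

A query like `search(notes, "econ")` would match notes tagged "econ" as well as notes mentioning "economics" in their text.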
This is a fully-featured USPTO patent data parser written in Python. It can handle applications, grants, assignments, and maintenance events in all formats. Additionally, it can cluster firm names into groups that are sufficiently similar. The clustering algorithm first filters potential matches using locality-sensitive hashing, then generates clusters from the connected components of a graph induced by a given Levenshtein distance threshold.
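The filter-then-cluster pipeline can be sketched roughly as follows. This is a simplified stand-in, not the parser's actual implementation: a cheap character-shingle overlap check plays the role of locality-sensitive hashing, and the distance and overlap thresholds are illustrative.

```python
from itertools import combinations

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def shingles(s, k=3):
    """Set of lowercase character k-grams (cheap stand-in for an LSH signature)."""
    s = s.lower()
    return {s[i:i + k] for i in range(max(len(s) - k + 1, 1))}

def cluster_names(names, max_dist=2, min_jaccard=0.3):
    """Cluster names: filter candidate pairs by shingle overlap, link pairs within
    the edit-distance threshold, and return connected components (via union-find)."""
    parent = list(range(len(names)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    sh = [shingles(n) for n in names]
    for i, j in combinations(range(len(names)), 2):
        union_size = len(sh[i] | sh[j])
        if union_size and len(sh[i] & sh[j]) / union_size < min_jaccard:
            continue  # filtered out before the expensive distance computation
        if levenshtein(names[i].lower(), names[j].lower()) <= max_dist:
            parent[find(i)] = find(j)  # merge the two components
    clusters = {}
    for i, name in enumerate(names):
        clusters.setdefault(find(i), []).append(name)
    return list(clusters.values())
```

The point of the filtering step is that pairwise Levenshtein distance over all name pairs is quadratic and expensive; a hashing-based prefilter discards most non-matches cheaply.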
This is a GNOME Shell plugin that provides a miniature preview of a chosen window, kind of like picture-in-picture on TVs. Great for watching movies while you're working, or at least pretending to.
Below is a service that allows you to look at the cumulative relative editing activity for a large number of tokens (about 1.1M) appearing in Wikipedia. You can type single words of your own into the box below, separated by commas, and see the results by pressing enter. You can also download the results in CSV form by clicking the button.
If you'd like to download the full dataset of tokens, just send me an email and I can arrange it. Even finer data is available at the article editing level.
In this project, I'm attempting to develop a framework for easy-to-use, real-time web plotting. The primary use case is for dashboards that display information on running calculations, but others are of course possible. The backend uses WebSockets and Tornado to interface with Matplotlib-style plotting commands. The frontend uses d3.js.
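A WebSocket dashboard like this typically pushes each plot update to the browser as a small JSON message. The sketch below shows one plausible shape for such a message; the field names are hypothetical, not the project's actual wire format.

```python
import json

def plot_message(x, y, series="loss", action="append"):
    """Serialize one plot update for delivery over a WebSocket channel.
    The schema here (action/series/x/y) is illustrative only."""
    return json.dumps({"action": action, "series": series, "x": x, "y": y})

# The backend would send this string to connected clients; a d3.js frontend
# would parse it and append the point to the named series.
msg = plot_message(3, 0.42)
```

Keeping messages small and incremental is what makes real-time updating cheap: the frontend redraws only the new point rather than re-requesting the whole figure.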
This project is quite old. The goal was to investigate the evolution of cellular automata entities in a competitive setting. Below is the progress of a randomly generated trial run:
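For flavor, here is the core update step of a cellular automaton in its simplest form: a one-dimensional elementary automaton with wrap-around edges. The competitive, evolving entities in the project above are more elaborate, so this is only a minimal illustration of the mechanics.

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton (periodic boundary).
    `rule` is the Wolfram rule number; bit k of `rule` gives the next state
    for the 3-cell neighborhood whose binary encoding is k."""
    n = len(cells)
    table = [(rule >> k) & 1 for k in range(8)]
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]
```

Iterating `step` from a single live cell produces the familiar triangular patterns; Rule 110 is a standard choice since it supports rich, long-lived structures.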
Below are lecture notes for some of the classes that I've taught over the years, both at the graduate and undergraduate level.
External website: doughanley.com/grad_macro
External website: doughanley.com/grad_comp
1 — Preferences and Utility [PDF]
2 — The Walrasian Model and Consumer Choice [PDF]
3 — Consumer Demand [PDF]
4 — Equilibrium and Efficiency [PDF]
5 — Equilibrium with Production [PDF]
6 — Firms and Production [PDF]
7 — Monopoly and Oligopoly [PDF]
8 — Intertemporal Choice and Uncertainty [PDF]
9 — Risk Sharing and Public Goods [PDF]
Advances in computing are critical for expanding the set of models that we can feasibly investigate quantitatively. GPUs are highly parallelized processing units, originally designed for use in video games, but increasingly finding their way into high-performance computing. I gave a lecture in 2011 at Penn detailing how economists might use these in their research. The field is of course evolving rapidly on both the hardware and software fronts, so the lecture may not reflect the most recent developments.