# Lecture 2: Endogenous Growth

In this lecture we're going to go over some of the foundational theories of endogenous technological growth. These emerged in earnest starting in the early 1990s with and , as well as , which provides an interesting synthesis of the two. A little later on, provides an insightful critique of this strand of literature, in addition to a bit of perspective.

## Aggregate Framework

How might we begin to think about technological growth in an economy? In the simplest representation, we can imagine there is an aggregate production function for ideas, and that this concept of ideas maps directly into what we've been discussing as total factor productivity ($A$). The rate of change of the stock of ideas is then a function of the current stock of ideas and the amount of effort put into producing new ideas through, for example, research. We'll think of this effort in terms of labor, but you could also imagine capital playing an important role.

Thus we have $\dot{A} = G(A,R)$ for some function $G$. This representation is a bit abstract, so we'll assume a fairly flexible functional form

$$ \dot{A} = \delta R A^{\phi} $$

Some find this to be a depressing result, in the sense that policies, such as a research subsidy, have no chance of affecting long-term growth rates. That doesn't mean they can't have substantive effects, though. Policies which put "upward pressure" on the incentives for research will increase growth rates in the short term, but this will dissipate into level effects over time.

To see this, consider the growth rate of $g_A$ itself. I know that seems weird, but bear with me. With $\dot{A} = \delta R A^{\phi}$ and research labor $R = sL$, where $s$ is the share of the labor force in research, we have $g_A = \dot{A}/A = \delta s L A^{\phi-1}$. Supposing that $s$ is constant over time and the population grows at rate $n$, taking logs and differentiating, we have

$$ \frac{\dot{g}_A}{g_A} = n - (1-\phi) g_A $$

so for $\phi < 1$ the growth rate converges along a balanced growth path to $g_A = \frac{n}{1-\phi}$.
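To make the level-versus-growth distinction concrete, here is a minimal simulation, assuming a functional form $\dot{A} = \delta (sL) A^{\phi}$ with population growth at rate $n$ (all parameter values are purely illustrative):

```python
# Simulate A_dot = delta * (s*L) * A**phi with population growth L_dot = n*L.
# With phi < 1, the growth rate g_A converges to n/(1 - phi), regardless of
# the values of delta and s. All parameter values here are illustrative.
delta, phi, s, n = 1.0, 0.5, 0.1, 0.02
A, L, dt = 1.0, 1.0, 0.01
for _ in range(500_000):  # 5000 years of model time
    A += delta * (s * L) * A**phi * dt
    L += n * L * dt
g_A = delta * (s * L) * A**(phi - 1)
print(g_A)  # approaches n/(1 - phi) = 0.04
```

Raising $s$ (a research subsidy, say) boosts $g_A$ on impact, but the simulation returns to the same long-run rate: the subsidy produces a level effect, not a growth effect.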

The only remaining case is the one where $\phi = 1$. In this setting, our expression for the growth rate becomes simply

$$ g_A = \delta s L $$

The growth rate now scales directly with the size of the population, so a larger economy should grow permanently faster. Empirically, however, such **scale effects** are at the very least not the major drivers of growth. See for a detailed discussion of these issues.
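To see the scale effect numerically, here is a tiny sketch assuming the $\phi = 1$ form $g_A = \delta s L$ (illustrative parameter values):

```python
# With phi = 1 the growth rate is g_A = delta * s * L: it scales one-for-one
# with the size of the labor force, so a tenfold larger economy would grow
# tenfold faster. Parameter values are illustrative.
delta, s = 0.001, 0.1
for L in (1e2, 1e3, 1e4):
    print(L, delta * s * L)  # growth rate rises linearly with scale
```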

Nonetheless, we'll be making this assumption ($\phi = 1$ and $n = 0$) for much of the remainder of the course, largely for the sake of analytic simplicity. There's no major obstacle to undoing this assumption in most models that we'll study, and in many it may not have major welfare or policy implications anyway.

## Microeconomic Foundations

The major breakthrough associated with was that it gave us a way to think about the incentives for individual actors to undertake innovation, in a manner that then maps into an aggregate picture. In this framework, which we refer to as the **expanding variety** setting, innovators conjure up new ideas and are granted exclusive rights to employ them in production.

At this point, they can either undertake production themselves, if they have the means, or sell the idea to a producer. Each new idea yields a new and distinct type of product. All of these intermediate products are then aggregated into a single final product which is sold to consumers. Let intermediate goods be denoted by $y_j$ for $j \in [0,A]$, and consider an aggregate production function of the form

$$ Y = \left[ \int_0^A y_j^{\frac{\varepsilon-1}{\varepsilon}} \, dj \right]^{\frac{\varepsilon}{\varepsilon-1}} $$

where $\varepsilon > 1$ is the elasticity of substitution between products. Each intermediate good is produced by a single monopolist, and because the setup is symmetric, producers are *ex ante* identical. The resulting profit accrued to each is

$$ \pi = \frac{Y}{\varepsilon A} $$

since each product line earns revenue $Y/A$, and the CES markup leaves a profit share of $1/\varepsilon$.
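As a sanity check on the markup logic behind this profit expression, the following sketch solves the monopolist's pricing problem numerically under isoelastic demand (the demand shifter $D$ and all parameter values are illustrative):

```python
# With isoelastic demand y(p) = p**(-eps) * D and marginal cost c, the optimal
# price is the markup eps/(eps - 1) times c, and profit is a 1/eps share of
# revenue -- the source of pi = Y/(eps*A) under symmetry. Values illustrative.
eps, c, D = 3.0, 1.0, 1.0
profit = lambda p: (p - c) * p**(-eps) * D
p_star = max((1.0 + i * 1e-4 for i in range(20_000)), key=profit)
revenue = p_star * p_star**(-eps) * D
print(p_star)                    # eps/(eps - 1) * c = 1.5
print(profit(p_star) / revenue)  # 1/eps = 0.333...
```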

At this point, we've completely solved the equilibrium for the static production side. The most important quantity here is the profit of the intermediate good producers, since this determines the incentives for the creation of new products. The present value of owning a product line is given by

$$ V(t) = \int_t^{\infty} e^{-r(s-t)} \pi(s) \, ds $$

From the above, we can see that $V$ and $\pi$ should grow at the same rate, which one can show equals $(2-\varepsilon)g$, where $g$ is the growth rate of output $Y$. Thus we have

$$ V = \frac{\pi}{r - (2-\varepsilon)g} $$
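We can verify this present-value formula numerically by discounting a profit flow that grows at rate $(2-\varepsilon)g$ (parameter values illustrative; convergence requires $r > (2-\varepsilon)g$):

```python
import math

# Left Riemann sum of integral_0^inf e^{-r*t} * pi0 * e^{(2-eps)*g*t} dt,
# which should approach pi0 / (r - (2 - eps)*g). Values are illustrative.
r, g, eps, pi0 = 0.05, 0.02, 1.5, 1.0
g_pi = (2 - eps) * g                   # profit growth rate, here 0.01
dt = 0.01
V = sum(math.exp(-(r - g_pi) * k * dt) * pi0 * dt for k in range(200_000))
print(V)  # close to pi0 / (r - g_pi) = 25.0
```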

### Optimality

This model has some interesting optimality properties. Consider the problem of the social planner. In the most general setting, one must choose the levels of production for each product line $x_j$, as well as the split between goods-production labor $P$ and research labor $R$. However, it is easy to show that any optimal choice still features $x_j = x$ for all $j$, so we need only make the choice between production and research. Given a constant growth rate, the path of output will be $Y(t) = Y(0) \exp(gt)$. With log utility ($\theta = 1$) and discount rate $\rho$, this leads to welfare of

$$ U = \int_0^{\infty} e^{-\rho t} \log(Y(t)) \, dt = \frac{\log(Y(0))}{\rho} + \frac{g}{\rho^2} $$

## Quality Ladders

Next we'll discuss a related class of models called **quality ladder** models. These have a similar product market structure, but a slightly different source of growth. Instead of generating growth through the invention of new product lines, growth comes from improvements to an existing fixed set of product lines. When an innovator $f$ comes up with a new idea, a randomly chosen product line $j$ sees an improvement in productivity, meaning

$$ q_{jf} = \lambda q_j $$

where $\lambda > 1$ is the **step size**. At any given time, let the lead producer in a product line have productivity $q_j = \max_f \left\{q_{jf}\right\}$.

Suppose that we have the same product market setup as in the previous section, but fix the mass of products to $A = 1$ and let the elasticity of substitution be $\varepsilon = 1$. This results in the logarithmic form

$$ \log(Y) = \int_0^1 \log(y_j) \, dj $$

We will assume that competing intermediate producers engage in Bertrand competition. The marginal cost of the lead producer is $w/q_j$, while that of the second-best producer is $\lambda w/q_j$. Thus the lead producer will set his price equal to the next-best producer's marginal cost, meaning $p_j = \lambda w/q_j$. Given the unit-elastic demand implied by the logarithmic aggregator, which equalizes expenditure across products ($p_j y_j = Y$), this results in a production level of

$$ y_j = \frac{Y}{p_j} = \frac{q_j Y}{\lambda w} $$
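The limit-pricing logic can be checked in a few lines: under unit-elastic demand, the leader's profit is strictly increasing in its price, so Bertrand competition caps the price at the follower's marginal cost. A sketch with illustrative values, writing $c = w/q_j$ for the leader's cost:

```python
# Under unit-elastic demand y = Y/p, the leader's flow profit (p - c)*Y/p
# = Y*(1 - c/p) is strictly increasing in p. The leader therefore charges the
# highest price the second-best producer cannot undercut: p = lam*c.
Y, c, lam = 1.0, 1.0, 1.3
prices = [c + i * 1e-3 for i in range(1, 1000)]
profits = [(p - c) * Y / p for p in prices]
assert all(a < b for a, b in zip(profits, profits[1:]))  # monotone in p
p_star = lam * c                           # Bertrand limit price
print(p_star, (p_star - c) * Y / p_star)   # profit flow is (1 - 1/lam)*Y
```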

Now let's consider the value of acquiring a new product line. Let the aggregate rate of innovation be $\tau$. In this case, starting from the discrete time approximation, we'll have

$$ V_j(t) = \pi_j(t) \, \Delta t + e^{-r \Delta t} \left[ (1 - \tau \Delta t) \, V_j(t + \Delta t) + \tau \Delta t \cdot 0 \right] $$

since with probability $\tau \Delta t$ the line is captured by a new innovator and its value to the incumbent drops to zero. Taking $\Delta t \to 0$ yields the flow equation $r V_j = \pi_j + \dot{V}_j - \tau V_j$.
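A quick Monte Carlo check of this value, under the simplifying assumption that the profit flow is constant until the line is lost (all values illustrative): the expected discounted profit up to a replacement arriving at Poisson rate $\tau$ is $\pi/(r+\tau)$.

```python
import math, random

# A line pays flow profit pi until displaced at Poisson rate tau. Simulating
# E[ integral_0^T e^{-r*t} * pi dt ] with T ~ Exp(tau) recovers pi/(r + tau).
random.seed(0)
r, tau, pi = 0.05, 0.30, 1.0
n_sims = 200_000
draws = (pi * (1 - math.exp(-r * random.expovariate(tau))) / r
         for _ in range(n_sims))
V_mc = sum(draws) / n_sims
print(V_mc)  # close to pi / (r + tau) = 2.857...
```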

What will the aggregate growth rate in the economy be? It is a function of the step size $\lambda$ and the innovation rate $\tau$. Because product lines are targeted randomly and there is a unit mass of them, the arrival rate of innovations to any given product line is also $\tau$. Total output is proportional to the quality index $Q$ defined by $\log(Q) = \int_0^1 \log(q_j) \, dj$, and each innovation raises $\log(q_j)$ by $\log(\lambda)$, so in expectation

$$ g = \tau \log(\lambda) $$
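This growth rate is easy to confirm by simulation: give each of a large number of product lines Poisson innovation arrivals and track the average of $\log(q_j)$ (the parameter values below are illustrative):

```python
import math, random

# Each of J product lines receives innovations at Poisson rate tau over a
# horizon T; each innovation multiplies quality by lam. The average growth
# of log(q) across lines should be close to g = tau * log(lam).
random.seed(0)
tau, lam, J, T = 0.5, 1.2, 20_000, 10.0
total_log_q = 0.0
for _ in range(J):
    t = random.expovariate(tau)
    while t < T:                  # innovations arriving before the horizon
        total_log_q += math.log(lam)
        t += random.expovariate(tau)
g_hat = total_log_q / J / T       # realized average growth rate
print(g_hat)  # close to tau * log(lam) = 0.0912...
```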

### Optimality

We will now briefly discuss the optimality properties of this model. Using the welfare expression from , again with $\theta = 1$, and the conditions

Now the question is: what forces are shaping the incentives to innovate, and how do they differ from the considerations of the social planner? There are two distinct distortions at work here. First is the **consumer surplus effect**. Because each innovation builds upon the previous one, these increments last forever, yet the firm only enjoys the profits from them for a short period. This results in insufficient incentives for innovation. Second is the **business stealing effect**. When a firm improves the technology in a product line by 10%, it captures 110% of the original revenues, not just the 10% increment, so part of its reward comes from displacing the incumbent rather than from creating new value.

In the end, the innovation rate is too high for small values of $\lambda$, while it is too low for large values. When $\lambda$ is small, it's clear that the rewards for innovation are far too large compared to the productivity increment, so business stealing dominates. When $\lambda$ is large, the productivity improvement lasts forever, but the innovating firm enjoys the associated profits only over the short interval before it is itself displaced, so the consumer surplus effect dominates.

### Correspondence

For relatively small values of $\lambda$, we have $\log(\lambda) \approx \lambda - 1$. In this case, if we consider the expanding variety model with $\varepsilon = \frac{\lambda}{\lambda-1}$, the predictions are identical, both in terms of the equilibrium and optimal values for the growth rate and the research labor allocation. Essentially, when $\varepsilon$ is high, products are highly substitutable, which is analogous to a low innovation step size environment. So the differences between these two classes of models may in the end be more a matter of interpretation than of observational differences. See for an interesting discussion of this notion.
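A quick numerical look at how tight the approximation is, and the elasticity each step size maps to (step sizes chosen purely for illustration):

```python
import math

# Compare log(lam) with its small-step approximation lam - 1, and compute the
# corresponding expanding-variety elasticity eps = lam/(lam - 1).
for lam in (1.01, 1.05, 1.20):
    print(lam, math.log(lam), lam - 1, lam / (lam - 1))
```

For $\lambda = 1.05$, $\log(\lambda) \approx 0.0488$ versus $\lambda - 1 = 0.05$, and the implied elasticity is $\varepsilon = 21$; the gap between $\log(\lambda)$ and $\lambda - 1$ widens as $\lambda$ grows.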