Innovation in the Cross-section
Programmable blockchains allow for rapid experimentation across a large cross-section of applications
In Spring 2022 I was on a panel at an event hosted by VanEck on the periphery of the BTC Miami conference. The other panelists and I were asked about the relatively poor performance of an ETF consisting of a basket of DeFi tokens. I made the point that because the barriers to entry are so low, experimentation can happen very quickly. But this means a larger fraction of projects will not survive the natural-selection-like process that weeds out weak projects over time. The failure rate will be higher, so we shouldn't necessarily expect a broad basket of DeFi tokens to perform well.
I’d like to dig into the mechanism behind this idea, understand the tradeoffs, and identify the assumptions needed for it to apply. Broadly, if we think of each project as an experiment with some probability of being a successful innovation, then the probability that at least one project succeeds over any fixed time period is higher when a larger number of trials takes place. But while low barriers to entry make it easier to instantiate an experiment, and hence more trials take place, under reasonable assumptions that same ease implies “sloppier” attempts, corresponding to lower probabilities of success per project. In what follows I offer a simple model to formalize this tension, provide more context, and characterize barriers to entry that are optimal from a crude innovation-based macroprudential standpoint.
Low Barriers to Entry
Barriers to entry are relatively low for deploying applications on programmable blockchains like Ethereum. Novel applications can be built with only a few thousand lines of code and require expertise that can be captured by as few as a single developer. For example, the smart contract suite for Uniswap, which recently surpassed Coinbase in terms of monthly trading volume, is about 2,000 lines of Solidity. The frontend is another few thousand lines of JavaScript and TypeScript. A single skilled developer can build the entire application. In contrast, Twitter’s server and database code runs to tens of millions of lines, and the company employs thousands of software developers.1
Moreover, these applications can be deployed to a liquid network with existing users at low cost. For example, the cost of deploying Uniswap on Ethereum is in the ballpark of $1,000, depending on gas costs and the Ethereum price. On Avalanche it can be less than $100. Ethereum already has hundreds of millions of dollars of active liquidity. User acquisition can happen organically and quickly, because the heavy lifting of onboarding existing Ethereum users to a new app is handled by wallet applications like Metamask. Most successful projects market through social media like Twitter and chat apps like Discord and Telegram, reducing the marginal cost of acquiring new users to near zero. In contrast, estimated user acquisition costs for Robinhood were as high as $50 in the 2019-2022 period.
Barriers to entry have been falling along a secular trajectory for many years. In web2 and software development in general, innovation happens more quickly than in manufacturing, which, after the industrial revolution, innovated more quickly than agriculture, and so on. If we break the chronology down to a finer grain, we will certainly see that progress has not been perfectly linear. But there has never been a point where a handful of innovators with so few resources could design and implement applications that could be used by so many so quickly.
High Noise-to-Signal
There are costs associated with low barriers to entry. One potential cost is a decrease in the average quality of entrants. There is certainly anecdotal evidence of a tradeoff like this. The rate at which dot-com-era tech companies failed exceeds the rate at which railroad companies failed. The rate at which crypto tokens crash is much higher still. Because the barriers to entry are relatively low for developing and deploying applications in web3, innovation can happen more quickly than in other settings. But the average application may be lower quality.
Whether lower costs of entry give rise to the same projects being attempted at lower quality, all else equal, is an empirical question. But in addition to being consistent with anecdotal evidence, it is certainly plausible behavior. If the initial investment correlates with a measure of total effort at the organization level, then lower initial investment can correspond to less total effort and higher rates of mistakes, and hence lower success rates.
A Simple Model
We want to find a “sweet spot”: a level of barriers to entry where the benefits of more innovators entering exactly offset the costs of higher failure rates. The role of the model in the following sections is to formalize the analysis of this tradeoff. Readers who want to skip the math (I kept it light) can jump to the Punchlines section.
We’ll start with a simple case that shows convergence to success is faster with a larger cross-section, and then we’ll add costs to increasing the cross-section. Assume all projects $y$ begin as noisy implementations of a true innovation $x$, which we can wlog set to zero,2

$$y = x + \varepsilon = \varepsilon, \qquad \mathbb{E}[\varepsilon] = 0, \quad \operatorname{Var}(\varepsilon) = \sigma^2.$$
A project is a successful innovation if the realization is close enough to the true innovation,

$$|y| \le \delta,$$

for some tolerance $\delta > 0$.
A project outcome is realized one period in the future. Write the probability of success for a single project in a single period as

$$p = \Pr\bigl(|y| \le \delta\bigr).$$
Then the probability of at least one success within $k$ periods is3

$$1 - (1 - p)^k.$$
Now suppose in each period, $N$ experiments can take place. Assuming $N$ is at least two, the probability of at least one success over $k$ periods is, unsurprisingly, higher:

$$1 - (1 - p)^{Nk} \;\ge\; 1 - (1 - p)^k.$$
Although it is clear the probability of a successful innovation is higher per time period when more experiments take place, the model is trivial. We've merely added more zero-cost independent trials per period.
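As a sanity check on these closed forms, here is a minimal Monte Carlo sketch; the values of p, k, and N below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_at_least_one(p, N, k, trials=200_000):
    """Monte Carlo estimate of the probability that at least one of
    N independent experiments per period succeeds within k periods."""
    # Total independent attempts over k periods is N * k.
    successes = rng.binomial(N * k, p, size=trials)
    return (successes > 0).mean()

p, k = 0.01, 20
for N in (1, 10, 100):
    sim = prob_at_least_one(p, N, k)
    closed = 1 - (1 - p) ** (N * k)
    print(f"N={N:>3}: simulated {sim:.3f}, closed form {closed:.3f}")
```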
Tradeoffs and Objectives
To capture the costs of lowering barriers to entry, suppose that the probability of success is an increasing function of the barriers $B$,

$$p = p(B), \qquad p'(B) > 0,$$

subject to the obvious boundary conditions for probabilities.
Probabilities that are increasing as a function of barriers are plausible in a setting where agents choose effort to reduce the variance of the investment outcome. In the above probability model, the more effort, the higher the probability of success, because the noise reduction results in a higher probability of falling within the success band $[-\delta, \delta]$.
So, if the initial investment is an increasing function of effort, lower barriers to entry can correspond to lower-effort, higher-noise entrants.
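As a concrete illustration (the functional form $\sigma(B) = \sigma_0 / B$ is an assumption for this sketch): suppose the noise is uniform on $[-\sigma(B), \sigma(B)]$ and the effort purchased by the up-front investment shrinks it. Then

$$p(B) = \Pr\bigl(|y| \le \delta\bigr) = \min\!\left(1,\; \frac{\delta}{\sigma(B)}\right) = \min\!\left(1,\; \frac{\delta B}{\sigma_0}\right),$$

which is increasing in $B$ and linear below the cap, one way to motivate the success probability used in the functional forms below.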
Suppose in addition that the number of entrants is a decreasing function of the barriers to entry,

$$N = N(B), \qquad N'(B) < 0.$$
When the barriers decrease, the number of entrants increases, at least over a defensible range of entry costs.
The increase in participation is plausible in a setting where there is a fixed minimum cost to enter and a wealth distribution that places more weight on lower wealth levels. Suppose that agents with wealth below the barrier cannot access the venture markets and instead invest in a savings account that returns no interest. Then as soon as the barrier reaches the wealth level of a particular agent, the agent will invest in any positive-EV project.
To keep everything uniform, we can assume the only viable investment amount is exactly the barrier to entry, so any surplus goes to the savings account. Then effort is determined completely by the barrier. We can think of this as an upper bound on the impact of changing barriers on cumulative success probabilities.
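To connect entry to the wealth distribution explicitly: if there are $N_{\text{tot}}$ agents and wealth has CDF $F$, the entrants at barrier $B$ are the agents who can afford it,

$$N(B) = N_{\text{tot}}\bigl(1 - F(B)\bigr),$$

which is decreasing in $B$. An exponential wealth distribution, $F(B) = 1 - e^{-\lambda B}$, is one illustrative choice, and it delivers exactly the exponential entrants function used below.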
Implications
Now, the $k$-period success probabilities are

$$P_k(B) = 1 - \bigl(1 - p(B)\bigr)^{N(B)\,k}.$$
We can use these aggregate success probabilities as a macroprudential indicator of the value of a barrier. This is reasonable if we think there is a social benefit to realized successful innovations. Then the aggregate probabilities are a measure of the probability that society receives the benefit.
Since the number of projects and the individual probabilities of success move opposite each other as a function of B, the calculus can go either way. It’s not always better to add more trials. It’s not always better to improve the average quality.
We can quantify this tradeoff using simple functional forms. An exponential entrants function,

$$N(B) = N_0\, e^{-\lambda B},$$

on a positive, finite range for $B$, and a uniform probability of success as a function of the minimum up-front investment $B$,

$$p(B) = \frac{B}{\bar{B}}, \qquad B \in (0, \bar{B}],$$

are enough to generate an interior solution.
Plugging the functional forms into the cumulative success probabilities, taking first order conditions, and shuffling some things around, we can see that a higher barrier to entry can result in a smaller number of entrants but higher cumulative success probabilities if

$$\frac{N(B)\,p'(B)}{1 - p(B)} \;>\; N'(B)\,\ln\bigl(1 - p(B)\bigr),$$

where both sides are positive because $N'(B) < 0$ and $\ln(1 - p(B)) < 0$.
In words, if the marginal benefit of increasing individual success probabilities offsets the rate at which the number of entrants decreases, raising barriers is a net benefit.
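To see the interior optimum numerically, here is a minimal sketch under the assumed functional forms; N0, lam, B_bar, and k are illustrative choices, not calibrated values:

```python
import numpy as np

# Assumed functional forms from the text:
# entrants N(B) = N0 * exp(-lam * B), success probability p(B) = B / B_bar.
N0, lam, B_bar, k = 6.0, 5.0, 1.0, 5

B = np.linspace(1e-3, B_bar - 1e-3, 100_000)
p = B / B_bar
N = N0 * np.exp(-lam * B)

# Cumulative k-period success probability P_k(B) = 1 - (1 - p)^(N * k).
P_k = 1 - (1 - p) ** (N * k)

# The interior optimum is where the slope of P_k crosses from + to -;
# the second optimum sits in the limit B -> B_bar, where p(B) -> 1.
dP = np.gradient(P_k, B)
cross = np.where((dP[:-1] > 0) & (dP[1:] <= 0))[0]
i_star = cross[0]
print(f"interior optimal barrier B* ~ {B[i_star]:.3f}")
print(f"P_k at B*: {P_k[i_star]:.3f}, P_k at the grid's right edge: {P_k[-1]:.3f}")
```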
Below I plot the entrance function, the single-period and cumulative success probabilities, and the corresponding first-order condition for the k-period success probabilities.
There are two optima: the dashed green line, and one corresponding to the limit on the right. The optimal barriers trade off the benefit of adding more trials against the cost of reducing average quality. The green line is the interior optimum. Left of the green line, adding more projects reduces the cumulative success rates. To the right of the green line, improving the average quality reduces the cumulative success rates until the impact from the individual probabilities takes over.
The two optima are important conceptually, but the lower optimum is more interesting as an achievable interior point and policy benchmark. The upper optimum corresponds to the limit where the individual success probabilities approach one. I would rule this out as attainable in practice, but it does generate interesting local behavior. If one were evaluating a barriers policy at a sufficiently high barrier, one might think increasing barriers is better, which is true on the margin, but one would be moving away from the globally optimal, much lower barrier.
Returns on a Basket
Despite the prospects for the winning technologies in DeFi, the process of assembling constituents for a DeFi ETF will involve sorting through more low-probability experiments than in similar tech booms with higher barriers to entry, all else equal.
Assume the value of a successful project is $Y$. We can write the return on an equally-weighted basket of $N$ projects, each requiring the up-front investment $B$, as

$$R = \frac{1}{N}\sum_{i=1}^{N} \frac{Y\,\mathbf{1}_i - B}{B},$$

where $\mathbf{1}_i$ indicates that project $i$ succeeds.
Expected returns simplify nicely:

$$\mathbb{E}[R] = \frac{p(B)\,Y}{B} - 1.$$
For a fixed payoff $Y$, the expected return on a basket of ex-ante identical projects depends only on the success probability and the initial cost.
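A small simulation sketch of the basket return, again with purely illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative values: each project costs B and pays Y with probability p.
B, Y, p, N = 0.1, 1.0, 0.08, 50
trials = 200_000

# Equally weighted basket: total payoff over total cost, minus one.
successes = rng.binomial(N, p, size=trials)
basket_returns = (successes * Y) / (N * B) - 1

print(f"simulated E[R]:        {basket_returns.mean():.3f}")
print(f"closed form p*Y/B - 1: {p * Y / B - 1:.3f}")
```

With these numbers the expected return is negative, the kind of parameter region no one would enter in equilibrium.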
The plot of expected returns suggests that in equilibrium, no one would enter to the left of the red line: it is never individually optimal. It also implies that even risk-averse founders with more wealth than $B$ may invest more than the minimum, so in practice we could see heterogeneity in project quality driven by the wealth distribution. The green line shows expected returns at the socially optimal barrier. Expected returns on an equally weighted basket increase with individual success probabilities, so the highest expected return corresponds to the upper barrier.
Punchlines
Making assumptions about the qualitative properties of the entrance rate and the success probabilities as functions of barriers to entry, we can solve for an optimal barrier that maximizes the cumulative k-period success probabilities. At the optimal barrier, lowering barriers decreases the cumulative probability of success because the average project quality falls at a rate faster than new entrants arrive. Conversely, raising the barrier above the optimum decreases entrants at a faster rate than project quality increases, again lowering the cumulative success probabilities.
The success probabilities are a reasonable macroprudential indicator if we think there is a social benefit to realized successful innovations. Hence it makes sense to think about policies impacting barriers to entry through the lens of this tradeoff. Expected returns on an equally weighted basket of projects are lower near the socially optimal barrier than above it.
The assumptions required for this type of behavior to emerge are plausible. If investments require a level of effort, and more effort reduces the noise in the outcome of the project, then increasing barriers requires more effort, which improves project quality. On the other hand, if the number of eligible participants increases when barriers decrease because there are more agents with less wealth in the population, more projects will be undertaken when barriers are low. The contributions of these forces to the cumulative probabilities are perfectly traded off at the optimal barrier.
Other Factors and Equilibrium Forces
Composability and the extent to which existing applications play a role in the success of future applications could either increase or decrease the speed of convergence. If applications become dependent on existing applications that haven’t succeeded yet, progress could slow. On the other hand, if composability increases the signal to noise ratio when new applications are built on top of successful ones, progress could accelerate. Similarly, network effects can improve innovation rates and may be higher when a larger number of projects enter. The path dependence of these mechanisms is interesting.
There are also market-wide factors that play a role in the absolute performance of a basket of DeFi tokens. But relative to, say, the stock market, the basket of DeFi tokens in question had still underperformed. If we allow the success probabilities to contain an aggregate component, the mechanism described above can explain this discrepancy independently of aggregate market conditions.
In practice these projects would be heterogeneous and have some correlation profile, which would imply an optimal allocation different from equal weighting. Such a portfolio would have a higher Sharpe ratio in equilibrium. Similarly, investing more than the minimum can create a distribution of project quality not captured here that may be more plausible empirically. Thinking about correlations and heterogeneity will be important for distributing risk in something like a basket ETF. The model discussed above corresponds to a case where all projects are independent and ex-ante identical.
A specific phenomenon that is at play but not captured by the model is that in practice, lowering barriers gives access to a larger number of exceptional innovators, which can tilt the scales toward a lower B. For example, we could assume that each entrant has a probability u of being exceptional, meaning the success probabilities or the payoffs are high relative to the population. Then, the amount by which new exceptional innovators add to cumulative probabilities can offset the rate at which the average project quality decreases and result in a lower optimal barrier.
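One way to formalize this, as a sketch: let each entrant be exceptional with probability $u$, with a success probability $p_e$ that does not depend on the barrier, so the average per-project success probability becomes

$$\bar{p}(B) = u\,p_e + (1 - u)\,p(B).$$

Because the exceptional component does not decay as $B$ falls while the entrant count $N(B)$ still rises, the cumulative success probabilities can keep improving as barriers drop, pushing the optimal barrier lower.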
The cumulative success probabilities might not be the right benchmark. The benefits of an innovation might not accrue to non-investors in practice. Other measures of welfare like allocative efficiency become important in equilibrium models. Policy considerations like market access, specialization, and the organization of the venture sector can have implications for efficient outcomes in equilibrium that offset the effects highlighted above. Nonetheless, using the cumulative success probabilities as the benchmark provides some surprisingly clean insight into the tradeoffs in question.
Red Flags, Heterogeneity and Picking Winners
An implication of our discussion is that the lower the barriers to entry, the higher the frequency of failed projects, and hence the more valuable the ability to pick winners. Equally weighted, randomly sampled baskets do worse when barriers are lower. In DeFi, there are some indicators we can look for to spot the types of experiments that are unlikely to succeed.
short or nonexistent early-stage investor lockups
anything that relies too much on token reward incentives to boost its initial TVL4
meme tokens
governance only tokens
here is a controversial one: anything that is not upgradeable enough and hence not adaptive enough5
fully anon teams
Final Thoughts
Successful innovations are more prolific when a large cross-section is at play. Participation from a larger cross-section improves welfare if the rate at which the average quality decreases is small enough. Even when a large cross-section of lower-quality projects is socially optimal, a uniform basket can perform poorly relative to more selective benchmarks. Signals for picking winners may be particularly valuable in markets with low barriers to entry.
1. Part of the reason for this is that the nodes do most of the heavy lifting: Geth, a popular Ethereum client, is about one million lines of code. This common infrastructure is a fundamental driver of the network effect on Ethereum because it substantially reduces the complexity of the marginal application.
2. Although most founders and developers do not think of themselves as experimenting, it is a useful abstraction to think of each of these projects as an experiment.
3. The k-probabilities are defined over the space of k-sequences of outcomes. In one of these sequences, the experiment fails every period. The k-period success probabilities as defined measure all of the remaining sequences, including some with many successes.
4. Token reward incentives are still valuable as tools to distribute ownership across a community and to create initial interest.
5. While smart contract immutability is important in certain contexts, the ability to stay at the forefront of a very fast-moving technology is a more general principle. Moreover, smart contract immutability matters more for anon teams, another red flag.