Past, Present, and Future of Modern Finance (JPM Series)


November 2024
Read Time: 15 min
Key Points
  • Revolutionary change in finance, as in other disciplines, takes more than just new data. Rather, it requires a comprehensive restructuring of the prevailing frameworks of scientific knowledge and inquiry.

  • Modern portfolio theory (MPT) and other pillars of neoclassical finance are not always backed by empirical data. This only reinforces their transformative nature: Profits can be found in the gaps between theory and the real world.

  • In quantitative finance, a data-first or a theory-first approach may each have its place. However, a Bayesian approach, which blends data and theory, is more likely to uncover lasting insights.

  • Behavioral finance and neoclassical finance differ on such fundamental concepts as whether markets are efficient. But that doesn’t mean one perspective should be discarded in favor of the other. Both can inform our understanding of the markets.

This is part of a series of articles adapted from my contribution to the 50th Anniversary Special Edition of The Journal of Portfolio Management.

Introduction

In his seminal work, The Structure of Scientific Revolutions (1962), the historian and philosopher of science Thomas S. Kuhn coined the expression “paradigm shift” to describe the path of scientific progress. In Kuhn’s conception, science is not a linear accumulation of knowledge but a series of revolutionary changes in the basic concepts of leading scientific thinkers. Scientific thought progresses through periods of "normal science," when it evolves within an existing framework of consensus (a paradigm). Accumulating inconsistencies in the prevailing paradigm then trigger a crisis, leading to the emergence of new theories and ideas, resulting in a paradigm shift where the old framework is rapidly replaced by a new one. In finance, this pattern has recurred time and again.


Kuhn argues that these revolutions are not just episodes of cognitive change but are also sociologically driven processes, as the acceptance of new paradigms often requires a shift in the commitments and practices of the scientific community. Such revolutionary change takes more than new data; it involves a complete overhaul of the conceptual structure underlying scientific observation and understanding.

Kuhn’s work anticipated the evolutionary biologists Stephen Jay Gould and Niles Eldredge’s concept of “punctuated equilibrium.” Gould and Eldredge (1972) suggest that evolution often occurs in bursts of rapid change (punctuations) separated by long periods of relative stability (equilibrium).1 Punctuated equilibrium propels advances in many fields of science, including our own modest corner of the “dismal science” of economics: the world of finance.

Initially, we cannot know which ideas are good and which are bad. An idea that proves its merit in the crucible of aggressive criticism is eventually embraced. Innovative concepts are challenged, then accepted as fact, and in time become received wisdom, even dogma. Some of these concepts turn out to be myths, which are themselves challenged and overturned, demonstrating the punctuated equilibrium of science.


The Origins of Modern Portfolio Theory

Theories developed in the 1950s and 1960s formed the foundation of our current understanding of financial markets. Harry Markowitz introduced modern portfolio theory (MPT) with his study of portfolio selection, mean-variance optimization, and the efficient frontier in 1952, further refining it in 1956. In the 1960s, building on Markowitz’s work, several innovators (Jack Treynor, William Sharpe, John Lintner, and Jan Mossin) developed the capital asset pricing model (CAPM), which posits that, in equilibrium, expected security returns must be a linear function of market beta.2
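
In its standard textbook form, the CAPM’s prediction can be written as a single linear equation (notation mine, not drawn from the original article):

E(Ri) = Rf + βi × [E(Rm) − Rf]

where Rf is the risk-free rate, E(Rm) is the expected return on the market portfolio, and βi measures security i’s sensitivity to market movements. Everything other than market beta is assumed to be diversifiable and therefore unpriced.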

Eugene Fama introduced the efficient market hypothesis (EMH) in 1970 and expanded on the thesis in his excellent 1976 book. In the 1970s and 1980s, we learned that the single equity market beta prediction of the CAPM was, at best, incomplete. In 1976, Stephen Ross proposed the Arbitrage Pricing Theory (APT), an asset pricing model in which multiple factors influence the returns of individual securities. In the 1980s, Nai-Fu Chen, Richard Roll, and Ross published convincing evidence that multiple factors do indeed determine security returns and helped set the stage for factor-based strategies.

A field of study is not science unless it produces falsifiable theories. Ironically, the mere fact that empirical data does not always support MPT, EMH, and the CAPM reinforces their revolutionary nature and their relevance as the scientific foundations of modern finance. Contradictory data can highlight gaps between theory and the real-world behavior of capital markets. For example, the transition from CAPM to APT, then to various anomalies and factors at odds with both, illustrates the punctuated equilibrium of scientific progress in modern finance.


Until they are arbitraged away, such gaps can also be important sources of profit for investors. Fama (1976) shows that any test of the EMH is really a joint test of the EMH and the particular asset pricing model used to test for efficiency. With Fama and French’s Three-Factor Model (1992, 1993), the EMH earned a new lease on life—albeit with a new twist: Some investors prefer to earn higher returns than the market by owning unloved smaller-cap or lower-priced value stocks.
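
In its usual form, the three-factor model augments the market factor with size and value factors (notation mine):

E(Ri) − Rf = bi × [E(Rm) − Rf] + si × E(SMB) + hi × E(HML)

where SMB (small minus big) and HML (high minus low book-to-market) are the returns to long-short portfolios capturing the size and value premia, and bi, si, and hi are security i’s loadings on each factor.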

Behavioral Finance Emerges

In the 1990s, Richard Thaler applied and extended Daniel Kahneman and Amos Tversky’s behavioral economics research to question not just a single pricing model but also the rational decision-making assumptions of EMH and the CAPM. Thaler and others have published compelling research suggesting that human decision-making is a far more complex process than Sharpe’s single-factor model tacitly assumes.

Efficient markets are not a fact; they are a hypothesis, an attractive model of the way the world ought to work. To save EMH from obsolescence, researchers now postulate a risk premium that varies across time, asset classes, and even individual assets. This raises a question: What’s the difference between an inefficient market and a market in which the risk premium varies across time and from one asset to the next?

While not even Fama suggests markets are 100% efficient, academia largely embraced EMH as a sound approximation of the real world through at least the 1990s. But this is much less the case today. The “noise in price” model, with fair value following a random walk and prices equal to fair value plus or minus a mean-reverting error, better reflects reality and explains a host of anomalies. This does not mean that inefficiencies are stable. Once identified, they should be arbitraged away.
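
One minimal way to write down the “noise in price” idea described above (my sketch, not a formal specification from the article): let the log price be

p(t) = v(t) + e(t), with v(t) = v(t−1) + η(t) and e(t) = ρ × e(t−1) + u(t), where 0 ≤ ρ < 1,

where v(t) is unobserved fair value following a random walk, and e(t) is a pricing error that decays toward zero. Prices wander with fair value but are repeatedly pulled back toward it, and that mean reversion can account for a host of anomalies.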


And so, the academic debate about market efficiency remains unresolved. For each new pricing anomaly and behavioral deviation from mean-variance optimization, a more complex pricing model emerges that accounts for different investor preferences that may vary over time. Jeremy Siegel (2006) has likened this process to the pre-Copernican “epicycles” invoked to reconcile observed planetary movements with the accepted geocentric model, even though the simpler heliocentric model works better. Isn’t “noise in price” simpler and more powerful than EMH with factor epicycles?

The Evolution of Finance Is Ongoing

Since these theoretical foundations of finance and investing were laid down half a century ago, we have seen the emergence of the quant community—the revenge of the nerds—from an odd fringe into a dominant force in asset management. We have seen passive management soar despite accusations of “investment socialism,” with passive investors free-riding on the price-discovery process that is the central purpose of the capital markets. Along the way, many new ideas have been accepted as fact, then challenged, and in a few cases eventually abandoned. We have seen smart beta and factor investing enthusiastically embraced, then questioned, then cautiously reconsidered. Indeed, the plumbing of finance has changed amid a more serious focus on investing as a science.

Of late, we have seen a dramatic emergence of new tools and big data. There are fewer analysts, while the quality and quantity of information have vastly increased. Artificial intelligence (AI) is the revolution du jour. AI is not new; it has been around for decades, though its capabilities continue to grow exponentially. Moore’s law is alive and well! Indeed, AI has refined the algorithms used in high-frequency trading (HFT) for many years. User-friendly AI is new. That’s the breakthrough. In the coming decades, AI will change our lives in more ways than we can possibly imagine. But, as with the internet, computers, automobiles, trains, the telegraph, and other revolutionary technologies, AI will transform our world more than we expect, but more slowly than we expect.

Each new breakthrough brings new insights, some brilliant and some flawed, with some debunking or amending seductive myths and dogma in due course. A brief review of “scientific method” demonstrates how such myths can come and go.

Scientific Method: Data-First vs. Theory-First vs. Bayesian Approaches

The scientific method has roots going back to Aristotle and beyond, but it is neither widely understood nor widely used in finance today. Indeed, I would argue that it is not widely used in the hard sciences, where confirmation bias still dominates. It begins with a hypothesis, a belief about the way the world ought to work. We then use data to dispassionately test our ideas, not merely to prove ourselves right, but to learn. A damning critique in the hard sciences is that a hypothesis is “unfalsifiable,” that it cannot be proven wrong. Accordingly, as we test our ideas, a secondary goal is to falsify—or at least find the flaws in—our own hypothesis before others do. To state the obvious, using a backtest to improve our backtest is the antithesis of the scientific method, even if it is all too common in the quant community.

Even within quantitative finance, three distinct practices have fought for preeminence, what I call the data-first, theory-first, and Bayesian methods.

Data-first has been the method of choice in the factor community, which itself developed from a decades-long exploration of capital market “anomalies.” What better way to earn tenure than to scour vast quantities of data to uncover a previously unknown anomaly or factor? Why search for flaws in our hypothesis if our goal is tenure? Tarun Chordia, Amit Goyal, and Alessio Saretto (2020) construct 2 million random factors using the CRSP database. The best-performing factor has a t-statistic of 9.01 for its CAPM alpha.

Chordia et al. are not trying to find a fantastic new factor but to illustrate how data mining can lead us astray. Among the best factors of the 2 million is (CSHO-CSHPRI)/MRC4. What the heck is that?

(Common Shares Outstanding – Common Shares Used to Calculate EPS) divided by (Rental Commitments, Four Years Hence)

Of course, no sensible investor would rely on something this peculiar, no matter the statistical significance. Running millions of tests does not protect us; it is precisely what leads us off course. Data-first means data mining. Relentless data mining is NOT scientific method. Using a backtest to improve the backtest gives us a great backtest, not a good product.
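
A minimal simulation sketch makes the point concrete. It is illustrative only and is not the Chordia, Goyal, and Saretto procedure; the sample length, volatility, and number of candidate factors are arbitrary assumptions:

import numpy as np

# Illustrative sketch: generate many random "factors" with no true alpha and
# record the best t-statistic found. The more candidates we test, the more
# impressive the best backtest looks, purely by chance.
rng = np.random.default_rng(seed=42)

n_months = 600          # 50 years of monthly returns (hypothetical)
n_candidates = 100_000  # far fewer than the paper's 2 million

best_t = 0.0
for _ in range(n_candidates):
    # Random long-short factor returns: zero true mean, 2% monthly volatility
    returns = rng.normal(loc=0.0, scale=0.02, size=n_months)
    t_stat = returns.mean() / (returns.std(ddof=1) / np.sqrt(n_months))
    best_t = max(best_t, abs(t_stat))

print(f"Best |t| among {n_candidates:,} useless factors: {best_t:.2f}")

Even though every candidate is pure noise, the best of them comfortably clears the conventional significance threshold of roughly two, which is exactly why an eye-popping in-sample t-statistic, by itself, proves very little.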

Data-first has its place. AI applications for developing HFT algorithms, for example, with billions of data samples, don’t need a prior hypothesis. In applications with thousands or even millions of data samples, however, data-first is self-evidently dangerous. Most research in finance and economics—whether for factors, asset allocation, or anomalies—relies on daily, monthly, or quarterly data. For most such research (with the possible exception of tick data), there isn’t enough data to safely rely on a data-first approach.


The theory-first method dominated the early stages of modern finance and still has many adherents in the academic finance community. Theory-first disregards data and assumes that when the data does not support the theory, the data—not the theory—is simply wrong or otherwise driven by anomalous outliers. The market is efficient, never mind evidence of market inefficiency. Expected returns correlate to beta and little else, never mind extensive evidence to the contrary. The broader economic community suffers from similar myopia. Fiscal and monetary stimulus promotes growth, never mind any data to the contrary. Theory-first is seductive because the ideas make so much intuitive sense. As with data-first, theory-first has its place, both as a foundation for Bayesian priors and in arenas where data is lacking.

Unless data samples are either vast or more or less nonexistent, a Bayesian approach is more likely to lead to lasting insights than theory-first or data-first. A Bayesian will blend data and theory, giving neither preeminence.3 Both depend upon the other. A theory is developed with care to identify validating empirical tests and then tested against the data. The data is not used to develop the theory.
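
A stylized example of what that blending looks like (my illustration, not a formula from the article): suppose theory supplies a prior belief that a strategy’s true mean return is μ0 with uncertainty τ², and the data deliver a sample mean x̄ over n periods with return variance σ². Standard Bayesian updating gives

posterior estimate = w × x̄ + (1 − w) × μ0, where w = (n/σ²) / (n/σ² + 1/τ²).

Neither input dominates: more data or a vaguer prior pushes the weight w toward the sample evidence, while a sharper, better-grounded prior pulls the estimate back toward the theory.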

The Next Evolution in Finance?

As this analysis demonstrates, while our understanding of the nature of markets has evolved considerably since Markowitz and company established the key pillars of modern finance, the many debates in the discipline are far from settled.

Both the academic and practitioner communities in our industry are perhaps too complacent, and too invested in maintaining the current equilibrium or paradigm. Too many people say, “Assuming this, then we can decide that.” Too few are willing to question those basic assumptions. As fiduciaries, we owe it to our clients to be less accepting of received wisdom (which is too often dogma) and more willing to explore the implications of errors in the root assumptions of finance theory. These basic assumptions often fail when they are tested. Flawed assumptions are not bad; they are our best source of learning. We can learn more and earn more by exploring the many gaps between theory, received wisdom, and reality.


If neoclassical finance assumes markets are efficient, while behavioral finance assumes the opposite, do we discard the less convenient theory? Isn’t it better to recognize elements of truth in seemingly incompatible theories? Economics is not physics. Neoclassical finance and behavioral finance both have important insights. By recognizing this possibility, we not only gain a richer understanding of the markets, but we may also help catalyze finance’s next paradigm shift and advance our little corner of the dismal science to the next stage in its evolution.


End Notes

1. Gould and Eldredge’s model better explained certain patterns observed in the fossil record where bursts of evolutionary change were followed by eons of comparative stasis. Species appear suddenly, persist largely unchanged for millions of years, and then disappear without leaving much transitional evidence. “Punctuated equilibrium” supplanted gradualism, the previously accepted norm. This is no less true in the evolution of ideas than it is in biological evolution.

2. Sharpe won Nobel recognition for his contribution (and rightly so!). Lintner, Mossin, and Treynor did not, albeit for different reasons. Lintner and Mossin died in the 1980s, and the Nobel Prize is never awarded posthumously. I was lucky enough to be at a conversation between Sharpe and Treynor during the 2000s at the Q Group, in which Sharpe asked Treynor when he did the CAPM work. Treynor said that he had written two papers in 1961 and 1962 and submitted them to a handful of journals. They were rejected outright (not “revise-and-resubmit”). As he wasn’t an academic, per se, Treynor thought that was the end of it and let it go. Sharpe expressed heartfelt empathy. The Treynor papers were widely circulated in industry circles (but not academic circles at that time) and are available on SSRN.

3. The only time I saw Harry Markowitz seriously lose his temper was on this topic. A leading academic had written a paper critical of a theory that Markowitz had found compelling, by constructing a hypothetical scenario in which the theory might not work. I remember Markowitz shouting into the phone, “Your hypothetical bears no resemblance to the real world. I’m a Bayesian. You’re clearly not a Bayesian.” For Markowitz, this was a damning critique!

References

Chen, N. F., R. Roll, and S. A. Ross. 1986. “Economic Forces and the Stock Market.” The Journal of Business 59 (3): 383–403.

Chordia, T., A. Goyal, and A. Saretto. 2020. “Anomalies and False Rejections.” The Review of Financial Studies 33 (5): 2134–2179.

Fama, E. F. 1976. Foundations of Finance. New York, NY: Basic Books.

Fama, E. F., and K. R. French. 1992. “The Cross-Section of Expected Stock Returns.” Journal of Finance 47 (2): 427–465.

Fama, E. F., and K. R. French. 1993. “Common Risk Factors in the Returns on Stocks and Bonds.” Journal of Financial Economics 33 (1): 3–56.

Eldredge, N., and S. J. Gould. 1972. “Punctuated Equilibria: An Alternative to Phyletic Gradualism.” In Models in Paleobiology, edited by T. J. M. Schopf, 82–115. San Francisco, CA: Freeman, Cooper & Co.

Kahneman, D., and A. Tversky. 1996. “On the Reality of Cognitive Illusions: A Reply to Gigerenzer’s Critique.” Psychological Review 103: 582–591.

Kuhn, T. S. 1962. The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press.

Lintner, J. 1965. “The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets.” The Review of Economics and Statistics 47 (1): 13–37.

Markowitz, H. M. 1952. “Portfolio Selection.” Journal of Finance 7 (1): 77–91.

Markowitz, H. M. 1956. “The Optimization of a Quadratic Function Subject to Linear Constraints.” Naval Research Logistics Quarterly 3 (1–2): 111–133.

Mossin, J. 1966. “Equilibrium in a Capital Asset Market.” Econometrica 34 (4): 768–783.

Ross, S. A. 1976. “The Arbitrage Theory of Capital Asset Pricing.” Journal of Economic Theory 13 (3): 341–360.

Siegel, J. J. 2006. “The ‘Noisy Market’ Hypothesis.” Wall Street Journal, June 14.

Thaler, R. H. 1999. “Mental Accounting Matters.” Journal of Behavioral Decision Making 12 (3): 183–206.

Thaler, R. H. 1999. “The End of Behavioral Finance.” Financial Analysts Journal 56 (6): 12–17.

Thaler, R. H. 2000. “From Homo Economicus to Homo Sapiens.” Journal of Economic Perspectives 14: 133–141.

Treynor, J. L. 1961. “Market Value, Time, and Risk.” Unpublished.

Treynor, J. L. 1962. “Toward a Theory of Market Value of Risky Assets.” Unpublished.

Treynor, J. L. 2005. “Why Market-Valuation-Indifferent Indexing Works.” Financial Analysts Journal 61 (5): 65–69.

Tversky, A., and D. Kahneman. 1991. “Loss Aversion in Riskless Choice: A Reference-Dependent Model.” Quarterly Journal of Economics 106: 1039–1061.

Tversky, A., and D. Kahneman. 1992. “Advances in Prospect Theory: Cumulative Representation of Uncertainty.” Journal of Risk and Uncertainty 5: 297–323.