A Case for Agent-Based Models in Economics
In a recent essay in Nature, Doyne Farmer and Duncan Foley have made a strong case for the use of agent-based models in economics. These are computational models in which a large number of interacting agents (individuals, households, firms, and regulators, for example) are endowed with behavioral rules that map environmental cues onto actions. Such models are capable of generating complex dynamics even with simple behavioral rules because the interaction structure can give rise to emergent properties that could not possibly be deduced by examining the rules themselves. As such, they are capable of providing microfoundations for macroeconomics in a manner that is both more plausible and more authentic than is the case with highly aggregative representative-agent models.
Among the most famous (and spectacular) agent-based models is John Conway's Game of Life (if you've never seen a simulation of this you really must). In economics, the earliest such models were developed by Thomas Schelling in the 1960s, and included his celebrated checkerboard model of residential segregation. But with the exception of a few individuals (some of whom are mentioned below) there has been limited interest among economists in the further development of such approaches.
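To get a feel for how simple rules generate emergent structure, here is a minimal sketch (in Python) of a Schelling-style checkerboard dynamic. The grid size, tolerance threshold, and relocation rule below are illustrative choices of mine, not Schelling's original specification:

```python
import random

SIZE, TOLERANCE, STEPS = 20, 0.3, 50  # illustrative parameters, not Schelling's

# Populate a SIZE x SIZE grid with two agent types and some vacant cells (None).
cells = ['A'] * 160 + ['B'] * 160 + [None] * (SIZE * SIZE - 320)
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if fewer than TOLERANCE of its occupied
    neighbors (wrapping at the edges, for simplicity) share its type."""
    me = grid[r][c]
    occupied = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and grid[(r + dr) % SIZE][(c + dc) % SIZE] is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < TOLERANCE

for _ in range(STEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(r, c)]
    vacant = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(movers)
    for r, c in movers:
        if not vacant:
            break
        vr, vc = vacant.pop(random.randrange(len(vacant)))
        grid[vr][vc], grid[r][c] = grid[r][c], None
        vacant.append((r, c))  # the vacated cell is now available to others
```

Even agents willing to live as a 30% local minority tend to end up, after repeated rounds of relocation, in starkly segregated clusters: an emergent outcome that could not be read off from the individual rule.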
Farmer and Foley hope to change this. They begin their piece with a critical look at contemporary modeling practices:
In today's high-tech age, one naturally assumes that US President Barack Obama's economic team and its international counterparts are using sophisticated quantitative computer models to guide us out of the current economic crisis. They are not.
The best models they have are of two types, both with fatal flaws. Type one is econometric: empirical statistical models that are fitted to past data. These successfully forecast a few quarters ahead as long as things stay more or less the same, but fail in the face of great change. Type two goes by the name of 'dynamic stochastic general equilibrium'. These models... by their very nature rule out crises of the type we are experiencing now.
As a result, economic policy-makers are basing their decisions on common sense, and on anecdotal analogies to previous crises such as Japan's 'lost decade' or the Great Depression... The leaders of the world are flying the economy by the seat of their pants.
This is hard for most non-economists to believe. Aren't people on Wall Street using fancy mathematical models? Yes, but for a completely different purpose: modelling the potential profit and risk of individual trades. There is no attempt to assemble the pieces and understand the behaviour of the whole economic system.
The authors suggest a shift in orientation:
There is a better way: agent-based models. An agent-based model is a computerized simulation of a number of decision-makers (agents) and institutions, which interact through prescribed rules. The agents can be as diverse as needed — from consumers to policy-makers and Wall Street professionals — and the institutional structure can include everything from banks to the government. Such models do not rely on the assumption that the economy will move towards a predetermined equilibrium state, as other models do. Instead, at any given time, each agent acts according to its current situation, the state of the world around it and the rules governing its behaviour. An individual consumer, for example, might decide whether to save or spend based on the rate of inflation, his or her current optimism about the future, and behavioural rules deduced from psychology experiments. The computer keeps track of the many agent interactions, to see what happens over time. Agent-based simulations can handle a far wider range of nonlinear behaviour than conventional equilibrium models. Policy-makers can thus simulate an artificial economy under different policy scenarios and quantitatively explore their consequences.
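As a concrete, deliberately toy illustration of this simulation loop, here is a sketch in Python of the consumer example above. The behavioral rule, the parameter values, and the inflation feedback are placeholders of my own invention, not rules deduced from psychology experiments:

```python
import random

class Consumer:
    """An agent who splits income between spending and saving
    according to a simple rule of thumb (invented for illustration)."""
    def __init__(self):
        self.wealth = 100.0
        self.optimism = random.uniform(0.0, 1.0)  # private sentiment

    def act(self, inflation):
        # Spend a larger share of wealth when optimistic, or when
        # inflation makes holding cash unattractive; save the rest.
        share = min(1.0, 0.05 + 0.10 * self.optimism + 2.0 * inflation)
        spending = share * self.wealth
        self.wealth += 10.0 - spending  # fixed income minus spending
        return spending

def step(consumers, inflation):
    """One tick of the simulation: every agent acts on the current state
    of the world, and the aggregate outcome feeds back into that state."""
    demand = sum(c.act(inflation) for c in consumers)
    # Crude ad hoc feedback: excess aggregate demand pushes inflation up.
    return max(0.0, 0.9 * inflation + 0.000001 * (demand - 10000.0))

consumers = [Consumer() for _ in range(1000)]
inflation = 0.02
for t in range(100):
    inflation = step(consumers, inflation)
    print(t, round(inflation, 4))
```

The point is not the particular numbers but the architecture: the computer tracks every interaction, and aggregate variables such as demand and inflation emerge from individual decisions rather than being imposed from above.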
Such methods are unfamiliar (or unappealing) to most theorists in the leading research departments, and work using them is rarely published in the top professional journals. Farmer and Foley attribute this in part to the failure of a particular set of macroeconomic policies, and the resulting ascendancy of the rational expectations hypothesis:
Why is this type of modelling not well-developed in economics? Because of historical choices made to address the complexity of the economy and the importance of human reasoning and adaptability.
The notion that financial economies are complex systems can be traced at least as far back as Adam Smith in the late 1700s. More recently, John Maynard Keynes and his followers attempted to describe and quantify this complexity based on historical patterns. Keynesian economics enjoyed a heyday in the decades after the Second World War, but was forced out of the mainstream after failing a crucial test during the mid-seventies. Keynesian models suggested that inflation could pull society out of a recession: because rising prices had historically stimulated supply, producers would respond to the rising prices seen under inflation by increasing production and hiring more workers. But when US policy-makers increased the money supply in an attempt to stimulate employment, it didn't work — they ended up with both high inflation and high unemployment, a miserable state called 'stagflation'. Robert Lucas and others argued in 1976 that Keynesian models had failed because they neglected the power of human learning and adaptation. Firms and workers learned that inflation is just inflation, and is not the same as a real rise in prices relative to wages...
The cure for macroeconomic theory, however, may have been worse than the disease. During the last quarter of the twentieth century, 'rational expectations' emerged as the dominant paradigm in economics... Even if rational expectations are a reasonable model of human behaviour, the mathematical machinery is cumbersome and requires drastic simplifications to get tractable results. The equilibrium models that were developed, such as those used by the US Federal Reserve, by necessity stripped away most of the structure of a real economy. There are no banks or derivatives, much less sub-prime mortgages or credit default swaps — these introduce too much nonlinearity and complexity for equilibrium methods to handle...
Agent-based models potentially present a way to model the financial economy as a complex system, as Keynes attempted to do, while taking human adaptation and learning into account, as Lucas advocated. Such models allow for the creation of a kind of virtual universe, in which many players can act in complex — and realistic — ways. In some other areas of science, such as epidemiology or traffic control, agent-based models already help policy-making.
One problem that must be addressed if agent-based models are to gain widespread acceptance in economics is that of quality control. For methodologies that are currently in common use, there exist well-understood (though imperfect) standards for assessing the value of any given contribution. Empirical researchers are concerned with identification and external validity, for instance, and theorists with robustness. But how is one to judge the robustness of a set of simulation results?
The major challenge lies in specifying how the agents behave and, in particular, in choosing the rules they use to make decisions. In many cases this is still done by common sense and guesswork, which is only sometimes sufficient to mimic real behaviour. An attempt to model all the details of a realistic problem can rapidly lead to a complicated simulation where it is difficult to determine what causes what. To make agent-based modelling useful we must proceed systematically, avoiding arbitrary assumptions, carefully grounding and testing each piece of the model against reality and introducing additional complexity only when it is needed. Done right, the agent-based method can provide an unprecedented understanding of the emergent properties of interacting parts in complex circumstances where intuition fails.
This recognizes the problem of quality control, but does not offer much in the way of guidance for editors or referees in evaluating submissions. Presumably such standards will emerge over time, perhaps through the development of a few contributions that are commonly agreed to be outstanding and can serve as templates for future work.
There do exist a number of researchers using agent-based methodologies in economics, and Farmer and Foley specifically mention Blake LeBaron, Rob Axtell, Mauro Gallegati, Robert Clower and Peter Howitt. To this list I would add Joshua Epstein, Marco Janssen, Peter Albin, and especially Leigh Tesfatsion, whose ACE (agent-based computational economics) website provides a wonderful overview of what such methods are designed to achieve. (Tesfatsion also mentions not just Smith but also Hayek as a key figure in exploring the "self-organizing capabilities of decentralized market economies.")
A recent example of an agent-based model that deals specifically with the financial crisis may be found in a paper by Thurner, Farmer, and Geanakoplos. Farmer and Foley provide an overview:
Leverage, the investment of borrowed funds, is measured as the ratio of total assets owned to the wealth of the borrower; if a house is bought with a 20% down-payment, the leverage is five. There are four types of agents in this model: 'noise traders', who trade more or less at random, but are slightly biased toward driving prices towards a fundamental value; hedge funds, which hold a stock when it is under-priced and otherwise hold cash; investors who decide whether to invest in a hedge fund; and a bank that can lend money to the hedge funds, allowing them to buy more stock. Normally, the presence of the hedge funds damps volatility, pushing the stock price towards its fundamental value. But, to contain their risk, the banks cap leverage at a predetermined maximum value. If the price of the stock drops while a fund is fully leveraged, the fund's wealth plummets and its leverage increases; thus the fund has to sell stock to pay off part of its loan and keep within its leverage limit, selling into a falling market.
This agent-based model shows how the behaviour of the hedge funds amplifies price fluctuations, and in extreme cases causes crashes. The price statistics from this model look very much like reality. It shows that the standard ways banks attempt to reduce their own risk can create more risk for the whole system.
Previous models of leverage based on equilibrium theory showed qualitatively how leverage can lead to crashes, but they gave no quantitative information about how this affects the statistical properties of prices. The agent approach simulates complex and nonlinear behaviour that is so far intractable in equilibrium models. It could be made more realistic by adding more detailed information about the behaviour of real banks and funds, and this could shed light on many important questions. For example, does spreading risk across many financial institutions stabilize the financial system, or does it increase financial fragility? Better data on lending between banks and hedge funds would make it possible to model this accurately. What if the banks themselves borrow money and use leverage too, a process that played a key role in the current crisis? The model could be used to see how these banks might behave in an alternative regulatory environment.
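The core amplification mechanism in the quoted passage is easy to exhibit in code. Here is a minimal sketch of a single fund facing a leverage cap; the price-impact rule and all the numbers are placeholder assumptions of mine, not the calibration in Thurner, Farmer, and Geanakoplos:

```python
class HedgeFund:
    """A leveraged fund subject to a bank-imposed leverage cap.
    Leverage = total assets / wealth: a 20% down-payment on a house
    implies leverage 1 / 0.20 = 5."""
    def __init__(self, cash, shares, max_leverage):
        self.cash = cash            # negative cash is a bank loan
        self.shares = shares
        self.max_leverage = max_leverage

    def wealth(self, price):
        return self.cash + self.shares * price

    def enforce_cap(self, price):
        """If a price drop pushes leverage above the cap, sell just enough
        stock to repay part of the loan: selling into a falling market."""
        w = self.wealth(price)
        if w <= 0:                  # wiped out: liquidate everything
            sale = self.shares
        else:
            excess = self.shares * price - self.max_leverage * w
            sale = max(0.0, excess / price)
        self.shares -= sale
        self.cash += sale * price
        return sale                 # shares dumped on the market

# Toy demonstration of the feedback loop: a shock forces sales, sales
# depress the price, and the lower price forces further sales until the
# fund is back within its leverage limit.
fund = HedgeFund(cash=-400.0, shares=500.0, max_leverage=5.0)  # wealth 100 at price 1
price = 0.95                        # exogenous 5% shock to a fully leveraged fund
for _ in range(100):
    sold = fund.enforce_cap(price)
    if sold < 1e-9:
        break
    price *= 1 - 0.0001 * sold      # placeholder linear price-impact assumption
print(round(price, 4), round(fund.wealth(price), 2))
```

Even in this stripped-down form, the risk-management rule that protects the bank (the leverage cap) is exactly what converts a small shock into a cascade of forced sales.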
I have discussed Geanakoplos' more methodologically orthodox papers on leverage cycles in an earlier post. That work uses standard methods in general equilibrium theory to address related questions, suggesting that the two approaches are potentially quite complementary. In fact, the nature of agent-based modeling is such that it is best conducted in interdisciplinary teams, and is therefore unlikely to ever become the dominant methodology in use:
Creating a carefully crafted agent-based model of the whole economy is, like climate modelling, a huge undertaking. It requires close feedback between simulation, testing, data collection and the development of theory. This demands serious computing power and multi-disciplinary collaboration among economists, computer scientists, psychologists, biologists and physical scientists with experience in large-scale modelling. A few million dollars — much less than 0.001% of the US financial stimulus package against the recession — would allow a serious start on such an effort.
Given the enormity of the stakes, such an approach is well worth trying.
I agree. This kind of effort is currently being undertaken at the Santa Fe Institute, where Farmer and Foley are both on the faculty. And for graduate students interested in exploring these ideas and methods, John Miller and Scott Page hold a Workshop on Computational Economic Modeling in Santa Fe each summer (the program announcement for the 2010 workshop is here). Their book on Complex Adaptive Systems provides a nice introduction to the subject, as does Epstein and Axtell's Growing Artificial Societies. But it is Thomas Schelling's Micromotives and Macrobehavior, first published in 1978, that in my view reveals most clearly the logic and potential of the agent-based approach.
---
Update (2/7). Cyril Hedoin at Rationalité Limitée points to a paper by Axtell, Axelrod, Epstein and Cohen that explicitly discusses the important issues of replication and comparative model evaluation for agent-based simulations. (He also mentions a nineteenth-century debate on research methodology between Carl Menger and Gustav von Schmoller that seems relevant; I'd like to take a closer look at this if I can ever find the time.)
Also, in a pair of comments on this post, Barkley Rosser recommends a 2008 book on Emergent Macroeconomics by Delli Gatti, Gaffeo, Gallegati, Giulioni, and Palestrini, and provides an extensive review of agent-based computational models in regional science and urban economics.