Saturday, September 06, 2014

The CORE Project

Back in October 2012 I got an unexpected email from Wendy Carlin, Professor of Economics at University College London, asking me to join her and a few others for a meeting in Cambridge, Massachusetts, to be held the following January. The goal was to "consider how we could better teach economics to undergraduates." Wendy motivated the project as follows:
People say that altering the course of the undergraduate curriculum is like turning around a half a million ton super tanker. But I think that the time may be right for the initiative I am inviting you to join me in proposing. 
First the economy has performed woefully over the past few decades in most of the advanced economies with increasing inequality, stagnant or declining living standards for many, and increased instability. Second, in the public eye economics as a profession has performed little better; with many of our colleagues offering superficial and even incorrect diagnoses and remedies. Third, what economists now know is increasingly remote from what is taught to our undergraduates. We can teach much more exciting and relevant material than the current diet... Fourth, the economy itself is grippingly interesting to students. 
I think that a curriculum that places before students the best of the current state of economic knowledge addressed to the pressing problems of concern... could succeed.
I attended the meeting, and joined a group that would swell to incorporate more than two dozen economists spread across four continents. Over the next eighteen months, with funding from the Institute for New Economic Thinking, we began to assemble a set of teaching materials under the banner of Curriculum Open-Access Resources in Economics (CORE).

A beta version of the resulting e-book, simply called The Economy, is now available free of charge worldwide to anyone interested in using it. Only the first ten units have been posted at this time; the remainder are still in preparation. The published units cover the industrial revolution, innovation, firms, contracts, labor and product markets, competition, disequilibrium dynamics, and externalities. For the most part these are topics in microeconomics, though with a great deal of attention to history, institutions, and experiments. The latter half of the book, dealing with money, banking, and aggregate activity, is nearing completion and is targeted for release in November.

It is our hope that these materials make their way in some form into every introductory classroom and beyond. Instructors could use them to supplement (and perhaps eventually replace) existing texts, and students could use them to dig deeper and obtain fresh and interesting perspectives on topics they encounter (or ought to encounter) in class. And anyone interested in an introduction to economics, regardless of age, occupation or location, can work through these units at their own pace.

The unit with which I had the greatest involvement is the ninth, on Market Dynamics. Here we examine the variety of ways in which markets respond to changes in the conditions of demand and supply. The focus is on adjustment processes rather than simply a comparison of equilibria. For instance, we look at the process of trading in securities markets, introducing the continuous double auction, bid and ask prices, and limit orders. We examine the manner in which new information is incorporated into prices through order book dynamics. The contrasting perspectives of Fama and Shiller on the possibility of asset price bubbles are introduced, with a discussion of the risks involved in market timing and short selling.  Markets with highly flexible prices are then contrasted with posted price markets (such as the iTunes store) where changes in demand are met with pure quantity adjustments in the short run. We look at rationing, queueing and secondary markets in some detail, with reference to the deliberate setting of prices below market-clearing levels, as in the case of certain concerts, sporting events, and elite restaurants.
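The continuous double auction at the heart of this unit is easy to sketch in code. The following toy limit order book is my own illustration, not material from the unit: it matches incoming orders on price priority alone, ignoring time priority, cancellations, and market orders.

```python
import heapq

class OrderBook:
    """Toy continuous double auction: price priority only, no time priority."""

    def __init__(self):
        self.bids = []      # max-heap of (-price, qty)
        self.asks = []      # min-heap of (price, qty)
        self.trades = []    # executed (price, qty) pairs

    def best_bid(self):
        return -self.bids[0][0] if self.bids else None

    def best_ask(self):
        return self.asks[0][0] if self.asks else None

    def limit_order(self, side, price, qty):
        if side == "buy":
            # cross against asks priced at or below the limit
            while qty > 0 and self.asks and self.asks[0][0] <= price:
                ask_price, ask_qty = heapq.heappop(self.asks)
                traded = min(qty, ask_qty)
                self.trades.append((ask_price, traded))
                qty -= traded
                if ask_qty > traded:
                    heapq.heappush(self.asks, (ask_price, ask_qty - traded))
            if qty > 0:
                heapq.heappush(self.bids, (-price, qty))
        else:
            # cross against bids priced at or above the limit
            while qty > 0 and self.bids and -self.bids[0][0] >= price:
                neg_bid, bid_qty = heapq.heappop(self.bids)
                traded = min(qty, bid_qty)
                self.trades.append((-neg_bid, traded))
                qty -= traded
                if bid_qty > traded:
                    heapq.heappush(self.bids, (neg_bid, bid_qty - traded))
            if qty > 0:
                heapq.heappush(self.asks, (price, qty))

book = OrderBook()
book.limit_order("sell", 100.10, 50)   # posts an ask
book.limit_order("sell", 100.20, 50)   # posts a second ask
book.limit_order("buy", 100.00, 30)    # rests as the best bid
book.limit_order("buy", 100.15, 80)    # lifts the 100.10 ask; remainder rests
```

After the final order, the book records one trade of 50 shares at 100.10, and the unfilled remainder of 30 shares becomes the new best bid at 100.15 against a best ask of 100.20. This is the sense in which new information is incorporated into prices through order book dynamics: each arriving order either trades or reshapes the quotes.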

I mention this unit not just because I had a hand in developing it, but to make the point that there are topics covered in these materials that would not ordinarily be found in an introductory text. Other units draw heavily on the work of economic historians, and pay more than fleeting attention to the history of ideas. The financial sector makes a frequent appearance in the posted units, and will do so to an even greater extent in the units under development.

But far more important than the content innovations in the book are the process innovations. The material was developed collaboratively by a large team, and made coherent through a careful editing process. It is released under a creative commons license, so that any user can customize, translate, or improve it for their own use or the use of their students. Most importantly, we see this initial product not as a stand-alone text, but rather as the foundation on which an entire curriculum can be built. We can imagine the development of units that branch off into various fields (for use in topics courses), as well as the incorporation of more advanced material eventually making its way into graduate education.

So if you're teaching an introductory economics course, or enrolled in one, or simply interested in the material, register here for complete access without charge. We will eventually set up instructor diaries to consolidate feedback, and welcome suggestions for improvement. This is just the start of a long but hopefully significant and transformative process of creative destruction.

Tuesday, August 19, 2014

The Agent-Based Method

It's nice to see some attention being paid to agent-based computational models on economics blogs, but Chris House has managed to misrepresent the methodology so completely that his post is likely to do more harm than good. 

In comparing the agent-based method to the more standard dynamic stochastic general equilibrium (DSGE) approach, House begins as follows:
Probably the most important distinguishing feature is that, in an ABM, the interactions are governed by rules of behavior that the modeler simply encodes directly into the system individuals who populate the environment.
So far so good, although I would not have used the qualifier "simply", since encoded rules can be highly complex. For instance, an ABM that seeks to describe the trading process in an asset market may have multiple participant types (liquidity, information, and high-frequency traders for instance) and some of these may be using extremely sophisticated strategies.

How does this approach compare with DSGE models? House argues that the key difference lies in assumptions about rationality and self-interest:
People who write down DSGE models don’t do that. Instead, they make assumptions on what people want. They also place assumptions on the constraints people face. Based on the combination of goals and constraints, the behavior is derived. The reason that economists set up their theories this way – by making assumptions about goals and then drawing conclusions about behavior – is that they are following in the central tradition of all of economics, namely that allocations and decisions and choices are guided by self-interest. This goes all the way back to Adam Smith and it’s the organizing philosophy of all economics. Decisions and actions in such an environment are all made with an eye towards achieving some goal or some objective. For consumers this is typically utility maximization – a purely subjective assessment of well-being.  For firms, the objective is typically profit maximization. This is exactly where rationality enters into economics. Rationality means that the “agents” that inhabit an economic system make choices based on their own preferences.
This, to say the least, is grossly misleading. The rules encoded in an ABM could easily specify what individuals want and then proceed from there. For instance, we could start from the premise that our high-frequency traders want to maximize profits. They can only do this by submitting orders of various types, the consequences of which will depend on the orders placed by others. Each agent can have a highly sophisticated strategy that maps historical data, including the current order book, into new orders. The strategy can be sensitive to beliefs about the stream of income that will be derived from ownership of the asset over a given horizon, and may also be sensitive to beliefs about the strategies in use by others. Agents can be as sophisticated and forward-looking in their pursuit of self-interest in an ABM as you care to make them; they can even be set up to make choices based on solutions to dynamic programming problems, provided that these are based on private beliefs about the future that change endogenously over time. 

What you cannot have in an ABM is the assumption that, from the outset, individual plans are mutually consistent. That is, you cannot simply assume that the economy is tracing out an equilibrium path. The agent-based approach is at heart a model of disequilibrium dynamics, in which the mutual consistency of plans, if it arises at all, has to do so endogenously through a clearly specified adjustment process. This is the key difference between the ABM and DSGE approaches, and it's right there in the acronym of the latter.
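A minimal sketch of what such an adjustment process looks like: a posted price is nudged in response to realized excess demand, and the mutual consistency of plans, if it emerges at all, does so endogenously in the limit. The demand and supply schedules and the adjustment speed are illustrative assumptions, not a model of any particular market.

```python
def excess_demand(p):
    demand = max(0.0, 100.0 - 2.0 * p)   # illustrative demand schedule
    supply = max(0.0, 3.0 * p - 20.0)    # illustrative supply schedule
    return demand - supply

p = 5.0                            # start far from equilibrium
for _ in range(200):
    p += 0.05 * excess_demand(p)   # raise price when buyers are rationed

# plans become mutually consistent only through the process itself:
# p converges to the market-clearing price 24, where 100 - 2p = 3p - 20
```

Nothing in the dynamics assumes the economy is on an equilibrium path; the clearing price is an outcome of the adjustment rule, not an input to it.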

A typical (though not universal) feature of agent-based models is an evolutionary process that allows successful strategies to proliferate over time at the expense of less successful ones. Since success itself is frequency-dependent---the payoffs to a strategy depend on the prevailing distribution of strategies in the population---we have strong feedback between behavior and environment. Returning to the example of trading, an arbitrage-based strategy may be highly profitable when rare but much less so when prevalent. This rich feedback between environment and behavior, with the distribution of strategies determining the environment faced by each, and the payoffs to each strategy determining changes in their composition, is a fundamental feature of agent-based models. In failing to understand this, House makes claims that are close to being the opposite of the truth: 
Ironically, eliminating rational behavior also eliminates an important source of feedback – namely the feedback from the environment to behavior.  This type of two-way feedback is prevalent in economics and it’s why equilibria of economic models are often the solutions to fixed-point mappings. Agents make choices based on the features of the economy.  The features of the economy in turn depend on the choices of the agents. This gives us a circularity which needs to be resolved in standard models. This circularity is cut in the ABMs however since the choice functions do not depend on the environment. This is somewhat ironic since many of the critics of economics stress such feedback loops as important mechanisms.
It is absolutely true that dynamics in agent-based models do not require the computation of fixed points, but this is a strength rather than a weakness, and has nothing to do with the absence of feedback effects. These effects arise dynamically in calendar time, not through some mystical process by which coordination is instantaneously achieved and continuously maintained. 
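The frequency-dependent feedback described above can be illustrated with a discrete replicator dynamic, a workhorse of evolutionary models. The payoff numbers below are invented for illustration: the "arbitrage" strategy earns a premium when rare that erodes as it spreads, and the population composition settles, in calendar time, at the point where payoffs equalize.

```python
def payoffs(x):
    # invented payoffs: the arbitrage strategy's edge erodes as it spreads
    pi_arb = 2.0 - 3.0 * x    # payoff falls with the share of arbitrageurs
    pi_rest = 1.0             # payoff to everyone else
    return pi_arb, pi_rest

x = 0.05                      # initial share of arbitrageurs
for _ in range(500):
    pi_arb, pi_rest = payoffs(x)
    mean = x * pi_arb + (1.0 - x) * pi_rest
    x = x * pi_arb / mean     # discrete replicator step: above-average
                              # payoffs raise a strategy's share

# rest point where payoffs equalize: 2 - 3x = 1, i.e. x = 1/3
```

The environment (here, the mean payoff) depends on the distribution of strategies, and the distribution responds to the environment. No fixed point is computed anywhere; the system simply moves toward one.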

It's worth thinking about how the learning literature in macroeconomics, dating back to Marcet and Sargent and substantially advanced by Evans and Honkapohja, fits into this schema. Such learning models drop the assumption that beliefs continuously satisfy mutual consistency, and therefore take a small step towards the ABM approach. But it really is a small step, since a great deal of coordination continues to be assumed. For instance, in the canonical learning model, there is a parameter about which learning occurs, and the system is self-referential in that beliefs about the parameter determine its realized value. This allows for the possibility that individuals may hold incorrect beliefs, but limits quite severely---and more importantly, exogenously---the structure of such errors. This is done for understandable reasons of tractability, and allows for analytical solutions and convergence results to be obtained. But far too much coordination in beliefs across individuals is assumed for this to be considered part of the ABM family.
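A stripped-down version of such a self-referential learning model, with illustrative parameter values of my own: the realized value of the parameter depends on agents' belief about it, and a decreasing-gain update drives beliefs to the rational-expectations fixed point when the feedback is not too strong.

```python
a, b = 2.0, 0.5          # realized value: a + b * belief (b < 1 for stability)
belief = 0.0
for t in range(1, 20001):
    realized = a + b * belief            # the world depends on beliefs...
    belief += (realized - belief) / t    # ...and beliefs track the world
                                         # with a decreasing gain 1/t

# rational-expectations fixed point: belief = a + b * belief, so belief = 4
```

Note how much coordination is built in even here: every agent holds the same belief about the same scalar parameter and updates it the same way. That is precisely the exogenous restriction on the structure of errors discussed above.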

The title of House's post asks (in response to an earlier piece by Mark Buchanan) whether agent-based models really are the future of the discipline. I have argued previously that they are enormously promising, but face one major methodological obstacle that needs to be overcome. This is the problem of quality control: unlike papers in empirical fields (where causal identification is paramount) or in theory (where robustness is key) there is no set of criteria, widely agreed upon, that can allow a referee to determine whether a given set of simulation results provides a deep and generalizable insight into the workings of the economy. One of the most celebrated agent-based models in economics---the Schelling segregation model---is also among the very earliest. Effective and acclaimed recent exemplars are in short supply, though there is certainly research effort at the highest levels pointed in this direction. The claim that such models can displace the equilibrium approach entirely is much too grandiose, but they should be able to find ample space alongside more orthodox approaches in time. 
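The Schelling model itself is easy to sketch, which is part of its enduring appeal. Here is a one-dimensional toy version of my own (invented parameters; Schelling's own examples were richer): discontented agents of opposite types trade places, and mild individual preferences for similar neighbors produce pronounced aggregate segregation.

```python
import random

random.seed(0)

N, K = 100, 2    # agents on a ring, K neighbors on each side
city = [random.choice("AB") for _ in range(N)]

def similar_fraction(i):
    nbrs = [city[(i + d) % N] for d in range(-K, K + 1) if d != 0]
    return nbrs.count(city[i]) / len(nbrs)

def avg_similarity():
    return sum(similar_fraction(i) for i in range(N)) / N

before = avg_similarity()
for _ in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    # two discontented agents of opposite types trade places; each moves
    # into a neighborhood where its own type is the local majority
    if (city[i] != city[j]
            and similar_fraction(i) < 0.5
            and similar_fraction(j) < 0.5):
        city[i], city[j] = city[j], city[i]

after = avg_similarity()
# a mild threshold (only a minority of like neighbors demanded) yields
# average neighborhood similarity well above the initial coin-flip level
```

No agent in this model wants segregation; each merely wants to avoid being in a small local minority. The aggregate outcome is an emergent property of the interaction, which is why the model remains the canonical agent-based exemplar.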


The example of interacting trading strategies in this post wasn't pulled out of thin air; market ecology has been a recurrent theme on this blog. In ongoing work with Yeon-Koo Che and Jinwoo Kim, I am exploring the interaction of trading strategies in asset markets, with the goal of addressing some questions about the impact on volatility and welfare of high-frequency trading. We have found the agent-based approach very useful in thinking about these questions, and I'll present some preliminary results at a session on the methodology at the Rethinking Economics conference in New York next month. The event is free and open to the public but seating is limited and registration required.


Update: Chris House responds, leaping from the assertion that agent-based models disregard rationality and self-interest to the diametrically opposed claim that DSGEs are a special case of agent-based models. Noah Smith concurs, but seems to misunderstand not just the agent-based method but also the rational expectations hypothesis. Leigh Tesfatsion's two comments on Chris' posts are spot on, and it's worth spending a bit of time on her agent-based computational economics page. There you will find the following definition (italics added):
Agent-based computational economics (ACE) is the computational modeling of economic processes (including whole economies) as open-ended dynamic systems of interacting agents... ACE modeling is analogous to a culture-dish laboratory experiment for a virtual world. Starting from an initial world state, specified by the modeler, the virtual world should be capable of evolving over time driven solely by the interactions of the agents that reside within the world. No resort to externally imposed sky-hooks enforcing global coordination, such as market clearing and rational expectations constraints, should be needed to drive or support the dynamics of this world.
As I said in a response to Noah, the claim that DSGEs are a special case of agent-based models is not just wrong, it makes the case for pluralism harder to advance. But the good news is that there seems to be a lot of interest in the approach among graduate students. I introduced the basic idea at the end of a graduate math methods course for first year PhD students at Columbia a couple of years ago, and it was really nice to see a few of them show up to the agent-based modeling session at the recent Rethinking Economics conference. I suspect that before long, knowledge of this (along with more orthodox methods) will be an asset in the job market. 

Tuesday, June 03, 2014

Plots and Subplots in Piketty's Capital

Thomas Piketty's Capital in the Twenty-First Century is a hefty 700 pages long, but if one were to collect all reviews of the book into a single volume it would doubtless be heftier still. These range from glowing to skeptical to largely dismissive; from cursory to deeply informed. Questions have been asked (and answered) about the book’s empirical claims, and some serious theoretical challenges are now on the table.

Most reviewers have focused on Piketty’s dramatic predictions of rising global wealth inequality, which he attributes to the logic of capital accumulation under conditions of low fertility and productivity growth. I have little to say about this main message, which I consider plausible but highly speculative (more on this below). Instead, I will focus here on the book’s many interesting digressions and subplots.

When an economist as talented as Piketty immerses himself in a sea of data from a broad range of sources for over a decade, a number of valuable insights are bound to emerge. Some are central to his main argument while others are tangential; either way, they are deserving of comment and scrutiny.

Let me begin with the issue of measurement, which Piketty discusses explicitly. He argues that "statistical indices such as the Gini coefficient give an abstract and sterile view of inequality, which makes it difficult for people to grasp their position in the contemporary hierarchy." Instead, his preference is for distribution tables, which display the share of total income or wealth that is held by members of various classes. In many cases he considers just three groups: those below the median, those above the median but outside the top decile, and those in the top decile. Sometimes he partitions the top group into those within the top centile and those outside it; and occasionally looks at even finer partitions of those at the summit.
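The contrast is easy to make concrete. The sketch below computes both summaries for a stylized, invented distribution (chosen to echo the pre-war European pattern Piketty documents, not data from the book): a single Gini number on one hand, class shares of the kind Piketty prefers on the other.

```python
def gini(wealth):
    # standard formula based on the rank-weighted sum of sorted values
    w = sorted(wealth)
    n = len(w)
    weighted = sum((i + 1) * x for i, x in enumerate(w))
    return 2 * weighted / (n * sum(w)) - (n + 1) / n

def class_shares(wealth):
    # Piketty-style distribution table: bottom half, middle 40%, top decile
    w = sorted(wealth)
    n, total = len(w), sum(w)
    bottom50 = sum(w[: n // 2]) / total
    top10 = sum(w[9 * n // 10 :]) / total
    return bottom50, 1 - bottom50 - top10, top10

# stylized distribution: bottom half and middle 40% each own 5% of
# aggregate wealth, the top decile owns the remaining 90%
wealth = [0.1] * 50 + [0.125] * 40 + [9.0] * 10
g = gini(wealth)
b50, m40, t10 = class_shares(wealth)
```

For this sample the Gini coefficient is 0.805, a number with no immediate social interpretation, while the table (5%, 5%, 90%) tells a reader at once where they stand in the hierarchy. That is precisely Piketty's complaint about abstract indices.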

Using this approach, Piketty is able to document one of the most significant social transformations of the twentieth century: the emergence of a European middle class with significant property holdings. Prior to the First World War, there was scarcely any difference between the per-capita wealth of those below the median and the forty percent of the population directly above them; each of these groups owned about 5% of aggregate wealth. The remaining 90% was in the hands of the top decile, with 50% held by the top centile. This was to change dramatically: the share of wealth held by the middle group has risen seven-fold and now stands at 35%, while the share of wealth held by those below the median remains negligible.

Piketty argues that this "emergence of a patrimonial middle class was an important, if fragile, historical innovation, and it would be a serious mistake to underestimate it." In particular, one could make a case that the continued stability of the system depends on the consolidation of this transition. If there is a reversal, as Piketty suspects there could well be, it would have major social and political ramifications. I'll return to this point below.

In order to facilitate comparisons across time and space, Piketty measures the value of capital in units of years of national income. This is an interesting choice that yields certain immediate dividends. Consider the following chart, which displays the value of national capital for eight countries over four decades:

The dramatic increase during the 1980s in the value of Japanese capital, encompassing both real estate and stocks, is evident. So is the long, slow decline in the ratio of capital to national income after the peak in 1990. The Japanese series begins and ends close to the cluster of other countries, but takes a three-decade-long detour in the interim. Piketty argues that the use of such measures can be helpful for policy:
...the Japanese record of 1990 was recently beaten by Spain, where the total amount of net private capital reached eight years of national income on the eve of the crisis of 2007-2008... The Spanish bubble began to shrink quite rapidly in 2010-2011, just as the Japanese bubble did in the early 1990s... note how useful it is to represent the historical evolution of the capital/income ratio in this way, and thus to exploit stocks and flows in the national accounts. Doing so might make it possible to detect obvious overvaluations in time to apply prudential policies... 
Over short periods of time the value of aggregate capital can fluctuate sharply; over longer periods it is determined largely by the flow of savings. One of the most provocative claims in the book concerns the motives for private saving. The standard textbook theory, familiar to students of economics at all levels, is based on the life-cycle hypothesis formulated by Franco Modigliani. From this perspective, saving is motivated by the desire to smooth consumption over the course of a lifetime: one borrows when young, pays off this debt and accumulates assets during peak earning years, and spends down the accumulated savings during retirement. Geometrically, savings behavior is depicted as a "Modigliani triangle" with rising asset accumulation when working, a peak at retirement, and depletion of assets thereafter.
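The Modigliani triangle is easy to reproduce in a toy life-cycle simulation (illustrative parameters of my own, with a zero interest rate for simplicity): assets rise steadily during working years, peak at retirement, and are drawn down to zero.

```python
work, retire = 40, 20    # years spent working and in retirement
wage = 60.0              # illustrative annual earnings while working

# smooth consumption over the whole lifetime (zero interest rate):
# consume total lifetime earnings divided by total years, i.e. 40 per year
consumption = wage * work / (work + retire)

assets, path = 0.0, []
for year in range(work + retire):
    income = wage if year < work else 0.0
    assets += income - consumption    # save 20/yr working, dissave 40/yr retired
    path.append(assets)

peak = max(path)
# the "triangle": assets peak at retirement (here at 800) and are
# fully depleted by the end of life
```

Piketty's startling claim is that observed wealth profiles do not look like this: assets rise monotonically with age, which is what motivates his rejection of the life-cycle theory as an explanation of the facts.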

There is no doubt that saving for retirement is a key feature of contemporary financial planning, and individuals with the means to do so accumulate significant asset positions over their working lives. But one of Piketty's most startling claims is that there is little evidence for Modigliani triangles in the data. Instead, asset accumulation appears to rise monotonically over the life-cycle. That is, the capital income from accumulated assets is sufficient to finance retirement consumption without appreciable depletion of the asset base.

Now this could be explained by a desire to leave substantial bequests to one's children, except that the pattern seems to arise also for those without natural heirs. Piketty concludes that "Modigliani's life-cycle theory... which is perfectly plausible a priori, cannot explain the observed facts---to put it mildly." This is a challenging claim. If it stands up to scrutiny, it will require a significant change in the manner in which individual savings behavior is conceived in economic models.

Also interesting is the aggregate savings behavior of societies. Countries that run large and persistent trade surpluses (thus producing more than they consume) end up accumulating assets overseas. Other countries are in the opposite position; a portion of their capital is foreign-owned, and part of their current output accordingly flows to foreign residents in the form of capital income. Not surprisingly, countries with positive inflows of capital income from abroad tend to be more affluent in the first place; Japan and Germany are prime examples. As a result, "the global income distribution is more unequal than the output distribution."

While such imbalances can be large when comparing countries, Piketty observes that at the level of most continent blocs, the imbalance is negligible: “total income is almost exactly equal to total output” within Europe, Asia, and the Americas. That is, the rich and poor countries within these continents have roughly offsetting net asset positions relative to the rest of the world.

The one exception is Africa, where nearly twenty percent of total capital (and a much greater portion of manufacturing capital) is foreign-owned. As a result, income is less than output on the continent as a whole, with the difference accruing to foreign residents in the form of capital income. Put differently, investment on the continent has been financed in large part through savings elsewhere, not from flows from surplus to deficit countries within Africa.

Is this a cause for concern? Piketty notes that in theory, "the fact that rich countries own part of the capital of poor countries can have virtuous effects by promoting convergence." However, successful late industrializing nations such as Japan, South Korea, Taiwan, and China managed to mobilize domestic savings to a significant degree to finance investment in physical and human capital. They "benefitted far more from open markets for goods and services and advantageous terms of trade than from free capital flows... gains from free trade come mainly from the diffusion of knowledge and from the productivity gains made necessary by open borders."

Unless African nations can transition to something approaching self-sufficiency in savings, a significant share of the continent’s assets will remain foreign-owned. Piketty sees dangers in this:
When a country is largely owned by foreigners, there is a recurrent and almost irrepressible demand for expropriation... The country is thus caught in an endless alternation between revolutionary governments (whose success in improving actual living conditions for their citizens is often limited) and governments dedicated to the protection of existing property owners, thereby laying the groundwork for the next revolution or coup. Inequality of capital ownership is already difficult to accept and peacefully maintain within a single national community. Internationally it is almost impossible to sustain without a colonial type of political domination.
Indeed, the colonial period was characterized by very large and positive net asset positions in Europe. On the eve of the First World War, the European powers "owned an estimated one-third to one-half of the domestic capital of Asia and Africa and more than three-quarters of their industrial capital." But these massive positions vanished in the wake of two World Wars and the Great Depression. These calamities, according to Piketty, resulted in a significant loss of asset values, dramatic shifts in attitudes towards taxation, and a reversal of trends in the evolution of global wealth inequality, trends that have now begun to reassert themselves.

There are many more interesting tangents and detours in the book, including discussions of the circumstances under which David Ricardo first developed the hypothesis that has come to be called Ricardian Equivalence, and the manner in which the "long and tumultuous history of the public debt... has indelibly marked collective memories and representations." But this post is too long already and I need to wrap it up.

For a lengthy book so filled with charts and tables, Capital in the Twenty-First Century is surprisingly readable. This is in no small part because the author cites philosophers and novelists freely and at length. This lightens the prose, and is also a very effective rhetorical device. As Piketty notes, authors such as Jane Austen and Honoré de Balzac "depicted the effects of inequality with a verisimilitude and evocative power that no statistical analysis can match."

This is an important point. Numerical tables simply cannot capture the deep-seated sense of social standing and expectations of deference that permeate a hierarchical society. Those familiar with the culture of the Indian subcontinent will understand this well. There are many oppressive distinctions that remain salient in modern society, but we at least pay lip service to the creed that we are all created equal and endowed with certain inalienable rights. The sustainability of significant wealth inequality in the face of this creed depends on the effectiveness of what Piketty calls "the apparatus of justification." But no matter how effective this apparatus, there is a limit to the extent of wealth inequality that is consistent with the survival of this creed.

This is how I interpret Piketty's main message: if the historically significant emergence of a propertied middle class were to be reversed, social and political tremors would follow. But how are we to evaluate his claim that such a reversal is inevitable in the absence of a global tax on capital? His argument depends on interactions between demographic change, productivity growth, and the distribution of income, and without a well-articulated theory that features all these components in a unified manner, I have no way of evaluating it with much confidence.

Piketty's attitude towards theory in economics is dismissive; he claims that it involves “immoderate use of mathematical models, which are frequently no more than an excuse for occupying the terrain and masking the vacuity of the content.” This criticism is not entirely undeserved. It is nevertheless my hope that the book will stimulate theorists to think through the interactions between fertility, technology, and distribution in a serious way. Without this, I don't see how Piketty's predictions can be properly evaluated, or even fully understood. 

Sunday, April 06, 2014

Superfluous Financial Intermediation

I'm only about halfway through Flash Boys but have already come across a couple of striking examples of what might charitably be called superfluous financial intermediation.

This is the practice of inserting oneself between a buyer and a seller of an asset, when both parties have already communicated to the market a willingness to trade at a mutually acceptable price. If the intermediary were simply absent from the marketplace, a trade would occur between the parties virtually instantaneously at a single price that is acceptable to both. Instead, both parties trade against the intermediary, at different prices. The intermediary captures the spread at the expense of the parties who wish to transact, adds nothing to liquidity in the market for the asset, and doubles the notional volume of trade.

The first example may be summarized as follows. A hundred thousand shares in a company have been offered for sale at a specified price across multiple exchanges. A single buyer wishes to purchase the whole lot and is willing to pay the asked price. He places a single buy order to this effect. The order first reaches BATS, where it is partially filled for ten thousand shares; it is then routed to the other exchanges for completion. An intermediary, having seen the original buy order on arrival at BATS, places orders to buy the remaining ninety thousand shares on the other exchanges. This latter order travels faster and trades first, so the original buyer receives only partial fulfillment. The intermediary immediately posts offers to sell ninety thousand shares at a slightly higher price, which the original buyer is likely to accept. All this in a matter of milliseconds.
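The arithmetic of this example is worth making explicit. The one-cent markup below is an illustrative assumption (the book does not specify the increment); the structure of the calculation follows the example directly.

```python
ask = 100.00            # price at which 100,000 shares are offered
markup = 0.01           # intermediary's resale premium (illustrative)
total = 100_000
first_fill = 10_000     # filled at BATS before the intermediary reacts

# what the buyer would have paid had the order simply filled everywhere
buyer_cost_direct = total * ask

# what the buyer actually pays after the intermediary races ahead
buyer_cost_actual = first_fill * ask + (total - first_fill) * (ask + markup)

intermediary_profit = (total - first_fill) * markup
# the buyer pays $900 more, the intermediary captures exactly that amount,
# and the 90,000 intercepted shares trade twice, doubling notional volume
# on that portion of the order
```

The transfer nets to zero across the three parties, which is the sense in which the intermediation is superfluous: no one's willingness to trade has changed, only the division of the surplus.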

The intermediary here is serving no useful economic function. Volume is significantly higher than it otherwise would have been, but there has been no increase in market liquidity. Had there been no intermediary present, the buyer and sellers would have transacted without any discernible delay, at a price that would have been better for the buyer and no worse for the sellers. Furthermore, an order is allowed to trade ahead of one that made its first contact with the market at an earlier point in time.

The second example involves interactions between a dark pool and the public markets. Suppose that the highest bid price for a stock in the public exchanges is $100.00, and the lowest ask is $100.10. An individual submits a bid for a thousand shares at $100.05 to a dark pool, where it remains invisible and awaits a matching order. Shortly thereafter, a sell order for a thousand shares at $100.01 is placed at a public exchange. These orders are compatible and should trade against each other at a single price. Instead, both trade against an intermediary, which buys at the lower price, sells at the higher price, and captures the spread.
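The same accounting applies to the dark pool example, using the prices given above.

```python
hidden_bid = 100.05    # buyer's order, resting invisibly in the dark pool
public_ask = 100.01    # seller's order, arriving at a public exchange
shares = 1_000

# any single price in [100.01, 100.05] would have served both parties;
# instead the intermediary buys at 100.01, sells at 100.05, and keeps:
spread_captured = (hidden_bid - public_ask) * shares   # four cents a share
```

Forty dollars on a thousand shares is trivial in isolation; the point is that it recurs on every such crossing, at scale, with no offsetting liquidity provision.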

As in the first example, the intermediary is not providing any benefit to either transacting party, and is not adding liquidity to the market for the asset. Volume is doubled but no economic purpose is served. Transactions that were about to occur anyway are preempted by a fraction of a second, and a net transfer of resources from investors to intermediaries is the only lasting consequence.

Michael Lewis has focused on practices such as these because their social wastefulness and fundamental unfairness are so transparent. But it's important to recognize that most of the strategies implemented by high frequency trading firms may not be quite so easy to classify or condemn. For instance, how is one to evaluate trading based on short-term price forecasts derived from genuinely public information? I have tried to argue in earlier posts that the proliferation of such information extracting strategies can give rise to greater price volatility. Furthermore, an arms race among intermediaries willing to sink significant resources into securing the slightest of speed advantages must ultimately be paid for by investors. This is an immediate consequence of what I like to call Bogle's Law:
It is the iron law of the markets, the undefiable rules of arithmetic: Gross return in the market, less the costs of financial intermediation, equals the net return actually delivered to market participants.
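Bogle's Law is just an accounting identity, but it is worth making concrete. The numbers below are illustrative only:

```python
def net_return(gross_return, intermediation_cost):
    """Bogle's Law: net return = gross return minus intermediation
    costs, with both arguments expressed as fractions of assets."""
    return gross_return - intermediation_cost

# If the market returns 7% and intermediation (fees, spreads, and the
# resources sunk into the speed race) absorbs 2 percentage points,
# market participants in aggregate receive only 5%.
print(round(net_return(0.07, 0.02), 4))  # 0.05
```

The identity holds in aggregate regardless of how cleverly any individual trades: every dollar spent on faster connections must come out of someone's net return.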
I hope that the minor factual errors in Flash Boys won't detract from the book's main message, or derail the important and overdue debate that it has predictably stirred. By focusing on the most egregious practices Lewis has already picked the low-hanging fruit. What remains to be figured out is how typical such practices really are. Taking full account of the range of strategies used by high frequency traders, to what extent are our asset markets characterized by superfluous financial intermediation?


Update (4/11). It took me a while to get through it but I’ve now finished the book. It’s well worth reading. Although the public discussion of Flash Boys has been largely focused on high frequency trading, the two most damning claims in the book concern broker-dealers and the SEC.

Lewis provides evidence to suggest that some broker-dealers direct trades to their own dark pools at the expense of their customers. Brokers with less than a ten percent market share in equities trading mysteriously manage to execute more than half of their customers’ orders in their own dark pools rather than in the wider market. This is peculiar because for any given order, the likelihood that the best matching bid or offer is found in a broker’s internal dark pool should roughly match the broker’s market share in equities trading. Instead, a small portion of the order is traded at external venues in a manner that allows the information content of the order to leak out. This results in a price response on other exchanges, allowing the internal dark pool to then provide the best match.

There’s also an account of a meeting between Brad Katsuyama, the book’s main protagonist, and the SEC’s Division of Trading and Markets that is just jaw-dropping. Katsuyama had discovered the reason why his large orders were only partially filled even though there seemed to be enough offers available across all exchanges for complete fulfillment (the first example above). In order to prevent their orders from being front-run after their first contact with the market, Katsuyama and his team developed a simple but ingenious defense. They split each order into components that matched the offers available at the various exchanges, and then submitted the components at carefully calibrated intervals (separated by microseconds) so that they would arrive at their respective exchanges simultaneously. The program written to accomplish this was subsequently called Thor. Katsuyama met with the SEC to explain how Thor worked, and was astonished to find that some of the younger staffers thought that the program, designed to protect fundamental traders from being front-run, was unfair to the high-frequency outfits whose strategies were being rendered ineffective.
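The timing trick behind Thor is simple to state: stagger the component orders so that they all land at once. A schematic version follows; the latencies are hypothetical and the real Thor was proprietary, so this is only a sketch of the idea:

```python
def thor_schedule(latencies_us):
    """Given one-way latencies (in microseconds) to each exchange,
    return the send delay for each component order so that every
    component arrives at the moment the slowest venue would be
    reached if everything were sent at t = 0."""
    slowest = max(latencies_us.values())
    return {venue: slowest - lat for venue, lat in latencies_us.items()}

# Hypothetical latencies from the trading desk to four venues.
latencies = {"BATS": 120, "NYSE": 460, "Nasdaq": 380, "ARCA": 410}
delays = thor_schedule(latencies)
print(delays)  # BATS, the nearest venue, is held back the longest

# Every component now arrives at t = 460 microseconds, leaving no
# window in which a fill at one venue can trigger front-running at
# the others.
arrivals = {v: delays[v] + latencies[v] for v in latencies}
assert len(set(arrivals.values())) == 1
```

The defense works because the front-running strategy in the first example depends entirely on the gap between the first fill and the arrival of the routed remainder; simultaneous arrival closes that gap.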

This account, if accurate, reveals a truly astonishing failure within the SEC to understand the agency’s primary mandate. If this is the state of our regulatory infrastructure then there really is little hope for reform. 

Wednesday, November 20, 2013

The Payments System and Monetary Transmission

About forty minutes into the final session of a recent research conference at the IMF, Ken Rogoff made the following remarks:
We have regulation about the government having monopoly over currency, but we allow these very close substitutes, we think it's good, but maybe... it's not so good, maybe we want to have a future where we all have an ATM at the Fed instead of intermediated through a bank... and if you want a better deal, you want more interest on your money, then you can buy what is basically a bond fund that may be very liquid, but you are not guaranteed that you're going to get paid back in full. 
This is an idea that's long overdue. Allowing individuals to hold accounts at the Fed would result in a payments system that is insulated from banking crises. It would make deposit insurance completely unnecessary, thus removing a key subsidy that makes debt financing of asset positions so appealing to banks. There would be no need to impose higher capital requirements, since a fragile capital structure would result in a deposit drain. And there would be no need to require banks to offer cash mutual funds, since the accounts at the Fed would serve precisely this purpose.

But the greatest benefit of such a policy would lie elsewhere, in providing the Fed with a vastly superior monetary transmission mechanism. In a brief comment on Macroeconomic Resilience a few months ago, I proposed that an account be created at the Fed for every individual with a social security number, including minors. Any profits accruing to the Fed as a result of its open market operations could then be used to credit these accounts instead of being transferred to the Treasury. But these credits should not be immediately available for withdrawal: they should be released in increments if and when monetary easing is called for.

The main advantage of such an approach is that it directly eases debtor balance sheets when a recession hits. It can provide a buffer to those facing financial distress, allowing payments to be made on mortgages or auto loans in the face of an unexpected loss of income. And as children transition into adulthood, they will find themselves with accumulated deposits that could be used to finance educational expenditures or a down payment on a home.

In contrast, monetary policy as currently practiced targets creditor balance sheets. Asset prices rise as interest rates are driven down. The goal is to stimulate expenditure by lowering borrowing costs, but by definition this requires individuals to take on more debt. In an over-leveraged economy struggling through a balance sheet recession, such policies can only provide temporary relief. 

No matter how monetary policy is implemented, it has distributional effects. As a result, the impact on real income growth of a given nominal target is sensitive to the monetary transmission mechanism in place. One of the things I find most puzzling and frustrating about current debates concerning monetary policy is the focus on targets rather than mechanisms. To my mind, the choice of target (whether the inflation rate, nominal income growth, or something else entirely) is of secondary importance compared to the mechanism used to attain it.

Rogoff was followed at the podium by Larry Summers, who voiced fears that we face a long period of secular stagnation. Paul Krugman has endorsed this view. I think that this fate can be avoided, but not by fiddling with inflation or nominal growth targets. The Fed is currently hobbled not by the choice of an inappropriate goal, but by the limited menu of transmission mechanisms at its disposal. If all you can do in the face of excessive indebtedness is to encourage more borrowing, swapping one target for another is not going to solve the problem. Thinking more imaginatively about mechanisms is absolutely essential, otherwise we may well be facing a lost decade of our own.

Thursday, September 26, 2013

The Romney Whale

In my last post I referenced a paper with David Rothschild that we posted earlier this month. The main goal of that work was to examine how new information is transmitted to asset prices, and to distinguish empirically between two influential theories of trading. To accomplish this we examined in close detail every transaction on Intrade over the two-week period immediately preceding the 2012 presidential election. We looked at about 84,000 transactions involving 3.5 million contracts and over 3,200 unique accounts, and in the process quickly realized that a single trader was responsible for about a third of all bets on Romney to win, and had wagered and lost close to 4 million dollars in just this one fortnight.

While this discovery was (and remains) incidental to the main message of the paper, it has attracted a great deal of media attention over the past couple of days. (About a dozen articles are linked here, and there have been a couple more published since.) Most of these reports state the basic facts and make some conjectures about motivation. The purpose of this post is to describe and interpret what we found in a bit more detail. Much of what is said here can also be found in Section 5 of the paper.

To begin with, the discovery of a large trader with significant exposure to a Romney loss was not a surprise. There was discussion of a possible "Romney Whale" in the Intrade chat rooms and elsewhere leading up to the election, as well as open recognition of the possibility of arbitrage with Betfair. On the afternoon of election day I noticed that the order book for Romney contracts was unusually asymmetric, with the number of bids far exceeding the number of offers, and posted this:

This was circulated quite widely thanks to the following response:

In a post on the following day I explained why I thought that it was an attempt at manipulation:
Could this not have been just a big bet, placed by someone optimistic about Romney's chances? I don't think so, for two reasons. First, if one wanted to bet on Romney rather than Obama, much better odds were available elsewhere, for instance on Betfair. More importantly, one would not want to leave such large orders standing at a time when new information was emerging rapidly; the risk of having the orders met by someone with superior information would be too great. Yet these orders stood for hours, and effectively placed a floor on the Romney price and a ceiling on the price for Obama.
Ron Bernstein at Intrade has explained why the disparity with Betfair is not surprising given the severe constraints faced by US residents in opening and operating accounts at the latter exchange, and the differential fee structure. Nevertheless, I still find the second explanation compelling.

The strategic manner in which these orders were placed, with large bids at steadily declining intervals, suggested to me that this was an experienced trader making efficient use of the available funds in order to have the maximum price impact. The orders lower down on the bid side of the book served as deterrents, revealing to counterparties that a sale at the best bid would not result in a price collapse. This is why I described the trader as sophisticated in my conversations with reporters at the WSJ and Politico. Characterizing his behavior as stupid presumes that this was a series of bets made in the conviction that Romney would prevail, which I doubt.

But if this was an attempt to manipulate prices, what was its purpose? We consider a couple of different possibilities in the paper. The one that has been most widely reported is that it was an attempt to boost campaign contributions, morale, and turnout. But there's another very different possibility that's worth considering.

On the afternoon of the 2004 presidential election exit polls were leaked that suggested a surprise victory by John Kerry, and the result was a sharp rise in the price of his Tradesports contract. (Tradesports was a precursor to Intrade.) This was sustained for several hours until election returns began to come in that were not consistent with the reported polls. In an interesting study published in 2007, Snowberg, Wolfers and Zitzewitz used this event to examine the effects on the S&P futures market of beliefs about the electoral outcome. They found that perceptions of a Kerry victory resulted in a decline in the price of the index, an effect they interpreted as causal. The following chart from their paper makes the point quite vividly:

Motivated in part by this finding, we wondered whether the manipulation of beliefs about the electoral outcome could have been motivated by financial gain. If Intrade could be used to manipulate beliefs in a manner that affected the value of a stock price index, or of specific securities such as health or oil and gas stocks, then a four million dollar loss could easily be compensated by a much larger gain elsewhere. Could this have provided motivation for the behavior of our large trader?

We decided that this was extremely unlikely, for two reasons. First, the 2004 analysis showed only that changes in beliefs affected the index, not that the Intrade price caused the change in beliefs. In fact, it was the leaked exit polls that affected both the Intrade price and the S&P 500 futures market. This does not mean that a change in the Intrade price, absent confirming evidence from elsewhere, could not have a causal impact on other asset prices. It's possible that it could, but not plausible in our estimation.

Furthermore, the partisan effects identified by Snowberg et al. were completely absent in 2012. In fact, if anything, the effects were reversed. S&P 500 futures reacted negatively to an increase in perceptions of a Romney victory during the first debate, and positively to the announcement of the result on election day. One possible reason is that monetary policy was expected to be tighter under a Romney administration. Here is the chart for the first debate:

The index falls as Romney's prospects appear to be rising, but the effect is clearly minor. The main point is that any attempt to use 2004 correlations to influence the S&P via changes in Intrade prices, even if they were successful in altering beliefs about the election, would have been futile or counterproductive from the perspective of financial gain.

This is why we ultimately decided that this trader's activity was simply a form of support to the campaign. While four million dollars is a fortune for most of us, it is less than the cost of making and airing a primetime commercial in a major media market. In the age of multi-billion dollar political campaigns, it really is a drop in the bucket. Even if the impact on perceptions was small, it's not clear to me that there was an alternative use of this money at this late stage that would have had a greater impact. Certainly television commercials had been largely tuned out by this point.

It's important to keep in mind that attempts at manipulation notwithstanding, real money peer-to-peer prediction markets have been very effective in forecasting outcomes over the two decades in which they have been in use. Furthermore, as I hope my paper with David demonstrates, the simplicity of the binary option contract makes the data from such markets valuable for academic research. It is true that participation in these markets is a form of gambling, but that is also the case for many short-horizon traders in markets for more traditional assets, especially options, futures and swaps. The volume of speculation in such markets exceeds the demands for hedging by an order of magnitude. From a regulatory standpoint, there is really no rational basis for treating prediction markets differently.

Sunday, September 22, 2013

Information, Beliefs, and Trading

Even the most casual observer of financial markets cannot fail to be impressed by the speed with which prices respond to new information. Markets may overreact at times but they seldom fail to react at all, and the time lag between the emergence of information and an adjustment in price is extremely short in the case of liquid securities such as common stock.

Since all price movements arise from orders placed and executed, prices can respond to news only if there exist individuals in the economy who are alert to the arrival of new information and are willing to adjust positions on this basis. But this raises the question of how such "information traders" are able to find willing counterparties. After all, who in their right mind wants to trade with an individual having superior information?

This kind of reasoning, when pushed to its logical limits, leads to some paradoxical conclusions. As shown by Aumann, two individuals who are commonly known to be rational, and who share a common prior belief about the likelihood of an event, cannot agree to disagree no matter how different their private information might be. That is, they can disagree only if this disagreement is itself not common knowledge. But the willingness of two risk-averse parties to enter opposite sides of a bet requires them to agree to disagree, and hence trade between risk-averse individuals with common priors is impossible if they are commonly known to be rational.

This may sound like an obscure and irrelevant result, since we see an enormous amount of trading in asset markets, but I find it immensely clarifying. It means that in thinking about trading we have to allow for either departures from (common knowledge of) rationality, or we have to drop the common prior hypothesis. And these two directions lead to different models of trading, with different and testable empirical predictions.

The first approach, which maintains the common prior assumption but allows for traders with information-insensitive asset demands, was developed in a hugely influential paper by Albert Kyle. Such "noise traders" need not be viewed as entirely irrational; they may simply have urgent liquidity needs that require them to enter or exit positions regardless of price. Kyle showed that the presence of such traders induces market makers operating under competitive conditions to post bid and ask prices that could be accepted by any counterparty, including information traders. From this perspective, prices come to reflect information because informed parties trade with uninformed market makers, who compensate for losses on these trades with profits made in transactions with noise traders.
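The break-even logic behind these quotes can be illustrated with a small calculation. The sketch below is closer in spirit to the Glosten-Milgrom model than to Kyle's exact setup, and all the numbers are invented for illustration: a competitive market maker sets the ask equal to the expected value of the asset conditional on receiving a buy order, knowing that some fraction of order flow is informed.

```python
def breakeven_quotes(v_high, v_low, p_informed):
    """Zero-profit quotes for a competitive market maker, assuming the
    asset's two possible values are equally likely, informed traders
    always trade in the right direction, and noise traders buy or sell
    with probability 1/2 regardless of value."""
    # Probability of seeing a buy order in each state of the world.
    p_buy_given_high = p_informed + (1 - p_informed) * 0.5
    p_buy_given_low = (1 - p_informed) * 0.5
    # Bayesian update: ask = E[value | buy order].
    p_high_given_buy = p_buy_given_high / (p_buy_given_high + p_buy_given_low)
    ask = p_high_given_buy * v_high + (1 - p_high_given_buy) * v_low
    # Sell orders are the mirror image, so bid = E[value | sell order].
    p_high_given_sell = p_buy_given_low / (p_buy_given_high + p_buy_given_low)
    bid = p_high_given_sell * v_high + (1 - p_high_given_sell) * v_low
    return bid, ask

bid, ask = breakeven_quotes(102.0, 98.0, 0.2)
print(bid, ask)  # a positive spread straddling the unconditional value
```

The spread widens as the informed share of order flow rises, which captures the essential intuition: losses to information traders are recouped through the spread paid by noise traders.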

An alternative approach, which does not require the presence of noise traders at all but drops the common prior assumption, can be traced to a wonderful (and even earlier) paper by Harrison and Kreps. Here all traders have the same information at each point in time, but disagree about its implications for the value of securities. Trade occurs as new information arrives because individuals interpret this information differently. (Formally, they have heterogeneous priors and can therefore disagree even if their posterior beliefs are commonly known.) From this perspective prices respond to news because of heterogeneous interpretations of public information.

Since these two approaches imply very different distributions of trading strategies, they are empirically distinguishable in principle. But identifying strategies from a sequence of trades is not an easy task. At a minimum, one needs transaction level data in which each trade is linked to a buyer and seller account, so that the evolution of individual portfolios can be tracked over time. From these portfolio adjustments one might hope to deduce the distribution of strategies in the trading population.

In a paper that I have discussed previously on this blog, Kirilenko, Kyle, Samadi and Tuzun have used transaction level data from the S&P 500 E-Mini futures market to partition accounts into a small set of groups, thus mapping out an "ecosystem" in which different categories of traders "occupy quite distinct, albeit overlapping, positions." Their concern was primarily with the behavior of high frequency traders both before and during the flash crash of May 6, 2010, especially in relation to liquidity provision. They do not explore the question of how prices come to reflect information, but in principle their data would allow them to do so.

I have recently posted the first draft of a paper, written jointly with David Rothschild, that looks at transaction level data from a very different source: Intrade's prediction market for the 2012 US presidential election. Anyone who followed this market over the course of the election cycle will know that prices were highly responsive to information, adjusting almost instantaneously to news. Our main goal in the paper was to map out an ecology of trading strategies and thereby gain some understanding of the process by means of which information comes to be reflected in prices. (We also wanted to evaluate claims made at the time of the election that a large trader was attempting to manipulate prices, but that's a topic for another post.)

The data are extremely rich: for each transaction over the two week period immediately preceding the election, we know the price, quantity, time of trade, and aggressor side. Most importantly, we have unique identifiers for the buyer and seller accounts, which allow us to trace the evolution of trader portfolios and profits. No identities can be deduced from these data, but it is possible to make inferences about strategies from the pattern of trades.

We focus on contracts referencing the two major party candidates, Obama and Romney. These contracts are structured as binary options, paying $10 if the referenced candidate wins the election and nothing otherwise. The data allow us to compute volume, transactions, aggression, holding duration, directional exposure, margin, and profit for each account. Using this, we are able to group traders into five categories, each associated with a distinct trading strategy.
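The binary payoff structure makes account-level accounting straightforward. Here is a sketch of how position and profit might be computed from a list of trades; the field layout and numbers are my own illustration, not the actual dataset's format:

```python
def account_summary(trades, winner_pays=10.0):
    """Compute net position and a settlement function for one account,
    from a list of (side, quantity, price) trades in a single binary
    contract. side is +1 for a buy, -1 for a sell; the contract pays
    `winner_pays` if the referenced candidate wins, else nothing."""
    position = 0
    cash = 0.0
    for side, qty, price in trades:
        position += side * qty
        cash -= side * qty * price  # buys spend cash, sells raise it
    def settle(candidate_wins):
        payoff = winner_pays if candidate_wins else 0.0
        return cash + position * payoff
    return position, settle

# A trader who buys 100 Obama contracts at $6.50, then sells 40 at $7.20:
pos, settle = account_summary([(+1, 100, 6.50), (-1, 40, 7.20)])
print(pos)           # net long position in contracts
print(settle(True))  # profit if Obama wins
print(settle(False)) # loss if Obama loses
```

Metrics such as directional exposure or holding duration would be built on the same trade-by-trade reconstruction of each account's position.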

During our observational window there were about 84,000 separate transactions involving 3.5 million contracts and over 3,200 unique accounts. The single largest trader accumulated a net long Romney position of 1.2 million contracts (in part by shorting Obama contracts) and did this by engaging in about 13,000 distinct trades for a total loss in two weeks of about 4 million dollars. But this was not the most frequent trader: a different account was responsible for almost 34,000 transactions, which were clearly implemented algorithmically.

One of our most striking findings is that 86% of traders, accounting for 52% of volume, never change the direction of their exposure even once. A further 25% of volume comes from 8% of traders who are strongly biased in one direction or the other. A handful of arbitrageurs account for another 14% of volume, leaving just 6% of accounts and 8% of volume associated with individuals who are unbiased in the sense that they are willing to take directional positions on either side of the market. This suggests to us that information finds its way into prices largely through the activities of traders who are biased in one direction or another, and differ with respect to their interpretations of public information rather than their differential access to private information.

Prediction markets have historically generated forecasts that compete very effectively with those of the best pollsters.  But if most traders never change the direction of their exposure, how does information come to be reflected in prices? We argue that this occurs through something resembling the following process. Imagine a population of traders partitioned into two groups, one of which is predisposed to believe in an Obama victory while the other is predisposed to believe the opposite. Suppose that the first group has a net long position in the Obama contract while the second is short, and news arrives that suggests a decline in Obama's odds of victory (think of the first debate). Both groups revise their beliefs in response to the new information, but to different degrees. The latter group considers the news to be seriously damaging while the former thinks it isn't quite so bad. Initially both groups wish to sell, so the price drops quickly with very little trade since there are few buyers. But once the price falls far enough, the former group is now willing to buy, thus expanding their long position, while the latter group increases their short exposure. The result is that one group of traders ends up as net buyers of the Obama contract even when the news is bad for the incumbent, while the other ends up increasing short exposure even when the news is good. Prices respond to information, and move in the manner that one would predict, without any individual trader switching direction.
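The process described above can be mimicked in a toy model. Suppose each trader values the contract at their subjective probability times the $10 payoff, and the market price settles between the two groups' valuations. All of the specific numbers below are invented for illustration:

```python
PAYOFF = 10.0  # the contract pays $10 on an Obama win

def reservation_price(belief):
    """A risk-neutral trader values the contract at belief * payoff."""
    return belief * PAYOFF

# Beliefs before and after bad news for Obama (think of the first
# debate). Optimists mark the news down a little, pessimists a lot.
optimists = {"before": 0.70, "after": 0.64}   # net long going in
pessimists = {"before": 0.62, "after": 0.50}  # net short going in

# The price falls with the news, settling between the two valuations.
price_before = 6.60
price_after = 5.70

def action(belief, price):
    return "buy" if reservation_price(belief) > price else "sell"

# After the news, both the price and all beliefs have fallen, yet the
# optimists buy (adding to longs) and the pessimists sell (adding to
# shorts): no trader ever switches the direction of exposure.
print(action(optimists["after"], price_after))   # buy
print(action(pessimists["after"], price_after))  # sell
```

The price responds to the news in the expected direction, yet every trade after the announcement reinforces each group's existing position, which is exactly the pattern of persistent directional exposure found in the data.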

This is a very special market, to be sure, more closely related to sports betting than to stock trading. But it does not seem implausible to us that similar patterns of directional exposure may also be found in more traditional and economically important asset markets. Especially in the case of consumer durables, attachment to products and the companies that make them is widespread. It would not be surprising if one were to find Apple or Samsung partisans among investors, just as one finds them among consumers. In this case one would expect to find a set of traders who increase their long positions in Apple even in the face of bad news for the company because they believe that the price has declined more than is warranted by the news. Whether or not such patterns exist is an empirical question that can only be settled with a transaction level analysis of trading data.

If there's a message in all this, it is that markets aggregate not just information, but also fundamentally irreconcilable perspectives. Prices, as John Kay puts it, "are the product of a clash between competing narratives about the world." Some of the volatility that one observes in asset markets arises from changes in perspectives, which can happen independently of the arrival of information. This is why substantial "corrections" can occur even in the absence of significant news, and why stock prices appear to "move too much to be justified by subsequent changes in dividends." What makes markets appear invincible is not the perfect aggregation of information that is sometimes attributed to them, but the sheer unpredictability of persuasion, exhortation, and social influence that can give rise to major shifts in the distribution of narratives.