Over the last 20 years, technological innovations have dramatically increased the speed of trading in global financial markets. While buyers and sellers of securities historically met on physical trading floors, the vast majority of stock trading now occurs electronically. The time it takes for messages (e.g., buy or sell orders) to travel within and between markets is now measured in microseconds and is limited, for the most part, only by the speed of light. Although most would agree that extremely long delays in markets are undesirable, it is unclear whether ultra-fast markets are necessary; perhaps they are even harmful.
Empirical evidence suggests that many technological innovations have made financial markets work better. For example, market quality appears to have generally improved with the adoption of electronic trading environments. The case is less clear for more recent developments, where individual market participants are willing to spend significant sums in order to obtain tiny speed advantages vis-à-vis all other traders. Clearly, each individual market participant has strong incentives to be the fastest trader, since the fastest trader will be the first (and probably the only one) to benefit from very short-term profit opportunities. From an overall welfare perspective, however, it is debatable whether more 'efficient' prices (i.e., the absence of such profit opportunities) at ultra-high frequencies are of any relevance in the first place. In addition, in modern markets more than half of all trading occurs between ultra-fast automated trading algorithms, with potentially devastating consequences for market stability. One prominent example is the so-called 'flash crash' of May 6, 2010, during which the Dow Jones Index fell by almost 10% within minutes. While the actual causes of this crash are still under debate, automated trading algorithms appear to have at least exacerbated the problem, as these algorithms are often highly correlated.
Many have argued that alternative market structures might mitigate the 'arms race' for speed. The argument is that, rather than operating markets continuously, they should be 'pulsed' via periodic batch auctions. In this setting, buy and sell orders are allowed to arrive within a certain time interval, at the end of which they are matched to clear the market. This involves executing as many buy and sell orders as possible at the same transaction price. Interestingly, however, very little is known about what the 'optimal' time between such market clearings should be.
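The clearing rule described above (a single uniform price at which executed volume is maximized) can be sketched in a few lines. This is a minimal illustration, not the mechanism of any particular exchange; the `Order` type and the tie-breaking (first price found with maximal volume) are our own simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class Order:
    price: float  # limit price
    qty: int      # number of shares

def clear_batch(buys, sells):
    """Find the uniform price that maximizes executed volume.

    At a candidate price p, demand is the total quantity of buy orders
    willing to pay at least p, and supply is the total quantity of sell
    orders willing to accept at most p; executed volume is the smaller
    of the two. The clearing price is the candidate with maximal volume.
    """
    candidates = sorted({o.price for o in buys + sells})
    best_price, best_volume = None, 0
    for p in candidates:
        demand = sum(o.qty for o in buys if o.price >= p)
        supply = sum(o.qty for o in sells if o.price <= p)
        volume = min(demand, supply)
        if volume > best_volume:
            best_price, best_volume = p, volume
    return best_price, best_volume

# Example batch: three buyers and three sellers.
buys = [Order(10.02, 100), Order(10.01, 200), Order(10.00, 300)]
sells = [Order(9.99, 150), Order(10.00, 200), Order(10.02, 400)]
price, volume = clear_batch(buys, sells)  # price 10.00, 350 shares trade
```

In the example, at a price of 10.00 all 600 shares of demand meet 350 shares of supply, so 350 shares execute, more than at any other candidate price.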
In our model, market quality depends on the time between market clearings, and we generally find that markets should be neither too fast nor too slow. Why? If markets are too fast, few investors interact with each other in any clearing and transaction prices will not coincide with the actual value of the security. On the other hand, if markets are too slow, investors will have to wait very long until their orders are executed, and the actual value of the security will shift substantially between subsequent clearings. We show that the largest U.S. stocks should ideally trade at intervals of 1 to 3 seconds. This suggests that markets have already become too fast; otherwise, the optimal market speed would be closer to millisecond or microsecond frequencies.
Our findings should not necessarily be understood as a call for implementing periodic batch auctions. In fact, modern financial markets are highly fragmented (a single security is often traded simultaneously on many venues), and synchronizing the market clearings of all venues is likely to be a very complicated task in itself. Rather, we interpret our estimates as the optimal delay that investors should place on their orders, i.e., the time an investor should allow other investors to interact with her order. Today's continuous markets allow investors to delay execution with passive order types, so that the benefits of slower trading do not require market structure changes.