Judging by the sheer number of choices, we are a decade into a bull market for electronic trading algorithms. Market fragmentation and technology improvements have spurred the development of hundreds of new algos, to the point where a trader today can choose from more than 1,600 different algo strategies on their execution management system. Despite this explosion of choice, traders still find the products lacking.
According to a January survey by Greenwich Associates, only 7% of buyside traders are completely satisfied with the off-the-shelf algos marketed by their brokers, with most preferring customized offerings. This points to the need for a fresh look at how algos are developed, deployed and modified. In short, a reinvention of algorithms. There are three key areas to address in a reinvention of algos: simplification, measurement and context.
Simpler is better
With more than 1,600 algos on the market, there are too many products chasing similar goals, including a lot of “me-too” algos whose different parameters add confusion to daily workflows. The typical buyside trader employs three to five discrete trading strategies based on urgency and liquidity needs, and algo providers should work on meeting those needs through enhancements to a core set of algos, not by creating new ones that are merely variations on a theme.
Measure what matters
As the algo marketplace has become more fragmented, algos have become more difficult to measure. This has pushed transaction cost analysis (TCA) downstream toward venue analysis, with TCA providers and brokers clustering around metrics such as reversion, fill rate and latency. The attraction of these downstream measures is clear: there are fewer venues (though still many!) than algos, and the sample sizes are much larger. But this is a compromise. Venue analysis has statistical significance without much economic significance for your performance. For most institutional traders, market impact and information leakage are the main drivers of performance, and venue analysis has little to say about either. Measuring information leakage ranks highly among desired algo improvements, with 73% of buyside traders surveyed by ITG in early March citing it as a priority.
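To make the contrast concrete, here is a minimal sketch of the two kinds of measurement the paragraph above compares: a venue-level metric (post-fill reversion on a single child fill) versus a parent-order metric (implementation shortfall against the decision price). These are generic textbook formulas, not ITG's methodology, and the prices and fills are made-up sample data.

```python
# Illustrative sketch only: generic TCA formulas, not any vendor's methodology.

def implementation_shortfall_bps(side, decision_price, fills):
    """Parent-order cost versus the price at the decision to trade, in bps.

    side: +1 for a buy, -1 for a sell.
    fills: list of (price, shares) executions for the whole parent order.
    """
    shares = sum(qty for _, qty in fills)
    avg_px = sum(px * qty for px, qty in fills) / shares
    # A buy filled above the decision price (or a sell below it) is a cost.
    return side * (avg_px - decision_price) / decision_price * 1e4

def reversion_bps(side, fill_price, price_after):
    """Post-fill reversion for a single child fill at one venue, in bps.

    Positive means the price moved back after the fill (e.g. dropped after
    a buy), a common proxy for adverse selection at that venue.
    """
    return side * (fill_price - price_after) / fill_price * 1e4

# Parent buy order: decision price 20.00, filled in three child orders.
fills = [(20.01, 300), (20.02, 500), (20.04, 200)]
print(f"shortfall: {implementation_shortfall_bps(+1, 20.00, fills):.1f} bps")

# One child fill at 20.02; the mid a moment later is back at 20.01.
print(f"reversion: {reversion_bps(+1, 20.02, 20.01):.1f} bps")
```

The shortfall number aggregates over one parent order per decision, so samples accumulate slowly; reversion is computed per child fill, which is why venue-level statistics reach significance so much faster, even as they say little about the parent order's total impact.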
Context is everything
Traders often express frustration about algo behavior that strikes them as unnatural or unintuitive. Some of this frustration stems from the “black box” nature of many algos, which offer little visibility into how the algo is behaving. Transparency is especially important because most traders don’t believe that algos understand the full picture and motivation of an order. With some exceptions (including ITG’s Algo Prism real-time monitoring tool), traders do not have a clear view of what an algo will do with their orders. The frustration shows up in cancellation rates: traders cancel 33% of orders sent to algos versus just 6% of those sent to human (aka high-touch) traders. The solution lies in full transparency, from order plan to limit order placement, and in smarter models built with machine learning. We are leveraging the recent surge in investment in this field to create adaptive models that know more about an order, and a client, than ever before. Drawing on a wealth of historical trading data, we can build algos that adapt to an individual trader’s style and preferences, and feel more “natural” over time.
It is not necessary to start from scratch in this reinvention; the sell-side has collectively spent hundreds of millions of dollars over the past decade improving algo technology, reducing latency and working to narrow the competitive gap between their clients and high-frequency traders. What is needed now is a more focused approach: improvements at the strategy level that result in lower implementation shortfall costs. The end result will hopefully be a marketplace with far fewer algos, improved overall investment performance – and much higher buyside approval ratings.