With the rise of global unbundling, traders’ best-execution workload will increase dramatically, as they must develop a best-ex commission allocation scheme for all counterparties. Given the number of brokers and the noise inherent in trading, this is not just a practical concern but an econometric one. The problem isn’t new, but it will become more common. Some of ITG’s clients, notably quantitative firms, already trade exclusively for best-ex and have a long history of performing TCA and judging algorithmic trading experiments. This document describes some of the practices we have observed among those firms and offers insights into how traders at non-quantitative funds can apply them.
Ideally, brokers would offer algorithms that deliver the lowest cost they can achieve for a given level of risk—faster algorithms being costlier with lower risk on average, and slower algorithms the converse. The industry didn’t develop this way intentionally, but competitive forces have driven the market for commercial algorithms toward a few entry points that are fairly similar across brokers and roughly trace out a sort of efficient frontier.
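The selection logic implied by such a frontier can be sketched as follows. This is a minimal illustration, not a broker model: the strategy names and the risk/cost figures (in basis points) are hypothetical placeholders.

```python
# Hypothetical (risk, cost) estimates per strategy, in bps of order value.
# Faster strategies: higher expected cost, lower timing risk; slower: the converse.
strategies = {
    "aggressive": {"risk_bps": 10, "cost_bps": 25},
    "vwap":       {"risk_bps": 30, "cost_bps": 12},
    "passive":    {"risk_bps": 60, "cost_bps": 5},
}

def cheapest_within_risk(strategies, max_risk_bps):
    """Pick the lowest expected-cost strategy whose timing risk fits the budget."""
    eligible = {k: v for k, v in strategies.items()
                if v["risk_bps"] <= max_risk_bps}
    if not eligible:
        # No strategy fits the budget: fall back to the lowest-risk option.
        return min(strategies, key=lambda k: strategies[k]["risk_bps"])
    return min(eligible, key=lambda k: eligible[k]["cost_bps"])

print(cheapest_within_risk(strategies, 35))  # vwap
print(cheapest_within_risk(strategies, 5))   # aggressive (fallback)
```

In practice the frontier is estimated from TCA data rather than hard-coded, but the decision rule—minimize expected cost subject to a risk budget—has this shape.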
One of the biggest challenges for an institutional trading desk is estimating the risk tolerance for a given order or program. The most obvious and sought-after inputs to risk tolerance are market direction and portfolio manager alpha, but these are difficult to predict. While it does make sense to pursue a better understanding of directional measures, we think it is also essential to understand the humbler, but more predictable, element of risk: frequency. The more frequently you implement a type of trade, the less each individual order or day contributes to the final performance.
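The frequency effect is just the square-root law of averaging: noise in any single execution washes out as the number of comparable orders grows. A minimal simulation makes this concrete; the mean cost and noise levels (in bps) are invented for illustration.

```python
import random
import statistics

random.seed(7)

def realized_avg_cost(n_orders, mean_bps=8.0, noise_bps=40.0):
    """Average implementation shortfall over n_orders noisy executions (bps)."""
    return statistics.fmean(random.gauss(mean_bps, noise_bps)
                            for _ in range(n_orders))

# Dispersion of the program-level outcome across 2000 hypothetical programs:
for n in (1, 25, 625):
    outcomes = [realized_avg_cost(n) for _ in range(2000)]
    print(f"{n:>4} orders -> outcome stdev {statistics.stdev(outcomes):5.1f} bps")
# Each 25x increase in order count cuts the dispersion by roughly 5x.
```

A trade type implemented hundreds of times a year can therefore tolerate far more per-order risk than a one-off block, because its realized average cost is much closer to its expectation.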
This sounds simple and obvious, but it can be difficult to track the motivation of each trade from PM to desk to broker. This tracking is a technology investment worth making.
Once the risk of an order or program is well understood, it’s time to select a strategy. This is an area where the skill and experience of a trader are key. It may seem that you should simply pick the strategy on the efficient frontier that matches your risk tolerance, but there is another dimension to consider: current market conditions. Volume, spread and volatility vary substantially from day to day and trade to trade. This third dimension can make the shape of the efficient frontier for a given trade very different from the historical expectation. For example, in a high-volume, low-volatility environment, it might be very cheap to reduce market risk substantially by trading faster.
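One simple way to capture the conditioning on market conditions is to scale a baseline frontier by today’s volatility and volume relative to their historical averages. The scalings below are illustrative assumptions, not a calibrated impact model: timing risk is taken to grow with volatility, and impact cost to shrink when liquidity is plentiful.

```python
def adjusted_frontier(base, vol_ratio, volume_ratio):
    """Scale a baseline (risk, cost) frontier for today's conditions.

    vol_ratio    : today's volatility / historical average
    volume_ratio : today's volume / historical average
    Illustrative scalings only.
    """
    return {
        name: {
            "risk_bps": est["risk_bps"] * vol_ratio,
            "cost_bps": est["cost_bps"] / volume_ratio,
        }
        for name, est in base.items()
    }

base = {
    "aggressive": {"risk_bps": 10, "cost_bps": 25},
    "passive":    {"risk_bps": 60, "cost_bps": 5},
}

# High-volume, low-volatility day: trading faster is unusually cheap.
today = adjusted_frontier(base, vol_ratio=0.5, volume_ratio=2.0)
print(today["aggressive"])  # {'risk_bps': 5.0, 'cost_bps': 12.5}
```

On such a day the aggressive strategy’s cost premium over the passive one has halved while its risk advantage persists, which is exactly the situation where a trader should consider moving up the frontier.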
In general, the efficient frontier for strategy selection can be improved by using personalized data rather than a peer data set that may blend together flows with widely varying motivations. The downside of using only personalized flow is that you may not have a sufficient sample, so a good system will bootstrap with peer TCA data where necessary.
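The bootstrapping idea can be expressed as a simple shrinkage estimate: trust your own flow in proportion to how much of it you have, and fall back on the peer benchmark otherwise. The `shrink_n` pseudo-count below is a hypothetical tuning parameter, not a prescribed value.

```python
def blended_cost_estimate(own_mean, own_n, peer_mean, shrink_n=50):
    """Shrink a personalized cost estimate toward the peer TCA benchmark.

    own_mean  : average cost (bps) from your own comparable orders
    own_n     : how many such orders you have
    peer_mean : peer-universe TCA estimate for the same trade profile
    shrink_n  : pseudo-count governing how fast we trust our own data
                (hypothetical tuning parameter)
    """
    w = own_n / (own_n + shrink_n)
    return w * own_mean + (1 - w) * peer_mean

# Thin personal sample: the estimate stays close to the peer number.
print(blended_cost_estimate(own_mean=15.0, own_n=10, peer_mean=9.0))   # 10.0
# Rich personal sample: the estimate is dominated by your own flow.
print(blended_cost_estimate(own_mean=15.0, own_n=500, peer_mean=9.0))  # ~14.5
```

As the desk accumulates history for a given trade type, the weight shifts automatically from the peer data set to the personalized one, which is the behavior a good TCA system should exhibit.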