Impressive results.
This'd only work in the nutty world of the S&P where there are _very_ few meaningful downtrends.
1. What is QPI?
2. Can you give a rough idea of the value of the drop threshold?
(At 3-5k trades/yr it is likely <5%; either way, perhaps a window of acceptable drops could weed out some large drops that will not recover.)
3. You should also try making the distribution of returns relative to the centroid (the S&P index).
4. Since you hinted at it, how would you modify this for shorter time frames?
Thanks, Tim!
QPI (defined in the original article) is a volatility-adjusted measure of how rare a stock’s 3-day move is relative to its own 5-year history. You take all 3-day returns from the past 5 years, locate today’s move in that distribution, and rescale it so that 0 means an extremely rare tail event and 100 means a very common one.
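Roughly, in code, something like the sketch below (the function name, the overlapping-window choice and the exact rescaling are just illustrative here; the original article has the precise definition):

```python
import pandas as pd

def qpi_score(prices: pd.Series, horizon: int = 3, lookback_years: int = 5) -> float:
    """Sketch of a QPI-style score: where does today's `horizon`-day return
    sit in the stock's own trailing distribution, rescaled so that
    0 ~ extremely rare tail move and 100 ~ very common move."""
    rets = prices.pct_change(horizon).dropna()
    window = rets.iloc[-252 * lookback_years:]      # ~5 years of (overlapping) 3-day returns
    today, history = window.iloc[-1], window.iloc[:-1]
    pct_rank = (history < today).mean() * 100       # percentile of today's move in its own history
    # Fold both tails to low scores: 0 = extreme tail, 100 = middle of the distribution.
    return 100.0 - 2.0 * abs(pct_rank - 50.0)
```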
I don’t use fixed %-drop thresholds because they ignore regime and volatility: a -5% move in Stock A during a calm period is completely different from a -5% move in Stock B during a high-vol regime. QPI solves this by anchoring everything to each stock’s own distribution.
And yes, computing QPI on residual returns (relative to the S&P 500 or sector) is a natural extension, since it isolates true idiosyncratic dislocations rather than broad market moves. Thanks for the suggestion! I'll definitely try it!
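Something along these lines is what I have in mind for the residual variant (a rough sketch only; the rolling-beta hedge and the 252-day window are illustrative choices, not a final design):

```python
import pandas as pd

def residual_returns(stock_rets: pd.Series, bench_rets: pd.Series,
                     beta_window: int = 252) -> pd.Series:
    """Strip the market component so the QPI-style score is computed on
    idiosyncratic moves only. A rolling-beta hedge against the benchmark is
    the simplest choice; a sector or factor model would be the next step."""
    beta = stock_rets.rolling(beta_window).cov(bench_rets) / bench_rets.rolling(beta_window).var()
    return stock_rets - beta * bench_rets
```

The residual series would then go through the same percentile machinery as the raw returns.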
In the next few articles, I will extend/improve on this idea... not necessarily shorter timeframes, but more/different universes, more data (that helps us differentiate fundamental price drops from transient ones), and overlaying a risk model :)
Thanks! I will find the original article.
About the "residual returns" idea, even if you find no improvement (and very likely, far fewer trades), you shouldn't throw it away immediately; it would be worth comparing the two schemes in non-US markets.
In my experience, most academic papers don't hold up in practice. However, they give insight into what shops are very likely investigating. Thanks for sharing!
That's true... thank you!
I strongly concur with your comment about "...far from published results..."; it's a major source of wasted time!
Totally! In the past 2 months, I implemented 3 different papers... the results were so off, I couldn't even find a way to transform the work into posts to share here!
Impressed by your approach, definitely gonna apply it myself!
Solid post as always!
Have you also estimated the performance after adding slippage to the trading costs? Since the number of trades has increased significantly compared to the previous strategy version, this should have a noticeable impact on the final performance. Assuming a 10m account, a single trade of 1m would imply an additional 2-6 bps of slippage, depending on ADV, even when trading only mega/large-cap S&P names. 2 bps of round-trip costs is a really generous assumption.
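To make the arithmetic concrete, a quick sketch (the trade count and bps figures below are purely illustrative, not estimates of your actual fills):

```python
def annual_cost_drag_bps(trades_per_year: int, cost_bps_per_trade: float,
                         trade_size: float, account_size: float) -> float:
    """Approximate annual drag on the account, in bps, from per-trade
    round-trip costs. Purely illustrative arithmetic."""
    dollar_cost_per_trade = trade_size * cost_bps_per_trade / 1e4
    return dollar_cost_per_trade * trades_per_year / account_size * 1e4

# e.g. 2 bps commissions + 4 bps slippage, 1m per trade, 10m account:
# at 1,000 round trips a year that is already ~600 bps of annual drag.
print(annual_cost_drag_bps(1_000, 2 + 4, 1_000_000, 10_000_000))  # 600.0
```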
I looked at your first article. Apologies for further pestering you on details, but these are important for requisite pre-judging (not to mention the "thousand" cuts you mentioned):
1. Are the 3-day intervals independent or overlapping?
2. (I likely know the answer, but just to be explicitly clear:) To create the distribution, did you strictly use the history available at the time, as opposed to (for example) a distribution of returns that looks ahead?
3. I'm not sure how this might apply exactly, since it depends on your symbol selection criteria, but:
When going back ~20 years, one should be very wary of survivorship bias. In other words, many symbols that were once in the S&P 500 have since been dropped. One must be sure to include the dropped names for the part of the walk-forward window in which they were members, and vice versa for the added names: not use them in time windows prior to their inclusion.
Obvious part: if a symbol was dropped, it may well be one that went bankrupt, and you would not have included the losses in your test.
Less obvious part: merely being a member of the S&P 500 should be considered a major driver of price action, especially in the age of passive index "investing" (aka dumb-money dumping).
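For instance, a point-in-time membership filter along these lines is what I mean (the constituent-table layout here is just an assumption):

```python
import pandas as pd

def point_in_time_members(constituents: pd.DataFrame, as_of: pd.Timestamp) -> list:
    """Symbols that were actually in the index on `as_of`.
    Assumes a table with columns: symbol, added, removed
    (removed is NaT while the name is still a member).
    Dropped names stay in the universe for the dates they were members,
    so bankruptcies count; later additions are excluded before their add date."""
    added = pd.to_datetime(constituents["added"])
    removed = pd.to_datetime(constituents["removed"])
    in_index = (added <= as_of) & (removed.isna() | (removed > as_of))
    return constituents.loc[in_index, "symbol"].tolist()
```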
Hi Tim!
1) Using both produces the same results.
2) It obviously only looks backward; looking ahead would be a huge rookie mistake.
3) We also obviously took care of survivorship bias: all delisted index members are included... missing them would also be a huge rookie mistake :)