14 Comments
Rob:

Excellent work, Quantitativo. Your analysis is consistently sharp and engaging, and this strategy shows real promise!

ethan:

Cool! It seems the hardest part would be the data handling at first.

Quantitativo:

it always is :)

ethan:

Also, it would be hard to trade such a large, equally weighted portfolio in real life like the one you backtested.

The Robust Quant:

That’s really interesting! Do you guys usually discuss and try to implement this paper and others like it in your group?

Quantitativo:

Yes! That is THE PURPOSE of the community :)

Tony:

Thank you, Carlos, for sharing an empirical gem! This seems to open a new and fascinating area in the quant trading field.

I am no expert in long/short systems, but I would still like to ask whether the trading costs (commissions, slippage, short borrow costs) would eat up much of the overall returns under a harsher trading-environment simulation? Thanks.

Quantitativo:

Thanks! You’re absolutely right: trading costs can quickly erode performance, especially in fast-rebalancing strategies.

In my tests, I already include commissions. Slippage can be largely ignored since trades are executed with MOC orders, and short borrow costs are negligible given the 24-hour holding period.

That said, if you raise commissions to typical retail levels, a daily-rebalanced strategy likely won’t hold up. You’d need to trade less frequently — say weekly or monthly — to keep it viable.
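A back-of-the-envelope sketch of the point above: commission drag scales roughly linearly with rebalancing frequency, so dropping from daily to weekly or monthly rebalancing cuts the annual cost by the same factor. The per-rebalance cost below is a hypothetical round number, not a figure from the backtest.

```python
# Hypothetical commission-drag estimate vs. rebalancing frequency.
# The 5 bps per full-portfolio rebalance is an illustrative assumption.

def annual_commission_drag(cost_per_rebalance: float, rebalances_per_year: int) -> float:
    """Fraction of gross annual return given up to commissions."""
    return cost_per_rebalance * rebalances_per_year

cost = 0.0005  # 5 bps per rebalance (assumed)
daily = annual_commission_drag(cost, 252)    # 0.126 -> 12.6% per year
weekly = annual_commission_drag(cost, 52)    # 0.026 -> 2.6% per year
monthly = annual_commission_drag(cost, 12)   # 0.006 -> 0.6% per year
```

At typical retail commission levels the daily figure alone can exceed the strategy's gross edge, which is why less frequent rebalancing is the usual fix.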

Timo:

This is really interesting! Thank you.

Peter Cosyn:

Great article. The method gives quants focusing on mid and large caps new strategies to codify smart money behavior. I assume that for small and micro caps (my focus) the data is too sparse to be useful?

suman suhag:

There are several advantages to using word embeddings instead of character embeddings when training a deep neural network. First, word embeddings provide a higher level of abstraction than character embeddings. This allows the network to learn the relationships between words, rather than the individual characters that make up those words. This can lead to improved performance on tasks such as language modeling and machine translation.

Second, word embeddings are typically much smaller than character embeddings. This is because each word is represented by a single vector, rather than a vector for each character in the word. This can make training faster and more efficient.

Third, word embeddings are already available for many languages, which can save time when training a new model.

There are also some disadvantages to using word embeddings. One is that they can be less accurate than character embeddings, especially for rare words. Another is that they can be less effective for tasks that require understanding of the syntactic structure of a sentence, such as parsing.
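The sequence-length trade-off described above can be made concrete with a tiny sketch: a sentence becomes one vector per word at the word level, but one vector per character at the character level. The embedding dimensions below are illustrative assumptions, not taken from any specific model.

```python
# Word-level vs. character-level representation of one sentence.
# Dimensions (300 and 50) are common choices, assumed for illustration.

word_dim, char_dim = 300, 50

sentence = "smart money moves markets"
words = sentence.split()                 # 4 word tokens
chars = [c for c in sentence if c != " "]  # 22 character tokens

word_repr_size = len(words) * word_dim   # 4 * 300 = 1200 floats
char_repr_size = len(chars) * char_dim   # 22 * 50 = 1100 floats
```

The word-level sequence is much shorter (4 vs. 22 tokens), which is what makes training faster per sentence, even though the word-level lookup table itself is far larger than the character-level one.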

JamesZ:

Very nice explanation of the paper and hands-on replication work! One question I have: if an institutional portfolio manager runs a long/short equity portfolio, the pairs of stocks held long and short would be closely correlated but with opposite-sign positions. Wouldn't the ordering of his position sizes then fail to properly reflect the stocks' adjacency to each other?

John Quick:

Now do non-cherry-picked years for robustness, or nah?

Quantitativo:

Hi John! Thank you for your question! I tested only on the years reported because that’s ALL the data I have ;)
