Quant Finance (Machine Learning Trading)

Price is what you pay. Value is what you get. 

Warren Buffett

Quant finance is one of the most challenging areas in which to apply machine learning, and over time the demand for it has exploded. There has been a constant push to democratise the space, with several decentralised quant platforms now in the market.

I first explored the space through WorldQuant's alpha research program, which opened the firm's infrastructure to college students with the idea of making quant research accessible. As I spent more time in this space, I learned a thing or two about applying machine learning in a practical fashion.

To get a good overall picture, I collected and collated feedback and inputs from multiple sources: Reddit, personal bookmarks, books and plain old Google searches.

What follows is a basic compendium of the major resources I found. I will keep updating it as I go.

Courses

This provides a nice little refresher on the possibilities of using ML in quant finance, and it uses Python (Pandas) for data processing, which makes for a neat introduction.

Books 

Below is a list of books that give a good peek into the world of quantitative finance. The first one is a nice hands-on book that lets you build on the Udacity course.

Blogs/ Websites

These are some blogs with interesting commentary; a couple of them are by the same authors as the books above.

Notes

Hard technical topics are best learned by taking notes. Beyond the resources already available, notes help you retain material and recall it fast.

Competitions/ Platforms

  • Numerai: I competed on Numerai back in college and even received some fractional bitcoin as part of the reward. Since then, they have gone on to create their own cryptocurrency, along with several changes to the overall platform and incentive structure. Think ML for finance, Kaggle style.
  • Quantopian: Another popular platform, more traditional in nature and with a simpler scheme. It had plenty of data sources, a nice research environment and a thriving forum community. Unfortunately, it shut down in 2020.
  • Websim: Another simulation platform, by WorldQuant. It had a nice payout scheme if you did well, with revenue sharing and fixed stipends. This too has since been discontinued.

Trading Systems

The entire trade cycle can be divided into four parts, corresponding to the different aspects of researching, building, deploying and then evaluating an alpha. Each part of the process provides ample opportunities to apply data science and machine learning.

Data & Pre-Processing

The edge here comes from the size, latency and novelty of the data being used. Hence, besides price and order book data, alternative data is gaining favour.

  • Gaining access to quality data is the biggest barrier to entry.
    • Costs are extremely prohibitive.
    • Openly available data is ubiquitous and has low signal power.
  • Price data is non-stationary, non-IID and non-normal, which violates the assumptions of several machine learning algorithms.
  • Instead of sampling data at fixed time intervals, we can sample it at fixed volume intervals, called volume bars (see the sketch below). This has a dual advantage:
    • The resulting returns have better statistical properties (closer to IID and Gaussian).
    • It also captures the volume aspect of information: we sample more often during periods of higher activity and so capture more information.
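As a rough illustration, here is a minimal pandas sketch of building volume bars from tick data. The column names ('price', 'volume') and the bar size are assumptions for the example, not a fixed schema.

```python
import pandas as pd

def volume_bars(ticks: pd.DataFrame, bar_volume: float) -> pd.DataFrame:
    """Aggregate tick data into bars of (roughly) equal traded volume.

    Assumes `ticks` has 'price' and 'volume' columns; a tick that
    straddles a bar boundary is assigned wholly to the next bar.
    """
    # Bar id = how many multiples of bar_volume have traded so far.
    bar_id = (ticks["volume"].cumsum() // bar_volume).astype(int)
    grouped = ticks.groupby(bar_id)
    return pd.DataFrame({
        "open": grouped["price"].first(),
        "high": grouped["price"].max(),
        "low": grouped["price"].min(),
        "close": grouped["price"].last(),
        "volume": grouped["volume"].sum(),
    })

# Synthetic ticks, purely illustrative.
ticks = pd.DataFrame({
    "price": [100.0, 100.1, 100.05, 99.9, 100.2],
    "volume": [300, 500, 200, 800, 400],
})
print(volume_bars(ticks, bar_volume=1000))
```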

Signal Generation & Processing

This is the area where the majority of ML applications are being explored. Everything around converting data sets into useful signals falls here.

  • Overfitting is the biggest challenge with ML models; coupled with relentless backtesting, it can produce lots of spurious results.
  • The edge lies in feature selection, not back-testing. Use simple models to understand and interpret the top features/ predictors.
  • Frame ML problems as classification rather than regression, and prefer simple models over complex ones: Occam's Razor.
  • Follow a research-driven approach (EDA, summaries) rather than a back-testing-heavy one.
  • Split data chronologically into train, validation & test sets (see the sketch after this list).
    • Normalise the values, fitting the statistics on the training set only.
    • Plot the error values after each epoch. This helps in understanding whether we are overfitting, how well the model generalises, etc.
  • Order book snapshots can be treated as “pictures” and trained with transfer learning to predict the next set of price movements.
  • LSTMs: Almost any paper on time series will have something relevant for financial data, and its pre-processing steps can usually be replicated for financial data sets as well.
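To make the splitting and normalisation points concrete, here is a minimal NumPy sketch. The 70/15/15 proportions and the synthetic data are assumptions for illustration; the key points are that the split is chronological and that the normalisation statistics come from the training slice only.

```python
import numpy as np

def chronological_split(X, y, train=0.7, val=0.15):
    """Split by time, never randomly: shuffling would leak future
    information into the training set."""
    n = len(X)
    i, j = int(n * train), int(n * (train + val))
    return (X[:i], y[:i]), (X[i:j], y[i:j]), (X[j:], y[j:])

# Synthetic features/target, purely illustrative.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 5)), rng.normal(size=1000)

(X_tr, y_tr), (X_va, y_va), (X_te, y_te) = chronological_split(X, y)

# Fit normalisation statistics on the training slice only, then apply
# them unchanged to validation and test to avoid look-ahead bias.
mu, sigma = X_tr.mean(axis=0), X_tr.std(axis=0) + 1e-8
X_tr, X_va, X_te = (X_tr - mu) / sigma, (X_va - mu) / sigma, (X_te - mu) / sigma
```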

Portfolio Allocation & Risk Management

Beyond signal generation, portfolio allocation also involves several risk management principles. I have seen firms employ strict constraints, such as holding no more than 2% of their liquidity in any one stock.

  • The Kelly Criterion and portfolio allocation theory are used to distribute funds across signals. Both assume the mean and variance of returns are known, which is a huge assumption (see the sketch after this list).
  • Extreme returns (primarily negative) occur with higher probability than a normal distribution predicts. This is referred to as fat tails in returns and is important from a risk management point of view.
    • Fat tails can result in greater drawdowns in live markets than back-tests suggest.
    • A large & diverse portfolio can bring the excess kurtosis close to zero.
    • But diversification assumes the assets are independent, which can be dangerous: fat tails coupled with highly correlated asset movements caused risk management to fail during the 2008 crisis.
  • The covariance between assets changes constantly, and correlations tend to spike during negative market moves.
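As a toy illustration of the Kelly point above, here is a sketch of the continuous-time Kelly fraction f* = (mu - r) / sigma^2 for a single asset, estimated from historical returns. The synthetic returns are illustrative, and treating sample statistics as the true values is exactly the "known mean and variance" assumption flagged above.

```python
import numpy as np

def kelly_fraction(returns: np.ndarray, risk_free: float = 0.0) -> float:
    """Continuous-time Kelly fraction f* = (mu - r) / sigma^2 for a
    single asset, treating sample mean/variance as the true values."""
    excess = returns - risk_free
    return excess.mean() / excess.var()

# Synthetic daily returns, purely illustrative. In practice estimation
# error dominates, which is why many practitioners bet "half Kelly".
rng = np.random.default_rng(1)
daily = rng.normal(loc=0.0004, scale=0.01, size=2500)
print(f"Kelly fraction: {kelly_fraction(daily):.2f}")
```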

Evaluation & Execution

Even after simulating a model under risk constraints, it still needs to be evaluated. Several metrics and evaluation techniques have been developed, but the most prominent is the historical Sharpe Ratio over a pre-defined period.

  • Sharpe Ratio: The best single criterion for evaluating strategies, under the assumption that returns are normally distributed (see the first sketch after this list).
  • Returns are known to have high kurtosis and a long negative tail, so the probability of large drawdowns is greater in real trading than in back-tests.
  • Bonferroni correction: The p-value threshold for significance should be tightened as more back-tests are carried out. This helps ensure that a new max Sharpe must be much greater than the old max Sharpe before it counts as an actual signal (see the sketch after this list).
  • Transaction costs are critical to check for; they are often not accounted for in back-tests and modelling (a toy cost model is sketched after this list).
  • Execution Strategy:
    • You can’t execute at midpoint prices, so the bid/ask spread needs to be included.
    • Trade execution needs to account for the size of the trades.
    • At a minimum, you need the last trade price with volume; ideally, you have full order book depth.
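A minimal sketch of the two evaluation points above, assuming daily returns and roughly 252 trading days per year: an annualised Sharpe ratio, and a Bonferroni-adjusted significance threshold that tightens as more back-tests are run. The synthetic returns are illustrative.

```python
import numpy as np

def annualised_sharpe(daily_returns, risk_free_daily=0.0):
    """Annualised Sharpe ratio, assuming ~252 trading days per year."""
    excess = daily_returns - risk_free_daily
    return np.sqrt(252) * excess.mean() / excess.std()

def bonferroni_alpha(alpha, n_backtests):
    """Bonferroni-corrected threshold: each of the n back-tests must
    clear alpha / n for the family-wise error rate to stay at alpha."""
    return alpha / n_backtests

# Synthetic daily returns, purely illustrative.
rng = np.random.default_rng(2)
returns = rng.normal(loc=0.0005, scale=0.01, size=252)
print(f"Sharpe: {annualised_sharpe(returns):.2f}")
print(f"Adjusted alpha after 100 back-tests: {bonferroni_alpha(0.05, 100):.4f}")
```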
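And a deliberately simple, hypothetical cost model for the execution points: crossing the spread costs half the bid/ask spread, and larger trades are penalised in proportion to their share of daily volume. The impact coefficient is a made-up parameter, not an empirical estimate.

```python
def effective_price(mid, spread, side, trade_size, daily_volume,
                    impact_coeff=0.1):
    """Toy execution price: half-spread cost plus impact proportional
    to the trade's share of daily volume. Parameters are assumptions."""
    half_spread = spread / 2
    impact = impact_coeff * mid * (trade_size / daily_volume)
    sign = 1 if side == "buy" else -1
    return mid + sign * (half_spread + impact)

# Buying 10,000 shares of a stock trading 1M shares/day at mid 100.00:
print(effective_price(mid=100.0, spread=0.02, side="buy",
                      trade_size=10_000, daily_volume=1_000_000))
```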
