We believe that historical prices are the best indicator of future prices and that trends persist over time in certain markets. To this end, we adopt a systematic, quantitative approach in which mathematical models are employed to identify and capture these trends.

As competition for alpha capture has intensified, variants of the same model are increasingly traded across multiple markets and assets, and the task of optimizing their free parameters has grown accordingly. As a result, solution search spaces are often vast.

A reasonable strategy for problems with many parameters or multiple local optima is to start with a genetic algorithm, then use its best solution as the starting point for a derivative-based optimizer. In general, derivative-based algorithms perform better as they approach the optimum.
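The two-stage strategy above can be sketched as follows. This is a minimal illustration, not our production code: the objective function, the elitist genetic algorithm, and the finite-difference gradient descent are all toy stand-ins chosen to show the hand-off from global to local search.

```python
import math
import random

def objective(p):
    """Toy loss surface: a quadratic bowl centred near (1, -2) plus a
    rippled term that creates many local minima."""
    x, y = p
    return ((x - 1.0) ** 2 + (y + 2.0) ** 2
            + 0.3 * (math.sin(5 * x) ** 2 + math.sin(5 * y) ** 2))

def genetic_search(f, bounds, pop_size=40, generations=60, seed=0):
    """Crude elitist GA: keep the best quarter of the population, then
    breed children by averaging two parents and adding Gaussian noise."""
    rng = random.Random(seed)
    population = [[rng.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=f)
        parents = population[: pop_size // 4]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])
        population = parents + children
    return min(population, key=f)

def gradient_refine(f, x0, lr=0.05, steps=200, eps=1e-6):
    """Finite-difference gradient descent to polish the GA's answer
    within the basin the GA has already found."""
    x = list(x0)
    for _ in range(steps):
        grad = [(f(x[:i] + [x[i] + eps] + x[i + 1:]) - f(x)) / eps
                for i in range(len(x))]
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x
```

The GA supplies a good starting basin; the derivative-based pass then converges quickly because, near the optimum, the surface is locally smooth and gradient information is reliable.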

With productivity in mind, our team addresses these issues in two ways: through the use of genetic algorithms, and by offloading optimization tasks to separate machines that may even be on the far side of the planet.
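The offloading pattern can be sketched with a local worker pool standing in for remote machines. The `evaluate` function and its toy loss are hypothetical placeholders; in a real deployment each call would be dispatched to a remote worker rather than a local thread.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(params):
    # Hypothetical stand-in for a full backtest of one parameter set;
    # in production this call would run on a remote machine.
    lookback, threshold = params
    return (lookback - 20) ** 2 + (threshold - 0.5) ** 2  # toy loss

def offload_grid(param_grid, max_workers=8):
    """Fan a grid of parameter sets out to a pool of workers, gather
    the losses, and return the (loss, params) pair with the minimum."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        losses = list(pool.map(evaluate, param_grid))
    return min(zip(losses, param_grid))
```

Because each parameter set is evaluated independently, the search parallelizes trivially: the coordinator only needs to scatter candidates and collect scores, regardless of where the workers physically sit.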

To avoid curve fitting, our team applies optimization algorithms to search for promising regions using 60%–70% of reliable historical market data in sample. No less than 30% of the data is held out of sample for use in the system development process.
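A minimal sketch of such a split, assuming the data arrives as a time-ordered sequence of bars. The key point is that the split is chronological, never shuffled, so the out-of-sample segment is strictly later than the data the optimizer saw.

```python
def split_in_out_of_sample(series, in_sample_frac=0.7):
    """Chronologically split a time-ordered series into an in-sample
    segment for optimization and a later out-of-sample segment for
    validation. No shuffling: order is what protects the test set."""
    if not 0.6 <= in_sample_frac <= 0.7:
        raise ValueError("in-sample fraction should stay within 60%-70%")
    cut = int(len(series) * in_sample_frac)
    return series[:cut], series[cut:]
```

For example, a 65% split of 100 bars yields 65 in-sample bars and 35 out-of-sample bars, with every out-of-sample bar occurring after the in-sample window.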

However, even if a model performs satisfactorily in both in-sample and out-of-sample testing, it still faces the more acid test of “paper trading” (live simulation). As trading models have increased in frequency, this has become an increasingly vital stage in a model's evolution prior to live deployment. It takes several months of live simulation to prove that a system is a candidate for live trading.

We understand the extreme importance of having clean, accurate data on which to base our trading system development. Our source of high-quality historical data is among the cleanest in the industry, allowing our team to make decisions with confidence.

Our data provider maintains superior data accuracy through three layers of defense:

  • erroneous data is caught in real-time by automated filtering mechanisms;
  • data is corrected in real-time by the financial exchanges themselves;
  • data is manually maintained in-house by using an expert team of market data professionals.

The backtesting tool can operate on a specific time range with Look-Inside-the-Bar technology, tick-level testing, and limit-order fill assumptions, while accounting for commissions and slippage. All this allows the developer to determine the optimal parameters for that period.

The model can then be tested on other time slices to determine the optimal parameters over the full backtesting timeframe. If the best in-sample and out-of-sample parameter values vary enormously from period to period, the model may not be sufficiently stable.
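The per-slice stability check can be sketched as follows. The score function, candidate parameter sets, and slices here are all hypothetical toys; the point is the shape of the procedure: optimize each slice independently, then measure how far apart the winning parameter values sit.

```python
def best_params_per_slice(slices, candidates, score):
    """For each time slice, pick the candidate parameter set with the
    lowest score (e.g. loss of a backtest) on that slice alone."""
    return [min(candidates, key=lambda p: score(p, s)) for s in slices]

def parameter_stability(best_per_slice):
    """Crude stability measure: per-dimension spread (max minus min)
    of the winning parameter values across slices. Smaller spreads
    suggest the model's optimum does not drift between periods."""
    return [max(dim) - min(dim) for dim in zip(*best_per_slice)]
```

If the spreads come back near zero, the same parameters win in every period; large spreads are the warning sign the text describes, indicating the model may be fitting period-specific noise.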
