High Frequency Trading Review

    The first part of a two-part interview with Professor Dave Cliff

    In this interview for the High Frequency Trading Review, Mike O’Hara talks to Professor Dave Cliff of the University of Bristol, Director of the UK Large-Scale Complex IT Systems Research Initiative, and a member of the Lead Expert Group of the UK Government’s Foresight Project on The Future of Computer Trading in Financial Markets.

    From 1998 to 2005 Professor Cliff worked in industry, first as a Department Scientist for Hewlett-Packard Laboratories, and then as a Director/Trader in the Complex Risk Group on Deutsche Bank’s London foreign exchange (FX) trading floor. In 1996 Professor Cliff invented an autonomous adaptive trading algorithm, which in 2001 was shown by IBM researchers to consistently beat human traders in experimental versions of financial-market auction systems, and which is now widely recognized as one of the first two autonomous adaptive algorithmic trading systems with real-world applicability.

    This is Part One of a two-part interview.

    HFT Review: Can we start with some background on how you got involved in computer-based trading?

    Dave Cliff: The story starts with me doing an undergraduate degree in computer science in the mid-1980s, when I got very interested in artificial intelligence (AI). I then went on to do a master’s and a PhD in AI at the University of Sussex, at a time when AI was going through something of a revolution.

    Working as an academic at the University of Sussex in the mid-1990s, I didn’t earn an awful lot of money, but I did have a very fast internet connection in my office. Those were the very early days of the web, when most people at home, if they wanted to access the internet, had to dial up on a 14.4kbps modem, which would take half an hour to download a photo. And yet in my office I had essentially instant access to the web.

    At that time, major exchanges like the London Stock Exchange and LIFFE were publishing data on the web for free. So I was day trading on the futures exchange from my office at Sussex, using my desk phone and mobile phone, with the T1 line showing me the prices changing. I had no real background as a trader, but I took five hundred quid out on my credit card, and within about three months I had made so much money that if I drew a graph and extrapolated it out, I realized I could probably retire in about three or four years’ time. But then within another six weeks, I was back in the red [laughs].

    I knew enough about artificial intelligence and about automating processes to see that a lot of what I was doing could, in principle, be done by a machine. Ultimately, I wanted to just write a program that sucked the data off the webpage, traded automatically, and made money for me while I was doing other things.

    Then, out of the blue, I got a letter from Hewlett Packard Research Labs in Bristol asking if I wanted to spend a few months working there as a visiting academic. But the deal was that I had to choose a research topic that was totally different from anything I’d done before.

    Up until that time, I’d been working on computational models that used evolution to design new neural networks, which controlled autonomous mobile robots (autonomous in that they had to look after themselves, which meant that you couldn’t pre-program them with an understanding of their environment). So I agreed to go to HP on the basis that I would work on an automated trader. In much the same way as my robots dealt with unpredictable environments and therefore had to learn from experience, I would create an artificially intelligent trader that observed what was happening in the market and learned from its experience in order to trade profitably.

    HP was happy to let me do that, and so I wrote this piece of software called ZIP, Zero Intelligence Plus. The intention was for it to be as minimal as possible, so it is a ridiculously simple algorithm, almost embarrassingly so. It’s essentially some nested if-then rules, the kind of thing that you might type into an Excel spreadsheet macro. This set of decisions determines whether the trader should increase or decrease a margin. For each unit it trades, it has some notion of the price below which it shouldn’t sell or above which it shouldn’t buy, and that is its limit price. However, the price that it actually quotes into the market as a bid or an offer is different from the limit price because, obviously, if you’ve been told you can buy something and spend no more than ten quid, you want to start low, and you might be bidding just one or two pounds. Then gradually you’ll move towards the ten-quid point in order to get the deal, so with each quote you’re reducing the margin on the trade. The key innovation I introduced in my ZIP algorithm was that it learned from its experience. So if it made a mistake, it would recognize that mistake and do better the next time it was in the same situation.
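
    For readers who want a concrete picture of what such nested if-then rules plus a learning step might look like, here is a minimal Python sketch of a ZIP-style trader. It follows the mechanism Cliff describes: a limit price, a profit margin that shades the quoted price, and an update that nudges the margin after each market event. The class name, parameter ranges, and the simplified decision rules below are illustrative assumptions, not the published ZIP specification; the full rule set (a Widrow-Hoff delta rule with momentum and a larger set of if-then cases) is in Cliff’s 1997 HP Labs report.

        import random

        class ZIPTrader:
            """Illustrative ZIP-style trader: a limit price, a profit margin,
            and a Widrow-Hoff-style learning step with momentum. A sketch,
            not a faithful reproduction of the published algorithm."""

            def __init__(self, limit_price, is_buyer):
                self.limit = limit_price      # never buy above / sell below this
                self.is_buyer = is_buyer
                # Buyers start with a negative margin (bid below the limit),
                # sellers with a positive one (ask above it).
                m = random.uniform(0.05, 0.35)
                self.margin = -m if is_buyer else m
                self.beta = random.uniform(0.1, 0.5)    # learning rate
                self.gamma = random.uniform(0.0, 0.1)   # momentum coefficient
                self.momentum = 0.0

            def quote(self):
                # The price actually shouted into the market: the limit
                # price shaded by the current profit margin.
                return self.limit * (1.0 + self.margin)

            def _learn(self, target):
                # Widrow-Hoff delta rule with momentum: nudge the quoted
                # price toward the target, then recover the implied margin.
                delta = self.beta * (target - self.quote())
                self.momentum = self.gamma * self.momentum + (1.0 - self.gamma) * delta
                self.margin = (self.quote() + self.momentum) / self.limit - 1.0
                # Never cross the limit price.
                self.margin = min(self.margin, 0.0) if self.is_buyer else max(self.margin, 0.0)

            def observe(self, last_price, deal_done):
                # A heavily simplified stand-in for ZIP's nested if-then rules:
                # raise the margin after deals we would have beaten anyway,
                # concede (shrink the margin) when we are being priced out.
                if self.is_buyer:
                    if deal_done and self.quote() >= last_price:
                        self._learn(last_price * random.uniform(0.95, 1.0))   # could have paid less
                    elif not deal_done and self.quote() <= last_price:
                        self._learn(last_price * random.uniform(1.0, 1.05))   # outbid: edge upwards
                else:
                    if deal_done and self.quote() <= last_price:
                        self._learn(last_price * random.uniform(1.0, 1.05))   # could have charged more
                    elif not deal_done and self.quote() >= last_price:
                        self._learn(last_price * random.uniform(0.95, 1.0))   # undercut: edge downwards

        # Usage sketch: a buyer with a ten-quid limit reacting to the market.
        buyer = ZIPTrader(limit_price=10.0, is_buyer=True)
        print(f"opening bid: {buyer.quote():.2f}")
        buyer.observe(last_price=9.0, deal_done=False)   # a rival bid 9.00 and no deal was done
        print(f"updated bid: {buyer.quote():.2f}")

    In the full algorithm, as here, the target price is a small random perturbation of the last shout price, and the decision to raise or lower the margin depends on whether that shout was a bid or an offer and whether it was accepted.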

    HFTR: When was this exactly?

    DC: I did the research in 1996, and HP published the results, and the ZIP program code, in 1997. I then went on to do some other things, like DJ-ing and producing algorithmic dance music (but that’s another story!).

    Fast-forward to 2001, when I started to get a bunch of calls because a team at IBM’s Research Labs in the US had just completed the first ever systematic experimental tests of human traders competing against automated, adaptive trading systems. IBM had developed their own algorithm called MGD (Modified Gjerstad-Dickhaut), which did the same kind of thing as my ZIP algorithm, using different methods. They had tested both their MGD and my ZIP against human traders under rigorous experimental conditions and found that both algorithms consistently beat the humans, regardless of whether the humans or the robots were buyers or sellers. The robots always outperformed the humans.

    IBM published their findings at the 2001 IJCAI conference (the International Joint Conference on AI), and although IBM are a pretty conservative company, in the opening paragraphs of the paper they said that this was a result with financial implications measured in billions of dollars. I think what they were implicitly saying was that there will always be financial markets and there will always be institutions (i.e. hedge funds, pension funds, banks, etc.), but that at some point in the future the traders doing the business on behalf of those institutions would cease to be human and start to be machines. Now, when IBM says something like that, it gets a lot of attention. So IBM’s result, and my work, were suddenly appearing in New Scientist and The Economist; it was even in the UK’s Daily Mail, which is not a newspaper you associate with in-depth scientific reporting!

    HFTR: Where did your research go from there?
