High Frequency Trading Review


    An Interview with Charles Barry of Juniper Networks

    Sponsored by Juniper Networks

    In this interview for the High Frequency Trading Review, Mike O’Hara talks to Charles Barry, Ph.D. and Senior Director of Engineering at Juniper Networks, about recent advances in network timing and clock synchronization, an area of increasing importance in the world of computer-based trading. The author of numerous patents and technical papers, Dr. Barry holds a Ph.D. in Optical Networking and an MSEE in Information and Control Systems from Stanford, and a B.S. in Physics from MIT.

    HFT Review: Why does network timing and synchronization matter to the HFT community (and the wider trading/investing community in general)?

    Charles Barry: First there is the question of why timing matters at all, and then the need for ever-increasing accuracy. In simple terms, you can’t sell what you don’t own, even if it’s only for a brief instant. Putting aside the complexities and rules governing options, puts, calls and the like, if you want to sell a share of stock in Chicago, you want to make sure the purchase has closed in New York. Timing this accurately is fundamental to keeping a ‘book’ up to date.

    As high frequency systems have pushed transaction execution times from seconds to milliseconds to microseconds, the need for accuracy has increased. Just as you can’t time a 100 meter dash with a pocket watch, you can’t effectively monitor a microsecond transaction with millisecond resolution. It is possible for systems that are out of sync to register a trade confirmation before the corresponding order, something regulators do not look kindly on.

    HFTR: What are some of the key issues and challenges around synchronisation of network elements?

    CB: The first challenge is to get the timing to everything, everywhere.  Every switch, router, server, client and network interface needs timing.  And that means getting timing into the depths of the operating system and applications.

    The second challenge is for the timing to be absolutely accurate. It’s not good enough to know the relative time between events; you have to know exactly when two things happened anywhere in the network, even separated by thousands of miles. Inaccuracies in prior network timing technologies like the Network Time Protocol (NTP) could leave any two systems with time offsets of 10-100 milliseconds. This leads to the classic “chicken or the egg” problem: with inaccurate timestamps, it would be easy to see trades that looked as though they were executed before they were placed. Causality between trades and prices could not be established, and algorithms flounder or, worse, make mistakes.
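
    To make the causality problem concrete, here is a minimal sketch; the offsets and latency are hypothetical, chosen to sit inside the 10-100 millisecond NTP error band mentioned above, and they show how two skewed clocks can make a confirmation appear to precede the order that caused it:

        # Minimal sketch with hypothetical numbers: a 10 ms clock offset on each side
        # (well within classic NTP error) makes a confirmation appear to precede its order.

        ORDER_SENT_TRUE = 0.0          # true time the order leaves the client (seconds)
        ROUND_TO_CONFIRM = 450e-6      # 450 us of network transit plus matching (assumed)

        client_clock_offset = +0.010   # client clock runs 10 ms fast
        exchange_clock_offset = -0.010 # exchange clock runs 10 ms slow

        order_ts = ORDER_SENT_TRUE + client_clock_offset                        # +10.000 ms
        confirm_ts = ORDER_SENT_TRUE + ROUND_TO_CONFIRM + exchange_clock_offset  # -9.550 ms

        # Causality is inverted in the recorded timestamps:
        assert confirm_ts < order_ts
        print(f"order stamped at {order_ts*1e3:+.3f} ms, confirmation at {confirm_ts*1e3:+.3f} ms")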

    The third challenge is to ensure that the timing distribution is robust and verifiable. Everything works well when the network is perfect, but things happen: networks can be misconfigured, power can go out, fiber lines can be cut, or lightning can strike.

    HFTR: What are some of the different approaches firms are taking to address these challenges? And what are the pros and cons of each of those approaches?

    CB: There is no panacea that is going to solve the entire network timing and synchronization challenge.  The two leading candidates are GPS and the IEEE1588v2 Precision Time Protocol (PTP).  Both must work hand in hand to provide an accurate, scalable and robust solution.

    CB: GPS technology is almost ubiquitous today. It’s in your cell phone, your car, even in your camera (“where was I when I shot this picture?”). It’s very inexpensive and surprisingly accurate; it’s not uncommon to get 10m or better location accuracy on your handset. What a lot of people don’t know is that GPS relies first and foremost on super-accurate time. Without going into the gory details of orbital mechanics or relativity, the GPS satellites have to know where they are, and in order to do that they have to know exactly what time it is. They get their time from their primary source, the United States Naval Observatory (USNO), and they all carry incredibly accurate atomic clocks on board so they can stay in sync while orbiting. When the GPS receiver in your car derives its position, it runs an algorithm called Time Difference of Arrival (TDOA). As the name suggests, this only works if all the satellites have the same time and know where they are. This way you can know both where and when you are. One way to think about time is as the distance light travels in that time: if you know your location to 10 meters, then you know the time to about 30 nanoseconds. That’s typical performance for GPS receivers, with 50ns offering a safe margin on the accuracy.
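
    The 10-meter/30-nanosecond equivalence is just the light-travel-time conversion; a quick back-of-the-envelope sketch of that arithmetic:

        # Back-of-the-envelope check of the "10 meters is about 30 nanoseconds" rule:
        # time is distance divided by the speed of light.

        C = 299_792_458.0  # speed of light in vacuum, m/s

        def position_error_to_time_error(meters: float) -> float:
            """Convert a position uncertainty to the equivalent light-travel time (seconds)."""
            return meters / C

        for err_m in (10.0, 15.0):
            print(f"{err_m:5.1f} m  ->  {position_error_to_time_error(err_m)*1e9:5.1f} ns")
        # 10 m comes out at roughly 33 ns, which is why ~50 ns is a comfortable
        # accuracy claim for an ordinary GPS timing receiver.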

    Financial and stock exchange data centers are increasingly deploying GPS receivers on the roof of the data center and then distributing GPS timing throughout the building. This distribution can carry the GPS RF signal itself or a Pulse Per Second (PPS) synchronization signal that has been recovered and synced to within 50ns of Coordinated Universal Time (UTC) as maintained by the USNO. Typically the RF or PPS signal is derived from a redundant set of antennas and receivers, then amplified and split out to all the clients. When it’s working properly, every client has GPS time that is accurate to within 50ns, so the error between any two clients in the world is at most 100ns.
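
    A tiny sketch of that error budget, assuming each client’s recovered PPS is held within the 50ns bound quoted above:

        # Worst-case error between any two GPS-timed clients: if each client is held
        # within +/-50 ns of UTC, two clients can disagree by at most the sum of their bounds.

        def worst_case_pairwise_error_ns(bound_a_ns: float, bound_b_ns: float) -> float:
            """One client sits at +bound_a, the other at -bound_b (or vice versa)."""
            return bound_a_ns + bound_b_ns

        print(worst_case_pairwise_error_ns(50.0, 50.0), "ns")  # 100.0 ns, as quoted above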

    The main drawback of GPS is that it’s not always available. You can’t just put a GPS antenna on the roof and forget about it: someone could build a bigger building and block your antenna, lightning could strike, solar flares could disrupt service, and, more and more worrisome, GPS jammers and adjacent-band interferers are becoming prevalent.

    Fortunately, IEEE1588v2 PTP is strong exactly where GPS is weak. It’s a network protocol that is being implemented and deployed in data centers and telecommunications networks. While PTP technology is not yet deployed ubiquitously, it works over any packet interface and can connect to every network device and server over standard fiber or copper. The fact that this is an IEEE standard means customers can expect a high degree of vendor interoperability, which is key to the success of the market. It means that a carrier (whose network core may come from Vendor A) can exchange PTP timing data with customers using a variety of equipment in their data centers.

    At the highest level, PTP operates in a timing hierarchy of “Grandmaster” (GM), “Boundary Clock” (BC) and “Ordinary Clock” (OC).  More about PTP can be found at: http://en.wikipedia.org/wiki/Precision_Time_Protocol

    The major drawback of PTP timing with respect to GPS is that it suffers from variations in delay due to traffic loading. When packets, including the PTP packets, traverse the network, they experience delay, and variation in delay due to queuing. Even with preferential treatment such as QoS and Expedited Forwarding (EF), a timing packet will sometimes have to wait for a packet that is in front of it. Networks can be 10 hops or more, and the delay and jitter build up. The best PTP algorithms can achieve 1μs most of the time; a strict guarantee requires end-to-end QoS and works out to about 1μs per hop if all the links are 10Gbps, or about 10μs over 10 hops. That’s not bad, but it’s nowhere near GPS.
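
    A rough sketch of where the roughly-1μs-per-hop figure comes from, assuming the worst case at each hop is waiting behind one standard 1500-byte Ethernet frame already being serialized on a 10Gbps port (the MTU is an assumption for illustration, not a figure from the interview):

        # Rough sketch of "about 1 us per hop at 10 Gb/s": even with expedited forwarding,
        # a timing packet may have to wait behind one maximum-size packet that is already
        # being serialized on the output port, at every hop.

        LINK_RATE_BPS = 10e9      # 10 Gb/s links, as in the example above
        MTU_BYTES = 1500          # standard Ethernet MTU (assumption)
        HOPS = 10

        worst_wait_per_hop = MTU_BYTES * 8 / LINK_RATE_BPS   # ~1.2 us
        worst_accumulated = worst_wait_per_hop * HOPS        # ~12 us over 10 hops

        print(f"per hop: {worst_wait_per_hop*1e6:.1f} us, over {HOPS} hops: {worst_accumulated*1e6:.1f} us")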

    Fortunately, the PTP architecture offers several fundamental improvements over traditional NTP that lead to much greater accuracy: much higher packet rates, timestamping in hardware and the “Transparent Clock” (TC). Higher packet rates mean more measurements, so the PTP algorithm acquires accurate phase better and faster. Timestamping in hardware makes it possible to achieve nanosecond resolution as opposed to hundreds of microseconds, again leading to better and faster phase acquisition. Most importantly, the PTP TC is perhaps the most powerful technology in the race to sub-microsecond network synchronization. The TC notes the time a packet enters the physical network interface, measures how long the packet takes to get through the switch, and records that residence time as the packet goes back onto the network. This is done hop by hop. At the end client, the total accumulated queuing delay from all the Transparent Clock nodes can be subtracted out, and all you’re left with is the true delay through the network plus some accumulated residual noise. With TC, the noise can be as little as 8ns per hop, so it is possible to achieve 100ns or better accuracy over 10+ hops. That’s in the ballpark of GPS and means that any two clients, even ones very far apart in distance or network hops, can be timed to within 200ns of one another.
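
    A simplified sketch of the Transparent Clock correction, with made-up per-hop residence times and the 8ns-per-hop timestamp error quoted above; a real TC carries the accumulated residence time in the PTP correction field rather than as a Python list:

        # Sketch of the Transparent Clock idea: each switch adds its residence time to a
        # correction value, so the end client can subtract all queuing delay and keep only
        # the true path delay plus a small per-hop residual (numbers are illustrative).

        import random

        PATH_DELAY_S = 50e-6                                           # true fiber + serialization delay
        HOPS = 10
        residence = [random.uniform(0, 1.2e-6) for _ in range(HOPS)]   # queuing per hop
        tc_noise = [random.uniform(-8e-9, 8e-9) for _ in range(HOPS)]  # TC timestamp error per hop

        # What the client measures: departure-to-arrival time including all queuing.
        measured_delay = PATH_DELAY_S + sum(residence)

        # What the correction carries: each hop's residence time, with a little noise.
        correction = sum(r + n for r, n in zip(residence, tc_noise))

        recovered_path_delay = measured_delay - correction
        error_ns = (recovered_path_delay - PATH_DELAY_S) * 1e9
        print(f"residual error after TC correction: {error_ns:+.1f} ns")  # tens of ns, not microseconds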

    So with GPS and PTP, is the problem solved? Well, not exactly.

    It’s not enough to get the timing signal to the server, switch or router; you have to get the timing all the way down to the processors and processes running inside the server. It is of no value to have sub-microsecond accurate timing at the GPS receiver if you can’t precisely timestamp a packet when it is created, sent or received. This requires modifications to the operating system to take the timing from the GPS receiver or from PTP. There are significant ongoing efforts in the OS world to improve these implementations, with Red Hat Linux 6.0 and others leading the way. Applications have to be rewritten to take advantage of the increased accuracy. NIC vendors and server manufacturers are also improving designs for hardware timestamping and higher-throughput, lower-latency internal buses. It all helps.
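
    As one illustration of what application-level support can look like, here is a hedged sketch for a Linux host whose system clock is already disciplined to PTP (for example by the linuxptp ptp4l/phc2sys daemons); the helper function and socket usage are illustrative assumptions, not an API from any particular vendor:

        # Hedged sketch: on a Linux host whose system clock is disciplined to PTP,
        # an application can at least read a nanosecond-resolution timestamp right at
        # the point where it sends a message, rather than relying on a coarse
        # wall-clock call somewhere else in the stack.

        import socket
        import time

        def send_with_timestamp(sock: socket.socket, payload: bytes, addr) -> int:
            """Stamp the message as close to the send as software allows (Unix only)."""
            t_send_ns = time.clock_gettime_ns(time.CLOCK_REALTIME)  # PTP-disciplined system clock
            sock.sendto(payload, addr)
            return t_send_ns

        # True wire-level accuracy still needs NIC hardware timestamping; this only
        # removes the application's own contribution to the uncertainty.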

    HFTR: How accurately can clocks be synchronised using the various methods available? (microseconds, nanoseconds? picoseconds?)

    CB: Given all of the techniques we’ve discussed, we can see that over the wide area it is possible to achieve 100ns node to node with GPS, 10 μs with PTP over non-TC networks, and 200ns with PTP TC networks.

    That’s not to say that lower numbers aren’t achievable. In time, I’m certain that we’ll see a push to 10ns or better. These levels of accuracy can be achieved today with survey-grade GPS receivers and specialized timing over fiber systems. But they aren’t affordable.  Fortunately, Moore’s law applies to virtually every aspect of the ecosystem and we’ll keep pushing the limits of technology to get there robustly and economically.

    HFTR: What is involved in accurately measuring synchronisation down to that level?

    CB: Measuring is always the toughest challenge.  A lot of this has to be done right at the installation through calibration.  You have to design the timing network right, you have to build it right and you have to calibrate it.

    As we’ve said, it’s possible with readily available GPS technology to get down to 50ns (100ns client to client), but only if all the cables in the data center are matched in length. And it’s not enough to match the lengths of the cables within the data center; you also have to account for the length of the cable from the antenna(s), both within and between data centers. The delay has to be calibrated between the exchange in New York and the exchange in Tokyo. If you need to get below the 50ns level, it’s trickier than you think and isn’t as simple as measuring cable lengths: ideally you need an atomic clock reference that has been previously locked and calibrated, and you then use it to calibrate all of the PPS signals inside the data center.

    As for PTP, because the GM is locked to GPS, it has the same issues. Over and above the GPS issues, the biggest calibration issue with PTP is that network protocols are only absolutely accurate over perfectly symmetric delay paths. In an ideal world, all the network elements, processing and fiber paths are exactly the same in the upstream and downstream directions, but that’s not the case. In a data center it isn’t too hard to ensure that fibers are all nearly the same length, or that the differences are limited to feet or tens of feet; a good rule of thumb is 1ns per foot. It’s not the delay that causes the error, but the asymmetry. There’s some math behind it, but the basic gist is that no network protocol can have less error than half the asymmetry. If there is a 10 foot asymmetry, that’s 5ns of error. While this is not bad at all for the data center, asymmetry in the wide area can easily be several microseconds. One way to do the calibration is to survey the delay using GPS-based network monitors and compare it with the client’s recovered phase. This calibration has to be done for every end-to-end wide area path; outside of putting GPS everywhere in the network, there is no real way around it if you want microsecond-accurate network time over the wide area. Fortunately, the calibration doesn’t have to be done every day.
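
    The half-the-asymmetry limit falls straight out of the standard two-way offset calculation; here is a small sketch using the 1ns-per-foot rule of thumb and a hypothetical 10 foot asymmetry:

        # Sketch of why a two-way time protocol cannot do better than half the path
        # asymmetry. Standard PTP offset estimate from the four timestamps
        # (t1: Sync sent, t2: Sync received, t3: Delay_Req sent, t4: Delay_Req received).

        NS_PER_FOOT = 1.0          # the rule of thumb quoted above
        true_offset_ns = 0.0       # the slave clock is actually perfect
        d_forward_ns = 100.0       # master -> slave path delay (illustrative)
        d_reverse_ns = 100.0 + 10 * NS_PER_FOOT   # 10 ft longer fiber on the return path

        t1 = 0.0
        t2 = t1 + d_forward_ns + true_offset_ns
        t3 = 1_000_000.0           # slave sends Delay_Req some time later (slave clock)
        t4 = t3 - true_offset_ns + d_reverse_ns

        estimated_offset_ns = ((t2 - t1) - (t4 - t3)) / 2
        print(f"estimated offset: {estimated_offset_ns:+.1f} ns")   # -5.0 ns
        # The estimate is wrong by half the 10 ns asymmetry, matching the 5 ns figure above.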

    Digging a little deeper, while it’s nice to have the timing in the network and to the edge, if the server OS and application don’t have equally good timing, then it’s not going to help you.  How do you measure the accuracy of the server?  Today’s servers don’t have physical timing outputs that you can measure from the outside. So the only way to measure the server’s clock accuracy is by the PTP algorithm running in the server itself. You won’t be able to measure more accurately than the weakest link in the chain, but if you know the rest of your network is sub-microsecond you’ll be able to measure if the server clock is in the same ballpark.

    Finally, for large data centers and financial exchanges, it’s not enough to know that a single client is working; you need to know that all the clients are timed accurately, all the time. This is where sync network monitoring comes into play. At Juniper we’ve developed, deployed and continue to expand industry-leading software that monitors, reports and, the moment synchronization errors are detected, generates alarms.

    HFTR: What are exchanges/trading venues doing to ensure they are following latest best practices in this area?

    CB: As it turns out, every day we’re seeing more data centers, trading houses and financial institutions putting more focus on network timing and sync. Some of the largest exchanges are doing everything we’ve discussed above, in painstaking detail, to deliver network timing to every device in networks that span every continent. They’re distributing GPS throughout their data centers, and they’re doing the calibration and cable length matching to make it accurate. They’re also deploying IEEE1588v2 PTP throughout their data centers and across the globe. They are working with operating system vendors, application vendors, NIC vendors, server vendors and network solution providers to ensure that network time is accurate and robust everywhere. And they’re building in redundancy and monitoring to ensure it’s working properly, all the time. Service providers are increasingly deploying PTP in their backhaul networks, and as these networks are rolled out over time they will have PTP TC, which will improve absolute accuracy over the wide area from several microseconds to 200ns or better.

    Juniper Networks is fully committed, engaged and working with technology partners, financial trading companies, data centers, telecommunication service providers and stock exchanges to optimize every aspect of the end-to-end network. Our recent purchase of Brilliant Telecommunications, Inc. brings world-class expertise and a suite of network timing products with unmatched scalability and performance to Juniper. This puts us in a unique position among network solution providers to address every aspect of end-to-end network timing, reduce latency, increase speed and bandwidth, and deliver microsecond or better precision time to every client, everywhere.

    Time is Money.  It’s never been more true than today and it’s more important every day.

    HFTR: Thank you, Charles.

    – o –

    Biography

    Dr. Charles Barry is a recognized innovator in IP networking with more than 25 years in the telecommunications industry. Within the past decade, Charles has founded or been a principal of three telecom network equipment provider companies. He is now focused on delivering world-class timing synchronization with network latency and performance monitoring for telecom, financial and industrial markets at Juniper Networks.

    Dr. Barry joined Juniper Networks through its purchase of Brilliant Telecommunications, where he served as founder, president and CEO/CTO from 2004 to 2011. Prior to founding Brilliant, Dr. Barry founded Luminous Networks, where he was responsible for innovations that established Luminous as a leader in fiber optic telecommunications with Tier-1 carrier deployments across the globe.  Prior to Luminous, Charles led network product development at NUKO Information systems, a leading digital video compression and networking company.

    The author of numerous patents and technical papers, Dr. Barry holds a Ph.D. in Optical Networking and an MSEE in Information and Control Systems from Stanford, and a B.S. in Physics from MIT.

    For more information, visit www.juniper.net
