Open Platform: Time Is Money

Charles Ferland, vice president and general manager, EMEA, Blade Network Technologies

Communication and data make up the backbone of the global capital markets. Financial services transactions happen not between people, but between servers. What's more, these machine-to-machine exchanges at the network edge account for more than 99 percent of all financial services communications and correspondence. For the much-ballyhooed practice of high-frequency trading, ensuring that data arrives with minimal, deterministic and "fair" latency could not be more vital. Financial firms, however, increasingly face the challenge of deploying a network that champions "zero latency" and high throughput at the lowest possible total cost of ownership (TCO).

Extraordinary Decade
Having reached the end of an extraordinary decade of technology advances, financial firms commonly manage hundreds of trades per millisecond over direct 10 gigabit Ethernet (10GbE) connections. With each trade preceded by up to 15,000 quotes, the sheer volume of information transacted every second is stunning. Managing this fast-growing volume of data is a significant undertaking. Even more challenging for network providers, though, is delivering financial data networks fast enough to sustain a competitive advantage.

Competitive success in financial networks is defined by microseconds. Studies have shown that tens of milliseconds of delay in data delivery can represent a 10 percent drop in revenues, and delays of even five microseconds per trade can cost hundreds of thousands of dollars. In capital markets, the need for high speed and throughput therefore goes hand in hand with the need for extreme, and often complex, scalability.

According to the Tabb Group’s Data Center Networking: Redefining the Total Area Network, “Financial services are like the Formula One race car of networking. Its data is not routed all over the world; networks are private and data paths are relatively set.” Thus, scaling out with components that are cost-effective, energy efficient and easy to manage generates greater returns than “scaling up” by adding more power and complexity to a smaller number of expensive components. New approaches that flatten the datacenter architecture into a dense configuration of racks and rows can shrink the datacenter footprint, enabling faster communication across fewer hops to provide laser-sharp latency.

The Way Forward
In the past, different discrete devices were each dedicated to a specific role. Routers were used for Layer 3 networking, while switches were used for Layer 2 networking and server interconnects. Each of these added hops, latency, complexity, power consumption and potential points of failure. The way forward to reduce latency and meet financial services' objectives is to use multi-service devices that provide Layer 2 switching and Layer 3 routing in a single device. Emerging "flat" network topologies, enabled by new technologies such as Transparent Interconnection of Lots of Links (TRILL), eliminate the need for Layer 3 routing altogether, reducing the complexity and latency between devices.

Some new deployments are removing the aggregation layer and connecting access layer switches directly to a 10GbE switch core. Other approaches minimize the use of core routers and instead place more emphasis on creating a high-speed 10GbE aggregation layer to handle compute-intensive traffic within the datacenter.

In both cases, flattening the network structure speeds up performance because there are fewer latency-adding hops that data must undertake before arriving at a destination. Switches with sub-microsecond hop time and embedded applications can further improve performance.
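As a back-of-the-envelope illustration of why fewer hops matter, the sketch below multiplies an assumed per-hop switch latency by the hop count of a classic three-tier path versus a flattened two-tier path. The one-microsecond figure is an assumption for illustration, not a measurement from any particular switch.

```python
# Illustrative only: the per-hop latency is an assumed round number,
# not a vendor measurement.
HOP_LATENCY_US = 1.0  # assumed latency per switch hop, in microseconds

def path_latency_us(hops: int, hop_latency_us: float = HOP_LATENCY_US) -> float:
    """Total switching latency for a path crossing the given number of hops."""
    return hops * hop_latency_us

# Classic three-tier path: access -> aggregation -> core -> aggregation -> access
three_tier = path_latency_us(hops=5)
# Flattened two-tier path: access -> core -> access
flat = path_latency_us(hops=3)

print(three_tier, flat)  # 5.0 vs 3.0 microseconds of switching delay
```

The absolute numbers are hypothetical; the point is that removing tiers removes hops, and every hop removed is latency that never accumulates.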

While a completely flat Layer 2 design incurs some risk of a rogue device wreaking havoc across the network, a balance can be achieved that reduces network hops and protects operations. The solution is a flat Layer 2 network with the following design features: 

Low Oversubscription Ratios: The oversubscription ratio, or blocking ratio, is the total server-facing bandwidth divided by the total uplink bandwidth. A low oversubscription ratio is critical to maintaining application performance in a scaled-out datacenter. A non-blocking, 1:1 ratio is the ideal, as long as the networking gear also provides low latency and line-rate throughput.

Multicast Traffic: In multicast, data intended for many recipients is transmitted as a single stream. Support for IP multicast traffic is required in the financial services datacenter so that traffic streams from multiple exchanges can be efficiently delivered to many end users without overloading the network.

Dynamic, Multi-path Routing: Multi-path routing topologies can lower overall network latency by enabling optimal load sharing of traffic across multiple routes while keeping CPU overhead low.

Loop-Free Topology: A loop-free architecture is necessary to ensure proper network operation.
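The oversubscription ratio described above is simple arithmetic; the sketch below computes it for a hypothetical 48-port 10GbE leaf switch. The port counts and speeds are illustrative, not a reference to any particular product.

```python
def oversubscription_ratio(server_ports: int, server_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Total server-facing bandwidth divided by total uplink bandwidth.
    1.0 means fully non-blocking; higher values mean potential blocking."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# Hypothetical leaf switch: 48 x 10GbE server ports, 4 x 40GbE uplinks
ratio = oversubscription_ratio(48, 10, 4, 40)
print(ratio)  # 3.0, i.e. a 3:1 oversubscription ratio
```

By the same arithmetic, the hypothetical switch would need twelve 40GbE uplinks to reach the non-blocking 1:1 ideal.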

Using a Clos architecture, a port-dense, fully connected fabric can be created that connects high-bandwidth, multi-tier switches in a non-blocking fashion. The Clos, or "fat tree," architecture uses standard Ethernet switching with simple extensions to scale the datacenter, and it has tremendous potential for solving many networking challenges in the near future. New, low-latency switches combined with today's 10GbE and tomorrow's 40GbE and 100GbE networks should make this architecture an excellent choice. On the horizon, the TRILL protocol will provide flexible, scalable and high-performance networks combined with fault tolerance and ease of deployment. Next-generation network designs will likely incorporate this new standard.
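A non-blocking two-tier Clos fabric built from a single switch model follows a simple sizing rule: each leaf devotes half its ports to servers and half to spine uplinks. The sketch below works out the resulting fabric size; the 64-port radix is an assumption for illustration.

```python
def clos_capacity(radix: int) -> dict:
    """Size of a non-blocking two-tier (leaf-spine) Clos fabric built
    entirely from identical switches with `radix` ports each.
    Each leaf splits its ports evenly: half to servers, half to spines."""
    down = radix // 2        # server-facing ports per leaf
    spines = radix // 2      # one spine per leaf uplink
    leaves = radix           # each spine port connects one leaf
    return {"leaves": leaves, "spines": spines, "servers": leaves * down}

print(clos_capacity(64))  # {'leaves': 64, 'spines': 32, 'servers': 2048}
```

The quadratic payoff (servers scale with the square of the switch radix) is what makes scaling out with many inexpensive switches so attractive compared with scaling up a few large chassis.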

Virtually Speaking
Financial services datacenters are also increasingly interested in the potential for server virtualization to reduce TCO. Server virtualization can bring both benefits and drawbacks to the financial datacenter: it can put underutilized resources to work and minimize infrastructure spending, but it adds complexity and administrative overhead for the network administrator, and it calls for innovative advances in virtualization-aware networking. The latest best-of-breed network infrastructures are beginning to address this problem by automatically migrating network policies along with virtual machines as they move across physical servers.

Conventional network switches are unaware of virtual machines (VMs), which leaves the network vulnerable to service outages and security breaches caused by incorrect network configuration. Sophisticated networks that are better equipped to handle transient VMs enable unique identification and network configuration of each virtual machine. They "see" virtual machines as they migrate from server to server, reconfiguring the network in real time and automatically preserving essential security, access and performance policies, within or across datacenters, even unifying and synchronizing physical and virtual networks between geographically dispersed datacenters.
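The policy-follows-the-VM behavior described above can be sketched minimally, assuming a hypothetical controller that keys each VM's network policy by its MAC address and re-applies it wherever the VM next appears. The class and method names below are illustrative, not any vendor's API.

```python
# Hypothetical sketch: a controller tracks per-VM network policy and
# reports what would be pushed to the fabric when a VM migrates.
class PolicyController:
    def __init__(self):
        self.policies = {}   # vm_mac -> {"vlan": ..., "acl": ...}
        self.location = {}   # vm_mac -> (switch, port)

    def register(self, vm_mac, vlan, acl):
        """Record the network policy that must follow this VM."""
        self.policies[vm_mac] = {"vlan": vlan, "acl": acl}

    def vm_seen(self, vm_mac, switch, port):
        """Called when a VM's MAC appears on a (possibly new) switch port.
        A real fabric would push VLAN/ACL config to the switch here."""
        old = self.location.get(vm_mac)
        self.location[vm_mac] = (switch, port)
        return {"configure": (switch, port),
                "policy": self.policies[vm_mac],
                "remove_from": old}

ctrl = PolicyController()
ctrl.register("00:16:3e:aa:bb:cc", vlan=100, acl="trading-servers")
ctrl.vm_seen("00:16:3e:aa:bb:cc", "leaf-1", 12)        # initial placement
move = ctrl.vm_seen("00:16:3e:aa:bb:cc", "leaf-7", 3)  # live migration
print(move["remove_from"])  # ('leaf-1', 12): policy follows the VM
```

The key design point is that policy is bound to the VM's identity rather than to a physical port, so migration never leaves stale or missing configuration behind.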

Reliable and Regular
Ethernet continues to mature thanks to advances such as datacenter bridging (DCB), which provides an even, reliable and regulated flow of traffic between high-speed Ethernet nodes. DCB enables data and storage traffic to converge on a unified networking fabric, with high-speed "lanes" dedicated to high-priority traffic. Without DCB's priority flow control (PFC), Ethernet treats all traffic equally: when a server receives a standard pause frame, it stops sending any traffic at all. PFC instead carves the 10GbE pipe into eight lanes, enables the assignment of applications to lanes, and lets pause frames target only the traffic considered less critical. A DCB switch can send pause frames on certain lanes and not others, so storage traffic running in its own lane is never paused by congestion elsewhere. The result is exceptional performance for IP-based storage environments using 10GbE networks. DCB thus brings lossless capabilities to Ethernet, formerly a "best effort" technology, ensuring that mission-critical financial applications such as algorithmic trading and market data feeds are never delayed or paused due to congestion.
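The pause-per-lane behavior of PFC can be sketched as follows. The priority values follow the eight-level IEEE 802.1p scheme (0 to 7), but the traffic classes, assignments and which lanes are lossless are assumptions for illustration, not a recommended profile.

```python
# Illustrative only: traffic classes and priority assignments are assumed.
PRIORITIES = {
    "market_data": 5,    # lossless lane, PFC enabled
    "storage": 3,        # lossless lane, PFC enabled
    "best_effort": 0,    # droppable under congestion, PFC disabled
}
PFC_ENABLED = {3, 5}     # lanes on which pause frames may be sent

def pause_targets(congested_priority: int) -> set:
    """With PFC, a pause frame halts only the congested lane;
    traffic on all other priorities keeps flowing."""
    return {congested_priority} if congested_priority in PFC_ENABLED else set()

# Congestion on the storage lane pauses storage only; market data
# on priority 5 is unaffected.
print(pause_targets(3))  # {3}
print(pause_targets(0))  # set(): best-effort traffic is dropped, not paused
```

Contrast this with classic link-level flow control, where a single pause frame would halt every class of traffic on the link at once.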

With these emerging technologies, network equipment providers and innovative financial services firms are realizing how they can address the challenges of management, utilization, scalability, low latency and high bandwidth—all at a low TCO.

Charles Ferland is vice president and general manager, EMEA, at Blade Network Technologies, an IBM company.
