
Tuesday, July 29, 2008

Cisco Nexus 5000 bridges the network gap

Mario Apicella Mon Jul 28, 6:00 AM ET

San Francisco - Traditionally, network transport has run on two separate technologies, FC (Fibre Channel) and Ethernet, which, like two railroads with different gauges, seemed bound to never meet.

Just about everybody agrees that having a unified network could bring significant financial and administrative benefits, but when exploring possible simplifications to the datacenter fabric, customers faced discouraging and costly options such as tearing down their FC investments or extending the FC network to reach every server and every application.

2008 started with industry signals that it would be the year when those two "railroads" would finally come together. We had a first glimpse that things were changing in that space when Brocade announced the DCX in January. Later that winter a new technology, FCoE (Fibre Channel over Ethernet) -- created by an offspring of Cisco, Nuova Systems -- came to maturity in the Nexus 5000 switches, promising to finally bring these two most critical networks under the same administrative banner.

This spring, about one year after first introducing the concept of FCoE, Cisco announced the Nexus 5000, a 10G Ethernet switch that supports the new protocol and promises to make consolidating FC and Ethernet traffic as easy and as reliable as bringing together Ethernet connections with different speeds on the same switch.

How do the approaches from Brocade and Cisco differ? I won't stretch that rail analogy further than this, but it helps if you think of the first as a converging point for different railroads, and see the second as a unified rail on which heterogeneous transports can roll.

In fact, FCoE seamlessly brings together the two protocols, potentially reaching any application server mounting a new breed of adapters, aptly named converged network adapters, or CNAs. A CNA essentially carries both protocols, Ethernet and FC, on a single 10G port, which cuts in half the number of server adapters needed and, just as important, significantly reduces the number of connections and switches needed south of the servers.

The other important component of the FCoE architecture is obviously the Nexus 5000 switch, a device that essentially bridges the FC and Ethernet networks using compatible ports for each technology. Moreover, adding an FCoE switch requires minimal modifications, if any, to the existing storage fabric, which should grab the interest of customers and other vendors.

For the first model released, the Nexus 5020, Cisco declares an aggregate speed in excess of 1Tbit/sec and negligible latency. This, together with an impressive lineup of 10G ports, makes the switch a desirable machine to have when implementing server virtualization. To paraphrase what a Cisco executive said, perhaps a bit paradoxically, with FCoE you can burden a server with just about any traffic load.

Getting to the nexus of the 5000
A switch that promises to deliver the services of Ethernet and FC over the same wire without packet losses and without appreciable latency is certainly worth reviewing, but it didn't take me long to realize that the evaluation required bringing together more equipment than it's convenient to ship, which is why I ran my battery of tests at the Nuova Systems premises in San Jose, Calif.

In addition to 10G Ethernet ports, my test unit mounted some native FC ports, which made it possible to run tests evaluating its behavior when emulating a native FC switch. Other items in my test plan were exploring the management features of the Nexus 5000 and running performance benchmarks to measure latency, I/O operations, and data rate.

The Nexus 5020 is a 2U rack-mounted unit that packs into that small space an astonishing number of sockets: 40, to be precise. Each socket can host an Ethernet port running at 10G. Using an optional expansion module (the switch has room for two), you can extend connectivity with six more 10G Ethernet ports, eight more FC ports, or a combo module with four FC and four 10G Ethernet ports.

However, those sockets don't need to be completely filled. For example, my test unit had only 15 10G ports and 4 FC ports active. At review time the Nexus 5000 offered support for all FC connectivity speeds, up to but not including 8G.

Typically, you would deploy the 5020 in the same rack where your app servers reside, or in an adjacent rack. Considering a resilient configuration with two 10G connections for each server, two Nexus 5000s can connect up to 40 servers and still have room for more ports with the expansion modules.

The front of the 5000 hosts five large, always spinning and rather noisy fans. With only one power supply (a configuration with dual PSU is also available) I measured around 465 watts absorbed by the switch. Interestingly, the Nexus kept running when I removed one of the fans but, as I had been warned, shut down automatically when I removed a second fan. However, the remaining three fans kept spinning to keep the internal electronics cool.

When reinserted, the two fans I had removed began spinning immediately, but the rest of the system stayed down and I had to power cycle the switch to restart it. Taking advantage of this behavior (it's by design), I measured 243 watts with only the five fans spinning, which suggests that the other components of the switch draw the remaining 222 watts of the 465 I measured, at least in my configuration.

Having more connections would obviously push up that number, but the consumption I measured seems to be in the same ballpark as what I read in the specs of 20-port 10G switches from other vendors.

Policing with a policy
Obviously, the most important novelty that the Nexus 5000 brings to a datacenter, and its greatest differentiator from other, single-protocol switches, is that Ethernet and FC are just two supported applications that you monitor and control from the same administrative interface.

With that in mind it's easy to understand why the Nexus runs a new OS, the NX-OS, which, according to Cisco, inherits and brings together the best features of their Ethernet-focused IOS and their FC-focused SAN-OS.

To access the OS features, administrators can choose between a powerful CLI and the GUI-based Fabric Manager. I used the plural because the administrative tasks of the switch can be easily divided between multiple roles, each with a different login and confined to a specific environment, as defined by and under the supervision of a super admin. That's a critical and much-needed option if you plan to bring multiple administrative domains and their administrators under the same banner.
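To give an idea, here is roughly what defining such a role looks like in NX-OS; the role name, rule, and user below are hypothetical examples of mine, not settings from my test unit:

! a role confined to monitoring commands (hypothetical example)
role name monitor-admin
  rule 1 permit command show *
! a login bound to that role
username storageop password <secret> role monitor-admin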

These and other configuration settings of the Nexus 5000 are policy-driven, which makes for easy and transparent management. Another remarkable feature is that you can define classes of service that logically isolate different applications.

For example, after logging in to the switch, a simple command such as "sh policy-map interface Ethernet 1/1" listed all traffic statistics on that port, grouped for each CoS (class of service) and listing separate numbers for inbound and outbound packets.

Combining a certain CoS with a proper policy, an admin can not only monitor what traffic is running on the switch but can also automatically control where packets are routed and how. Load balancing is a typical application where that combination of policy and QoS shines, but there are others -- for example, automatically assigning packets with different MTU to different classes of traffic.
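To sketch that last case, patterned on the style of the no-drop policy shown a little later in this review, and assuming an mtu action per class and a CoS value of my own choosing:

! classify jumbo traffic by its CoS marking (hypothetical value)
class-map jumbo
  match cos 5
! give that class a 9,216-byte MTU
policy-map policy-jumbo
  class jumbo
    mtu 9216
system qos
  service-policy policy-jumbo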

The NX-OS makes some otherwise challenging settings easy, such as mirroring the traffic flowing on one interface to another interface on the same or on a different VLAN. Such a setting can be useful for sensitive applications such as surveillance and remote monitoring, but can also help test the impact of a new application on a production VLAN.
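As an illustration, a mirroring session of that kind boils down to a few lines; the port numbers here are hypothetical:

! the destination port must be placed in monitor mode
interface ethernet 1/20
  switchport monitor
! mirror inbound and outbound traffic of port 1/1 to port 1/20
monitor session 1
  source interface ethernet 1/1 both
  destination interface ethernet 1/20
! SPAN sessions are created shut down and must be enabled
  no shut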

Defining a correct policy can also help ensure that FC traffic, or any other traffic running on the 5000, will never drop a frame. Dropping a frame is obviously a mortal sin if a storage device is at one end of the connection, but other performance-sensitive applications can benefit from uninterrupted transport as well.

I was surprised to learn how easy that was to set up with just a handful of commands:

class-map critical
  match cos 4
policy-map policy-pfc
  class critical
    pause no-drop
system qos
  service-policy policy-pfc

In plain English this means the following: Never drop a frame, and pause the traffic if you can't keep up with the rate.

I should also mention that PFC stands for priority flow control, a new feature which is at the heart of the FCoE protocol and essentially makes Ethernet able to survive traffic congestion without data loss, by pausing the incoming flow of packets when needed.

My next command, a line that I am not showing, was to assign that policy to two ports on my switch.

How to fill up a 10G line
If setting up that policy was easy, testing that it actually worked was a bit more complicated, and called for the powerful features of IP Performance Tester, a traffic generator system by Ixia. One of the problems I had to solve was how to create significant traffic on my 10G connections, which is where the Ixia system, luckily already installed in my test bed, was called to action; this isn't the only test where I've used it, and I've found it to be a valuable tool.

For my PFC test, the Ixia system was set to generate enough traffic to cause a level of congestion which would have translated, without PFC, into losing packets. The switch under test passed this test with aplomb and without losses, proving that not only FC but also Ethernet can be a reliable, lossless protocol.

Of the many test scripts I ran on the Nexus 5000 this was, without any doubt, the most significant. The switch offers many powerful features, including guaranteed rate of traffic, automatic bandwidth management, and automated traffic span.

However, PFC is what legitimizes FCoE as a viable convergence protocol that can bridge the gap between application servers and storage, and it makes the Nexus 5000 a much-needed component in datacenter consolidation projects.

One last question remained unanswered in my evaluation: The Nexus 5000 had proven to have the features needed to be the connection point between servers and storage in a unified environment, but did the machine have enough bandwidth and responsiveness for the job?

To answer that, I moved the testing to a different setting where the Nexus 5020 was connected to eight hosts running NetPipe.

NetPipe is a remarkable performance benchmark tool that works particularly well with switches because you can measure end-to-end (host-to-host) performance and record (in Excel-compatible format) how those results vary when using different data transfer sizes.

A summary of what you can do with NetPipe is shown in the figure here (screen image).

In essence you can set NetPipe to use one-way or bidirectional data transfers and increase the data transfer size gradually within a range, recording the transfer rate in megabytes per second and the latency in microseconds.

I ran my tests with a data size range from 1 byte to 8,198 bytes, but for clarity I am not listing the whole range of results, only a few, following a power-of-two pattern.

Also, to mimic a more realistic working condition, I ran the same tests first without any other traffic on the switch, and then added one and two competing flows of traffic.

Finally, to get a better feel for how much the switch impacts transfer rate and latency, I ran the same test back to back, in essence replacing the switch with a direct connection between the two hosts.




It's interesting to note how the transfer rate increases gradually with higher data sizes, reaching numbers very close to the theoretical capacity of 10G Ethernet.



The latency numbers, where lower is better, are obviously the most important proof of the switch's responsiveness. Even if we consider the best results with the Nexus 5020 in the path, the delta with the back-to-back connection stays between 3 and 3.5 microseconds, which is essentially the latency added by the switch.

This number is not only very close to what Cisco suggests for the 5020, but is probably the shortest latency that you can put between your applications and your data.

A step for network consolidation
When reviewing a product such as the Nexus 5000 that bears the first implementation of an innovative technology, it is often difficult to keep judgments about the technology separate from judgments about the solution. That is probably why, at the end of my evaluation, I tend to think of the Nexus 5020 and of FCoE as a whole -- which they are, because at the moment there is no other switch that lets you implement the new protocol.

However, even if I break apart the two, each piece has merits of its own. I like the unified view that FCoE brings to network transport and I like the speed and feather-light impact that the Nexus 5020 brings to that union.

Obviously the Nexus 5000 is a first-version product, and however well rounded it is, it's easy to predict that future versions will raise the bar even further. As for the technology, perhaps the greatest endorsement that FCoE has received is that Brocade is planning to ship a rival to the Nexus 5000, based on FCoE, by year's end. Obviously the old "if you can't beat them, join them" battle cry of competition is still alive and well in the storage world.

