OpenStack Network Performance

Short network performance analysis of some OpenStack plugins.

I checked how well TCP works with OpenVirtualSwitch and LinuxBridge, with Docker added for comparison.

Here's a simplified flow of an IP packet in Linux:

  1. The kernel device driver transfers the Ethernet frame from the network interface card into its buffer.
  2. After the device raises an interrupt, a software interrupt starts and the kernel performs checks on the IP packet.
  3. The data is copied from the socket's receive buffer to the user space application.

But if there are no free buffers on the network interface or in the kernel, the packet is dropped. That is why there is congestion control on the sender's side.
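Whether that is happening can be checked with standard tools; a few examples ( the interface name eth0 is a placeholder ):

    # Drops on the network interface, i.e. no free buffers in the NIC ring:
    ip -s link show eth0

    # Drops in the kernel backlog ( second column of each row ):
    cat /proc/net/softnet_stat

    # Protocol-level counters, including TCP retransmits:
    netstat -s | grep -i -e retrans -e drop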

You can tell how efficient your connection is, for example, by checking the TCP ``cwnd`` value over time.

If packets are dropped, the congestion window size will be reset.
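For example, ``ss`` prints the live ``cwnd`` and ``ssthresh`` of a connection; the destination address below is a placeholder:

    # Show TCP internals for connections to the receiver, refreshed every second:
    watch -n 1 'ss -tin dst 10.0.0.2'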

The following experiments were performed with virtual machines booted by OpenStack, configured with either openvswitch or linuxbridge.

In all experiments the sender transmits one TCP stream to the receiver for 25 seconds with the following command:

    iperf -r -i 2 -m -t 25 -c <server-ip>
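Here ``<server-ip>`` stands for the receiver's address; ``-r`` runs the test in the reverse direction as well, ``-i 2`` prints statistics every 2 seconds and ``-m`` reports the MSS. On the receiving side a plain iperf server was presumably running:

    iperf -s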

At the same time, the values of ``cwnd`` and ``ssthresh`` were recorded, to see how the kernel grows the congestion window size, which should converge to the bandwidth-delay product.
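One way to record these values ( not necessarily the method used here ) is the ``tcp_probe`` kernel module, which logs ``snd_cwnd`` and ``ssthresh`` for connections on a given port:

    # Log TCP state for the default iperf port; full=1 logs on every ACK:
    modprobe tcp_probe port=5001 full=1
    cat /proc/net/tcpprobe > /tmp/tcpprobe.log &

    # ... run the test, then stop logging and unload the module:
    kill %1
    rmmod tcp_probe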

Default values of the TCP memory settings were used ( /proc/sys/net/ipv4/tcp_mem ). They are usually set at virtual machine boot time, based on the amount of available memory.
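They can be inspected directly:

    # Low, pressure and high thresholds for total TCP memory, in pages:
    cat /proc/sys/net/ipv4/tcp_mem

    # Per-socket receive and send buffer limits ( min, default, max in bytes ):
    cat /proc/sys/net/ipv4/tcp_rmem /proc/sys/net/ipv4/tcp_wmem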

Graph 1: LinuxBridge:


We can see how fast the congestion window size grows, but then some packet drops occur: these are the red dots falling away from the red line.

Graph 2: OpenVirtualSwitch:


The Neutron ML2 openvswitch plugin from the Juno release of OpenStack was used. In the current version of Neutron, a packet has to pass through 9 bridges to reach the virtual machine.
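The chain can be inspected on a compute node; the bridge names below are the usual ones for a Juno setup and may differ per deployment:

    # Open vSwitch bridges ( typically br-int, br-tun, br-ex ):
    ovs-vsctl show

    # Per-VM Linux bridges ( typically qbrXXX ) between the tap device and br-int:
    brctl show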


On the graph we can see that after the bandwidth peak is reached, an even bigger number of packet drops occurs.

A standard TCP implementation, which follows the algorithm proposed by Van Jacobson in 1988, uses less bandwidth than is available and sends less data than the congestion window size allows. Still, there are some issues with the packet flow.

Graph 3: Docker:


Docker connects each container to a Linux bridge ( the ``bridge`` kernel module ) through its own veth pair; by default all containers share one ``docker0`` bridge.
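This is easy to verify on the Docker host:

    # The default docker0 bridge, with one veth interface per running container:
    brctl show docker0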

``snd_cwnd`` -- the congestion window size. It is constantly readjusted by the sender depending on the congestion status of the network.

``snd_ssthresh`` -- the slow start threshold; how ``cwnd`` is incremented depends on this value. If ``cwnd`` is less than ``ssthresh``, ``cwnd`` is incremented by 1 every time an ACK is received ( slow start ). If ``cwnd`` is greater than ``ssthresh``, ``cwnd`` is incremented by 1/``cwnd`` per ACK ( congestion avoidance ). If a packet is dropped, TCP resets ``cwnd`` and the threshold.
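As a rough sketch of this classic ( Reno-style ) behaviour, counting in whole segments ( the real kernel logic, such as the CUBIC algorithm used below, is more involved ):

    cwnd=1 ssthresh=64 acked=0

    on_ack() {                      # called for every received ACK
        if [ "$cwnd" -lt "$ssthresh" ]; then
            cwnd=$((cwnd + 1))      # slow start: +1 per ACK
        else
            acked=$((acked + 1))    # congestion avoidance: +1/cwnd per ACK,
            if [ "$acked" -ge "$cwnd" ]; then
                cwnd=$((cwnd + 1))  # i.e. +1 after a full window of ACKs
                acked=0
            fi
        fi
    }

    on_loss() {                     # called when a packet is dropped
        ssthresh=$((cwnd / 2))      # remember half of the current window
        cwnd=1                      # and restart from a small window
    }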

Conclusion

The experiments and analysis have shown that TCP packets may be dropped by the receiving side in varying amounts, depending on which bridge implementation you use.

A VPS with 1 GB of RAM was used for the openvswitch and linuxbridge experiments. No differences in memory consumption were noticed on the VPS during the tests.

Software Versions

  • Linux kernel 3.16.5 ( Debian )
  • openvswitch 2.3.0
  • openstack Juno release
  • docker.io 1.3.0
  • lxc 1.0.6

The CUBIC congestion control algorithm was used during all tests.
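The algorithm in use can be checked via sysctl:

    # Current congestion control algorithm and the ones available:
    sysctl net.ipv4.tcp_congestion_control
    sysctl net.ipv4.tcp_available_congestion_control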

31 October, 2014