ofproto-dpif: Improve load balancing in dp_hash select groups.
ovn-kubernetes users observe uneven distribution of traffic among
service backends.  It gets noticeably worse when 9+ backends are
configured.  For example:

    Server        connections
    server-6nbg7     634
    server-72br8     326  <-----
    server-7b8rm     605
    server-9tnz7     655
    server-c84hw     596
    server-mjskl     622
    server-ptkcc     336  <-----
    server-rl7pk     592
    server-xd2cb     634

Here we can see that out of 5000 connections total, servers '72br8'
and 'ptkcc' received about half as many connections as any other
server.  With 10 backends, 4 servers will receive 2x less traffic,
and so on.  The situation is common, because Kubernetes users
frequently scale their applications with something like
'kubectl scale --replicas=10'.

Services are implemented as OVN load balancers, which are in turn
implemented as OpenFlow select groups with the dp_hash selection
method.  Balancing is implemented by allocating a hash space (the
number of hashes is a power of 2) and distributing the hashes
between the group buckets using Webster's method, taking bucket
weights into account.
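
The idea behind that distribution step can be sketched as below
(illustrative only, not the actual OVS code): each hash is granted
to the bucket that currently has the highest quotient
weight / (2 * allocated + 1).

    /* Illustrative sketch of Webster's method: grant each of
     * 'n_hashes' hash values to the bucket with the currently
     * highest quotient weight / (2 * allocated + 1). */
    static void
    webster_distribute(const double *weights, int *allocated,
                       int n_buckets, int n_hashes)
    {
        for (int b = 0; b < n_buckets; b++) {
            allocated[b] = 0;
        }
        for (int h = 0; h < n_hashes; h++) {
            int best = 0;
            double best_q = -1.0;

            for (int b = 0; b < n_buckets; b++) {
                double q = weights[b] / (2 * allocated[b] + 1);

                if (q > best_q) {
                    best_q = q;
                    best = b;
                }
            }
            allocated[best]++;
        }
    }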

This works well enough when buckets have different weights, because
OVS needs to allocate a larger hash space in order to accommodate
the weight difference.  However, in the case of OVN load balancers,
all the buckets (backends) have the same weight.  This leads to
allocation of a hash space just large enough to fit all the backends
(the closest power of 2).  For example, if we have 10 backends
(buckets), then 16 hashes will be allocated.  After the hashes are
distributed, we end up with 6 buckets having 2 hashes each and 4
buckets having 1 hash each.  So, each of the 6 buckets will receive
on average 2x more traffic than each of the remaining 4.
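
Feeding these numbers into the webster_distribute() sketch above
shows the skew directly (a minimal test program, assuming equal
weights):

    #include <stdio.h>

    int
    main(void)
    {
        double weights[10];
        int allocated[10];

        /* 10 equally weighted backends, 16 hashes. */
        for (int b = 0; b < 10; b++) {
            weights[b] = 1.0;
        }
        webster_distribute(weights, allocated, 10, 16);

        for (int b = 0; b < 10; b++) {
            printf("bucket %d: %d hashes\n", b, allocated[b]);
        }
        /* Buckets 0-5 end up with 2 hashes, buckets 6-9 with 1. */
        return 0;
    }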

The problem can be mitigated by expanding the hash space, and this
is already done for cases with a small number of buckets by requiring
the hash space to contain at least 16 hashes.  However, that is just
not large enough for 9 or 10 buckets to provide a good distribution.
At the same time, blindly increasing the limit is not a good solution
either, as we do not want too large a hash space for a small number
of buckets: each hash is a separate datapath flow, and having too
many of them may cause performance issues.

The approach taken in this change is to ensure that the hash space
is at least 4 times larger than the number of buckets, but not larger
than the maximum allowed (256).  This provides a better distribution
while not unnecessarily exploding the number of datapath flows for
services that do not have that many backends.
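
Roughly, the sizing becomes the following (a simplified sketch, not
the literal patch; the coefficient is kept as a parameter here for
the comparison below, and the helper name is made up):

    #include <stdint.h>

    /* Smallest power of 2 that is >= coeff * n_buckets, clamped to
     * the [16, 256] range.  The change proposes coeff == 4. */
    static uint32_t
    select_group_n_hashes(uint32_t n_buckets, uint32_t coeff)
    {
        uint32_t p2 = 1;

        while (p2 < coeff * n_buckets) {
            p2 <<= 1;
        }
        return p2 < 16 ? 16 : p2 > 256 ? 256 : p2;
    }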

Here is some data to demonstrate why 4 was chosen as the coefficient:

  coeff.  :   1         2         3         4         5        100
  -------------------------------------------------------------------
  AvgDiff : 43.1 %    27.1 %    18.3 %    15.1 %    13.6 %    10.9 %
  MaxDiff : 50.0 %    33.3 %    25.0 %    20.0 %    20.0 %    20.0 %
  AvgDev  : 24.9 %    13.4 %     8.5 %     6.9 %     6.1 %     4.8 %
  MaxDev  : 35.4 %    20.4 %    14.4 %    11.2 %    11.2 %    11.2 %
  --------+----------------------------------------------------------
    16    :   1         1         1         1         1         -
    32    :  17         9         6         5         4         -
    64    :  33        17        11         9         7         -
   128    :  65        33        22        17        13         1
   256    : 129        65        43        33        26         2
  --------+----------------------------------------------------------
           current                       proposed

The table shows the average and maximum load difference (Diff)
between backends across groups with 1 to 64 equally weighted
backends, as well as the average and maximum standard deviation
(Dev) of the load distribution for the same groups.  For example,
with a coefficient of 2, the maximum difference between two backends
is 33% and the maximum standard deviation is 20.4%.  With the current
logic (a coefficient of 1) the maximum difference is as high as 50%,
as shown in the example at the beginning, with a standard deviation
of 35.4%.

The bottom half of the table shows from how many backends a
particular number of hashes starts being used.  For example, with a
coefficient of 3 we will have 16 hashes for 1 to 5 buckets, 32 hashes
for 6-10, 64 hashes for 11-21, and so on.
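
These boundaries can be reproduced with the select_group_n_hashes()
sketch above:

    select_group_n_hashes(5, 3);   /* -> 16 hashes. */
    select_group_n_hashes(6, 3);   /* -> 32 hashes. */
    select_group_n_hashes(10, 3);  /* -> 32 hashes. */
    select_group_n_hashes(11, 3);  /* -> 64 hashes. */
    select_group_n_hashes(22, 3);  /* -> 128 hashes. */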

According to the table, 4 is about where we achieve a good enough
standard deviation for the load (11.2%) while still not creating too
many hashes for cases with a low number of backends.  The standard
deviation also doesn't go down much further with higher coefficients.

The data is aggregated for up to 64 backends, because with higher
numbers we get close to the cap of 256 hashes, so the deviation
increases.  However, the load difference with that many backends
should have a lower impact on the cluster in general.  The change
improves the cases with 64+ backends as well, but to get very good
numbers there we would need to consider raising the upper limit of
the hash space, which is a separate change to think about.

The added test checks that the standard deviation of the load
distribution between buckets is below 12.5% for all cases up to 64
buckets.  It's just a pretty number that I've chosen and that should,
IMO, be good enough.
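
For reference, the deviation can be computed as in the sketch below
(one possible definition, assuming the load is proportional to the
number of hashes a bucket owns; the actual test may measure it
differently):

    #include <math.h>

    /* Standard deviation of the per-bucket hash counts, expressed
     * as a percentage of the ideal (perfectly even) share. */
    static double
    load_stddev_pct(const int *allocated, int n_buckets, int n_hashes)
    {
        double ideal = (double) n_hashes / n_buckets;
        double sum = 0.0;

        for (int b = 0; b < n_buckets; b++) {
            double d = allocated[b] - ideal;

            sum += d * d;
        }
        return 100.0 * sqrt(sum / n_buckets) / ideal;
    }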

Note: when the number of buckets is a power of 2, we get an ideal
distribution with zero deviation, because the hashes can be evenly
mapped to the buckets.

Note 2: the change doesn't affect the distribution for 4 or fewer
backends, since those groups still get the minimum of 16 hashes.

Reported-at: https://issues.redhat.com/browse/FDP-826
Acked-by: Eelco Chaudron <echaudro@redhat.com>
Acked-by: Aaron Conole <aconole@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>