/*
* Copyright (c) 2008-2018 Nicira, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <config.h>
#include "dpif-netlink.h"
#include <ctype.h>
#include <errno.h>
#include <fcntl.h>
#include <inttypes.h>
#include <net/if.h>
#include <linux/types.h>
#include <linux/pkt_sched.h>
#include <poll.h>
#include <stdlib.h>
#include <strings.h>
#include <sys/epoll.h>
#include <sys/stat.h>
#include <unistd.h>
#include "bitmap.h"
#include "dpif-netlink-rtnl.h"
#include "dpif-provider.h"
#include "fat-rwlock.h"
#include "flow.h"
#include "netdev-linux.h"
#include "netdev-offload.h"
#include "netdev-provider.h"
#include "netdev-vport.h"
#include "netdev.h"
#include "netlink-conntrack.h"
#include "netlink-notifier.h"
#include "netlink-socket.h"
#include "netlink.h"
#include "netnsid.h"
#include "odp-util.h"
#include "openvswitch/dynamic-string.h"
#include "openvswitch/flow.h"
#include "openvswitch/hmap.h"
#include "openvswitch/match.h"
#include "openvswitch/ofpbuf.h"
#include "openvswitch/poll-loop.h"
#include "openvswitch/shash.h"
#include "openvswitch/thread.h"
#include "openvswitch/usdt-probes.h"
#include "openvswitch/vlog.h"
#include "packets.h"
#include "random.h"
#include "sset.h"
#include "timeval.h"
#include "unaligned.h"
#include "util.h"
VLOG_DEFINE_THIS_MODULE(dpif_netlink);
#ifdef _WIN32
#include "wmi.h"
enum { WINDOWS = 1 };
#else
enum { WINDOWS = 0 };
#endif
enum { MAX_PORTS = USHRT_MAX };
/* This ethtool flag was introduced in Linux 2.6.24, so it might be
* missing if we have old headers. */
#define ETH_FLAG_LRO (1 << 15) /* LRO is enabled */
#define FLOW_DUMP_MAX_BATCH 50
#define OPERATE_MAX_OPS 50
#ifndef EPOLLEXCLUSIVE
#define EPOLLEXCLUSIVE (1u << 28)
#endif
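/* Illustrative sketch, not part of the original file: the fallback
 * definition above only matters on systems with old kernel headers.
 * Channel sockets are added to a handler's epoll set with
 * EPOLLEXCLUSIVE so that a single upcall wakes at most one handler
 * thread instead of all of them, roughly along these lines
 * ('port_idx' is a hypothetical index here):
 *
 *     struct epoll_event event;
 *
 *     memset(&event, 0, sizeof event);
 *     event.events = EPOLLIN | EPOLLEXCLUSIVE;
 *     event.data.u32 = port_idx;
 *     epoll_ctl(handler->epoll_fd, EPOLL_CTL_ADD, nl_sock_fd(sock),
 *               &event);
 */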
#define OVS_DP_F_UNSUPPORTED (1u << 31)
/* This PID is not used by the kernel datapath when using dispatch per CPU,
* but it is required to be set (not zero). */
#define DPIF_NETLINK_PER_CPU_PID UINT32_MAX
struct dpif_netlink_dp {
/* Generic Netlink header. */
uint8_t cmd;
/* struct ovs_header. */
int dp_ifindex;
/* Attributes. */
const char *name; /* OVS_DP_ATTR_NAME. */
const uint32_t *upcall_pid; /* OVS_DP_ATTR_UPCALL_PID. */
uint32_t user_features; /* OVS_DP_ATTR_USER_FEATURES */
uint32_t cache_size; /* OVS_DP_ATTR_MASKS_CACHE_SIZE */
const struct ovs_dp_stats *stats; /* OVS_DP_ATTR_STATS. */
const struct ovs_dp_megaflow_stats *megaflow_stats;
/* OVS_DP_ATTR_MEGAFLOW_STATS.*/
const uint32_t *upcall_pids; /* OVS_DP_ATTR_PER_CPU_PIDS */
uint32_t n_upcall_pids;
};
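/* Illustrative sketch, not part of the original file: for per-CPU
 * dispatch, a datapath request can carry one Netlink PID per core in
 * 'upcall_pids'/'n_upcall_pids'.  Filling them might look roughly like
 * this ('n_cores' and 'handlers' are hypothetical names):
 *
 *     uint32_t *pids = xmalloc(n_cores * sizeof *pids);
 *
 *     for (int i = 0; i < n_cores; i++) {
 *         pids[i] = nl_sock_pid(handlers[i % n_handlers].sock);
 *     }
 *     dp_request.upcall_pids = pids;
 *     dp_request.n_upcall_pids = n_cores;
 */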
static void dpif_netlink_dp_init(struct dpif_netlink_dp *);
static int dpif_netlink_dp_from_ofpbuf(struct dpif_netlink_dp *,
const struct ofpbuf *);
static void dpif_netlink_dp_dump_start(struct nl_dump *);
static int dpif_netlink_dp_transact(const struct dpif_netlink_dp *request,
struct dpif_netlink_dp *reply,
struct ofpbuf **bufp);
static int dpif_netlink_dp_get(const struct dpif *,
struct dpif_netlink_dp *reply,
struct ofpbuf **bufp);
static int
dpif_netlink_set_features(struct dpif *dpif_, uint32_t new_features);
static void
dpif_netlink_unixctl_dispatch_mode(struct unixctl_conn *conn, int argc,
const char *argv[], void *aux);
struct dpif_netlink_flow {
/* Generic Netlink header. */
uint8_t cmd;
/* struct ovs_header. */
unsigned int nlmsg_flags;
int dp_ifindex;
/* Attributes.
*
* The 'stats' member points to 64-bit data that might only be aligned on
* 32-bit boundaries, so get_unaligned_u64() should be used to access its
* values.
*
* If 'actions' is nonnull then OVS_FLOW_ATTR_ACTIONS will be included in
* the Netlink version of the command, even if actions_len is zero. */
const struct nlattr *key; /* OVS_FLOW_ATTR_KEY. */
size_t key_len;
const struct nlattr *mask; /* OVS_FLOW_ATTR_MASK. */
size_t mask_len;
const struct nlattr *actions; /* OVS_FLOW_ATTR_ACTIONS. */
size_t actions_len;
ovs_u128 ufid; /* OVS_FLOW_ATTR_FLOW_ID. */
bool ufid_present; /* Is there a UFID? */
bool ufid_terse; /* Skip serializing key/mask/acts? */
const struct ovs_flow_stats *stats; /* OVS_FLOW_ATTR_STATS. */
const uint8_t *tcp_flags; /* OVS_FLOW_ATTR_TCP_FLAGS. */
const ovs_32aligned_u64 *used; /* OVS_FLOW_ATTR_USED. */
bool clear; /* OVS_FLOW_ATTR_CLEAR. */
bool probe; /* OVS_FLOW_ATTR_PROBE. */
};
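/* Illustrative sketch, not part of the original file: since 'stats'
 * and 'used' may only be 32-bit aligned in a Netlink reply, reads
 * should go through the helpers from "unaligned.h", roughly:
 *
 *     uint64_t n_packets = flow->stats
 *                          ? get_unaligned_u64(&flow->stats->n_packets)
 *                          : 0;
 *     long long int used = flow->used
 *                          ? get_32aligned_u64(flow->used) : 0;
 */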
static void dpif_netlink_flow_init(struct dpif_netlink_flow *);
static int dpif_netlink_flow_from_ofpbuf(struct dpif_netlink_flow *,
const struct ofpbuf *);
static void dpif_netlink_flow_to_ofpbuf(const struct dpif_netlink_flow *,
struct ofpbuf *);
static int dpif_netlink_flow_transact(struct dpif_netlink_flow *request,
struct dpif_netlink_flow *reply,
struct ofpbuf **bufp);
static void dpif_netlink_flow_get_stats(const struct dpif_netlink_flow *,
struct dpif_flow_stats *);
static void dpif_netlink_flow_to_dpif_flow(struct dpif_flow *,
const struct dpif_netlink_flow *);
/* One of the dpif channels between the kernel and userspace. */
struct dpif_channel {
struct nl_sock *sock; /* Netlink socket. */
long long int last_poll; /* Last time this channel was polled. */
};
#ifdef _WIN32
#define VPORT_SOCK_POOL_SIZE 1
/* On Windows, there is no native support for epoll. There are equivalent
* interfaces though, that are not used currently. For simplicity, a pool of
* netlink sockets is used. Each socket is represented by 'struct
* dpif_windows_vport_sock'. Since it is a pool, multiple OVS ports may be
* sharing the same socket. In the future, we can add a reference count and
* such fields. */
struct dpif_windows_vport_sock {
struct nl_sock *nl_sock; /* netlink socket. */
};
#endif
struct dpif_handler {
/* per-vport dispatch mode. */
struct epoll_event *epoll_events;
int epoll_fd; /* epoll fd that includes channel socks. */
int n_events; /* Num events returned by epoll_wait(). */
int event_offset; /* Offset into 'epoll_events'. */
/* per-cpu dispatch mode. */
struct nl_sock *sock; /* Each handler thread holds one netlink
socket. */
#ifdef _WIN32
/* Pool of sockets. */
struct dpif_windows_vport_sock *vport_sock_pool;
size_t last_used_pool_idx; /* Index to aid in allocating a
socket in the pool to a port. */
#endif
};
/* Datapath interface for the openvswitch Linux kernel module. */
struct dpif_netlink {
struct dpif dpif;
int dp_ifindex;
uint32_t user_features;
/* Upcall messages. */
struct fat_rwlock upcall_lock;
struct dpif_handler *handlers;
uint32_t n_handlers; /* Num of upcall handlers. */
/* Per-vport dispatch mode. */
struct dpif_channel *channels; /* Array of channels for each port. */
int uc_array_size; /* Size of 'handler->channels' and */
/* 'handler->epoll_events'. */
/* Change notification. */
struct nl_sock *port_notifier; /* vport multicast group subscriber. */
bool refresh_channels;
};
static void report_loss(struct dpif_netlink *, struct dpif_channel *,
uint32_t ch_idx, uint32_t handler_id);
static struct vlog_rate_limit error_rl = VLOG_RATE_LIMIT_INIT(9999, 5);
/* Generic Netlink family numbers for OVS.
*
* Initialized by dpif_netlink_init(). */
static int ovs_datapath_family;
static int ovs_vport_family;
static int ovs_flow_family;
static int ovs_packet_family;
static int ovs_meter_family;
static int ovs_ct_limit_family;
/* Generic Netlink multicast groups for OVS.
*
* Initialized by dpif_netlink_init(). */
static unsigned int ovs_vport_mcgroup;
/* If true, tunnel devices are created using OVS compat/genetlink.
* If false, tunnel devices are created with rtnetlink and using lightweight
* tunnels. If we fail to create the tunnel with rtnetlink+LWT, then we fall
* back to using the compat interface. */
static bool ovs_tunnels_out_of_tree = true;
static int dpif_netlink_init(void);
static int open_dpif(const struct dpif_netlink_dp *, struct dpif **);
static uint32_t dpif_netlink_port_get_pid(const struct dpif *,
odp_port_t port_no);
static void dpif_netlink_handler_uninit(struct dpif_handler *handler);
static int dpif_netlink_refresh_handlers_vport_dispatch(struct dpif_netlink *,
uint32_t n_handlers);
static void destroy_all_channels(struct dpif_netlink *);
static int dpif_netlink_refresh_handlers_cpu_dispatch(struct dpif_netlink *);
static void destroy_all_handlers(struct dpif_netlink *);
static void dpif_netlink_vport_to_ofpbuf(const struct dpif_netlink_vport *,
struct ofpbuf *);
static int dpif_netlink_vport_from_ofpbuf(struct dpif_netlink_vport *,
const struct ofpbuf *);
static int dpif_netlink_port_query__(const struct dpif_netlink *dpif,
odp_port_t port_no, const char *port_name,
struct dpif_port *dpif_port);
static int
create_nl_sock(struct dpif_netlink *dpif OVS_UNUSED, struct nl_sock **sockp)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
#ifndef _WIN32
return nl_sock_create(NETLINK_GENERIC, sockp);
#else
/* Pick netlink sockets to use in a round-robin fashion from each
* handler's pool of sockets. */
struct dpif_handler *handler = &dpif->handlers[0];
struct dpif_windows_vport_sock *sock_pool = handler->vport_sock_pool;
size_t index = handler->last_used_pool_idx;
/* A pool of sockets is allocated when the handler is initialized. */
if (sock_pool == NULL) {
*sockp = NULL;
return EINVAL;
}
ovs_assert(index < VPORT_SOCK_POOL_SIZE);
*sockp = sock_pool[index].nl_sock;
ovs_assert(*sockp);
index = (index == VPORT_SOCK_POOL_SIZE - 1) ? 0 : index + 1;
handler->last_used_pool_idx = index;
return 0;
#endif
}
static void
close_nl_sock(struct nl_sock *sock)
{
#ifndef _WIN32
nl_sock_destroy(sock);
#endif
}
static struct dpif_netlink *
dpif_netlink_cast(const struct dpif *dpif)
{
dpif_assert_class(dpif, &dpif_netlink_class);
return CONTAINER_OF(dpif, struct dpif_netlink, dpif);
}
static inline bool
dpif_netlink_upcall_per_cpu(const struct dpif_netlink *dpif) {
return !!((dpif)->user_features & OVS_DP_F_DISPATCH_UPCALL_PER_CPU);
}
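/* Illustrative usage note, not part of the original file: code that
 * manages handlers typically branches on this helper to pick the
 * dispatch mode, e.g. ('n_handlers' stands in for the requested
 * number of handler threads):
 *
 *     if (dpif_netlink_upcall_per_cpu(dpif)) {
 *         error = dpif_netlink_refresh_handlers_cpu_dispatch(dpif);
 *     } else {
 *         error = dpif_netlink_refresh_handlers_vport_dispatch(
 *                     dpif, n_handlers);
 *     }
 */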
static int
dpif_netlink_enumerate(struct sset *all_dps,
const struct dpif_class *dpif_class OVS_UNUSED)
{
struct nl_dump dump;
uint64_t reply_stub[NL_DUMP_BUFSIZE / 8];
struct ofpbuf msg, buf;
int error;
error = dpif_netlink_init();
if (error) {
return error;
}
ofpbuf_use_stub(&buf, reply_stub, sizeof reply_stub);
dpif_netlink_dp_dump_start(&dump);
while (nl_dump_next(&dump, &msg, &buf)) {
struct dpif_netlink_dp dp;
if (!dpif_netlink_dp_from_ofpbuf(&dp, &msg)) {
sset_add(all_dps, dp.name);
}
}
ofpbuf_uninit(&buf);
return nl_dump_done(&dump);
}
static int
dpif_netlink_open(const struct dpif_class *class OVS_UNUSED, const char *name,
bool create, struct dpif **dpifp)
{
struct dpif_netlink_dp dp_request, dp;
struct ofpbuf *buf;
uint32_t upcall_pid;
int error;
error = dpif_netlink_init();
if (error) {
return error;
}
/* Create or look up datapath. */
dpif_netlink_dp_init(&dp_request);
upcall_pid = 0;
dp_request.upcall_pid = &upcall_pid;
dp_request.name = name;
if (create) {
dp_request.cmd = OVS_DP_CMD_NEW;
} else {
dp_request.cmd = OVS_DP_CMD_GET;
error = dpif_netlink_dp_transact(&dp_request, &dp, &buf);
if (error) {
return error;
}
dp_request.user_features = dp.user_features;
ofpbuf_delete(buf);
/* Use OVS_DP_CMD_SET to report user features */
dp_request.cmd = OVS_DP_CMD_SET;
}
/* Some older kernels will not reject unknown features. This will cause
* 'ovs-vswitchd' to incorrectly assume a feature is supported. In order to
* test for that, we attempt to set a feature that we know is not supported
* by any kernel. If this feature is not rejected, we can assume we are
* running on one of these older kernels.
*/
dp_request.user_features |= OVS_DP_F_UNALIGNED;
dp_request.user_features |= OVS_DP_F_VPORT_PIDS;
dp_request.user_features |= OVS_DP_F_UNSUPPORTED;
error = dpif_netlink_dp_transact(&dp_request, NULL, NULL);
if (error) {
/* The Open vSwitch kernel module has two modes for dispatching
* upcalls: per-vport and per-cpu.
*
* When dispatching upcalls per-vport, the kernel will
* send the upcall via a Netlink socket that has been selected based on
* the vport that received the packet that is causing the upcall.
*
* When dispatching upcall per-cpu, the kernel will send the upcall via
* a Netlink socket that has been selected based on the cpu that
* received the packet that is causing the upcall.
*
* First we test to see if the kernel module supports per-cpu
* dispatching (the preferred method). If it does not support per-cpu
* dispatching, we fall back to the per-vport dispatch mode.
*/
dp_request.user_features &= ~OVS_DP_F_UNSUPPORTED;
dp_request.user_features &= ~OVS_DP_F_VPORT_PIDS;
dp_request.user_features |= OVS_DP_F_DISPATCH_UPCALL_PER_CPU;
error = dpif_netlink_dp_transact(&dp_request, &dp, &buf);
if (error == EOPNOTSUPP) {
dp_request.user_features &= ~OVS_DP_F_DISPATCH_UPCALL_PER_CPU;
dp_request.user_features |= OVS_DP_F_VPORT_PIDS;
error = dpif_netlink_dp_transact(&dp_request, &dp, &buf);
}
if (error) {
return error;
}
error = open_dpif(&dp, dpifp);
dpif_netlink_set_features(*dpifp, OVS_DP_F_TC_RECIRC_SHARING);
} else {
VLOG_INFO("Kernel does not correctly support feature negotiation. "
"Using standard features.");
dp_request.cmd = OVS_DP_CMD_SET;
dp_request.user_features = 0;
dp_request.user_features |= OVS_DP_F_UNALIGNED;
dp_request.user_features |= OVS_DP_F_VPORT_PIDS;
error = dpif_netlink_dp_transact(&dp_request, &dp, &buf);
if (error) {
return error;
}
error = open_dpif(&dp, dpifp);
}
ofpbuf_delete(buf);
if (create) {
VLOG_INFO("Datapath dispatch mode: %s",
dpif_netlink_upcall_per_cpu(dpif_netlink_cast(*dpifp)) ?
"per-cpu" : "per-vport");
}
return error;
}
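/* Illustrative usage note, not part of the original file: this
 * provider callback is not called directly.  It is reached through
 * the generic dpif layer, roughly:
 *
 *     struct dpif *dpif;
 *     int error = dpif_create_and_open("ovs-system", "system", &dpif);
 *
 *     if (!error) {
 *         ...
 *         dpif_close(dpif);
 *     }
 *
 * where "system" selects dpif_netlink_class and "ovs-system" is the
 * conventional kernel datapath name used by ovs-vswitchd.
 */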
static int
open_dpif(const struct dpif_netlink_dp *dp, struct dpif **dpifp)
{
struct dpif_netlink *dpif;
dpif = xzalloc(sizeof *dpif);
dpif->port_notifier = NULL;
fat_rwlock_init(&dpif->upcall_lock);
dpif_init(&dpif->dpif, &dpif_netlink_class, dp->name,
dp->dp_ifindex, dp->dp_ifindex);
dpif->dp_ifindex = dp->dp_ifindex;
dpif->user_features = dp->user_features;
*dpifp = &dpif->dpif;
return 0;
}
#ifdef _WIN32
static void
vport_delete_sock_pool(struct dpif_handler *handler)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
if (handler->vport_sock_pool) {
uint32_t i;
struct dpif_windows_vport_sock *sock_pool =
handler->vport_sock_pool;
for (i = 0; i < VPORT_SOCK_POOL_SIZE; i++) {
if (sock_pool[i].nl_sock) {
nl_sock_unsubscribe_packets(sock_pool[i].nl_sock);
nl_sock_destroy(sock_pool[i].nl_sock);
sock_pool[i].nl_sock = NULL;
}
}
free(handler->vport_sock_pool);
handler->vport_sock_pool = NULL;
}
}
static int
vport_create_sock_pool(struct dpif_handler *handler)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
struct dpif_windows_vport_sock *sock_pool;
size_t i;
int error = 0;
sock_pool = xzalloc(VPORT_SOCK_POOL_SIZE * sizeof *sock_pool);
for (i = 0; i < VPORT_SOCK_POOL_SIZE; i++) {
error = nl_sock_create(NETLINK_GENERIC, &sock_pool[i].nl_sock);
if (error) {
goto error;
}
/* Enable the netlink socket to receive packets. This is equivalent to
* calling nl_sock_join_mcgroup() to receive events. */
error = nl_sock_subscribe_packets(sock_pool[i].nl_sock);
if (error) {
goto error;
}
}
handler->vport_sock_pool = sock_pool;
handler->last_used_pool_idx = 0;
return 0;
error:
vport_delete_sock_pool(handler);
return error;
}
#endif /* _WIN32 */
/* Given the port number 'port_idx', extracts the pid of the netlink socket
 * associated with the port and assigns it to 'upcall_pid'. */
static bool
vport_get_pid(struct dpif_netlink *dpif, uint32_t port_idx,
uint32_t *upcall_pid)
{
    /* Since an nl_sock can only be assigned to either all or none of the
     * "dpif" channels, checking this single channel suffices. */
if (!dpif->channels[port_idx].sock) {
return false;
}
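    /* Windows builds use at most one handler thread. */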
ovs_assert(!WINDOWS || dpif->n_handlers <= 1);
*upcall_pid = nl_sock_pid(dpif->channels[port_idx].sock);
return true;
}
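/* Sets up 'sock' as the upcall channel for 'port_no': the socket is stored
 * in the per-datapath 'channels' array (growing it if necessary) and added
 * to every handler's epoll set with EPOLLEXCLUSIVE so that, where the
 * kernel supports it, an upcall wakes only one handler thread.  Returns 0
 * on success or a positive errno value on failure. */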
static int
vport_add_channel(struct dpif_netlink *dpif, odp_port_t port_no,
struct nl_sock *sock)
{
struct epoll_event event;
uint32_t port_idx = odp_to_u32(port_no);
size_t i;
int error;
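    /* No handler threads are running, so there is no epoll set to add the
     * socket to; just close it. */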
if (dpif->handlers == NULL) {
close_nl_sock(sock);
return 0;
}
/* We assume that the datapath densely chooses port numbers, which can
* therefore be used as an index into 'channels' and 'epoll_events' of
* 'dpif'. */
if (port_idx >= dpif->uc_array_size) {
uint32_t new_size = port_idx + 1;
if (new_size > MAX_PORTS) {
VLOG_WARN_RL(&error_rl, "%s: datapath port %"PRIu32" too big",
dpif_name(&dpif->dpif), port_no);
return EFBIG;
}
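        /* Grow the shared 'channels' array and clear the newly added
         * slots. */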
dpif->channels = xrealloc(dpif->channels,
new_size * sizeof *dpif->channels);
for (i = dpif->uc_array_size; i < new_size; i++) {
dpif->channels[i].sock = NULL;
}
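        /* Grow each handler's private 'epoll_events' array to match. */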
for (i = 0; i < dpif->n_handlers; i++) {
struct dpif_handler *handler = &dpif->handlers[i];
handler->epoll_events = xrealloc(handler->epoll_events,
new_size * sizeof *handler->epoll_events);
}
dpif->uc_array_size = new_size;
}
memset(&event, 0, sizeof event);
event.events = EPOLLIN | EPOLLEXCLUSIVE;
event.data.u32 = port_idx;
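    /* Register the socket with every handler's epoll set.  EPOLLEXCLUSIVE
     * (where supported by the kernel) ensures that a single upcall wakes
     * only one of the handler threads. */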
for (i = 0; i < dpif->n_handlers; i++) {
struct dpif_handler *handler = &dpif->handlers[i];
#ifndef _WIN32
if (epoll_ctl(handler->epoll_fd, EPOLL_CTL_ADD, nl_sock_fd(sock),
&event) < 0) {
error = errno;
goto error;
}
#endif
}
dpif->channels[port_idx].sock = sock;
dpif->channels[port_idx].last_poll = LLONG_MIN;
return 0;
error:
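    /* Roll back: remove the socket from the epoll sets it was already
     * added to. */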
#ifndef _WIN32
while (i--) {
epoll_ctl(dpif->handlers[i].epoll_fd, EPOLL_CTL_DEL,
nl_sock_fd(sock), NULL);
}
#endif
dpif->channels[port_idx].sock = NULL;
return error;
}
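
/* Deletes the upcall channel for 'port_no' from 'dpif': removes its Netlink
 * socket from every handler's epoll set, resets each handler's pending event
 * state, and destroys the socket. */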
static void
vport_del_channels(struct dpif_netlink *dpif, odp_port_t port_no)
{
uint32_t port_idx = odp_to_u32(port_no);
size_t i;
if (!dpif->handlers || port_idx >= dpif->uc_array_size
|| !dpif->channels[port_idx].sock) {
return;
}
for (i = 0; i < dpif->n_handlers; i++) {
struct dpif_handler *handler = &dpif->handlers[i];
#ifndef _WIN32
epoll_ctl(handler->epoll_fd, EPOLL_CTL_DEL,
nl_sock_fd(dpif->channels[port_idx].sock), NULL);
#endif
handler->event_offset = handler->n_events = 0;
}
#ifndef _WIN32
nl_sock_destroy(dpif->channels[port_idx].sock);
#endif
dpif->channels[port_idx].sock = NULL;
}
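
/* Tears down all per-port upcall channels: tells the kernel to stop sending
 * upcalls for every vport (by setting its upcall PID to 0), deletes each
 * channel, and releases the handlers together with their epoll state.
 * The caller must hold 'dpif->upcall_lock' for writing. */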
static void
destroy_all_channels(struct dpif_netlink *dpif)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
unsigned int i;
if (!dpif->handlers) {
return;
}
    for (i = 0; i < dpif->uc_array_size; i++) {
struct dpif_netlink_vport vport_request;
uint32_t upcall_pids = 0;
if (!dpif->channels[i].sock) {
continue;
}
/* Turn off upcalls. */
dpif_netlink_vport_init(&vport_request);
vport_request.cmd = OVS_VPORT_CMD_SET;
vport_request.dp_ifindex = dpif->dp_ifindex;
vport_request.port_no = u32_to_odp(i);
vport_request.n_upcall_pids = 1;
vport_request.upcall_pids = &upcall_pids;
dpif_netlink_vport_transact(&vport_request, NULL, NULL);
vport_del_channels(dpif, u32_to_odp(i));
}
for (i = 0; i < dpif->n_handlers; i++) {
struct dpif_handler *handler = &dpif->handlers[i];
dpif_netlink_handler_uninit(handler);
free(handler->epoll_events);
}
free(dpif->channels);
free(dpif->handlers);
dpif->handlers = NULL;
dpif->channels = NULL;
dpif->n_handlers = 0;
dpif->uc_array_size = 0;
}
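
/* Closes the per-handler Netlink sockets used for per-CPU upcall dispatch
 * and frees the handler array.  The caller must hold 'dpif->upcall_lock'
 * for writing. */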
static void
destroy_all_handlers(struct dpif_netlink *dpif)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
int i = 0;
if (!dpif->handlers) {
return;
}
for (i = 0; i < dpif->n_handlers; i++) {
struct dpif_handler *handler = &dpif->handlers[i];
close_nl_sock(handler->sock);
}
free(dpif->handlers);
dpif->handlers = NULL;
dpif->n_handlers = 0;
}
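
/* Closes 'dpif_': destroys the port notifier, tears down the upcall sockets
 * for whichever dispatch mode is in use, and frees the dpif itself. */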
static void
dpif_netlink_close(struct dpif *dpif_)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
nl_sock_destroy(dpif->port_notifier);
fat_rwlock_wrlock(&dpif->upcall_lock);
if (dpif_netlink_upcall_per_cpu(dpif)) {
destroy_all_handlers(dpif);
} else {
destroy_all_channels(dpif);
}
fat_rwlock_unlock(&dpif->upcall_lock);
fat_rwlock_destroy(&dpif->upcall_lock);
free(dpif);
}
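
/* Requests deletion of the kernel datapath that 'dpif_' is attached to. */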
static int
dpif_netlink_destroy(struct dpif *dpif_)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct dpif_netlink_dp dp;
dpif_netlink_dp_init(&dp);
dp.cmd = OVS_DP_CMD_DEL;
dp.dp_ifindex = dpif->dp_ifindex;
return dpif_netlink_dp_transact(&dp, NULL, NULL);
}
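
/* Performs periodic work for 'dpif_': in per-vport dispatch mode, rebuilds
 * the handler channels when a refresh has been requested. */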
static bool
dpif_netlink_run(struct dpif *dpif_)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
if (!dpif_netlink_upcall_per_cpu(dpif)) {
if (dpif->refresh_channels) {
dpif->refresh_channels = false;
fat_rwlock_wrlock(&dpif->upcall_lock);
dpif_netlink_refresh_handlers_vport_dispatch(dpif,
dpif->n_handlers);
fat_rwlock_unlock(&dpif->upcall_lock);
}
}
return false;
}
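
/* Retrieves datapath statistics for 'dpif_' into 'stats', translating the
 * kernel counters and marking the ones the kernel does not report (mask and
 * cache statistics on older kernels) with all-ones values. */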
static int
dpif_netlink_get_stats(const struct dpif *dpif_, struct dpif_dp_stats *stats)
{
struct dpif_netlink_dp dp;
struct ofpbuf *buf;
int error;
error = dpif_netlink_dp_get(dpif_, &dp, &buf);
if (!error) {
memset(stats, 0, sizeof *stats);
if (dp.stats) {
stats->n_hit = get_32aligned_u64(&dp.stats->n_hit);
stats->n_missed = get_32aligned_u64(&dp.stats->n_missed);
stats->n_lost = get_32aligned_u64(&dp.stats->n_lost);
stats->n_flows = get_32aligned_u64(&dp.stats->n_flows);
}
if (dp.megaflow_stats) {
stats->n_masks = dp.megaflow_stats->n_masks;
stats->n_mask_hit = get_32aligned_u64(
&dp.megaflow_stats->n_mask_hit);
stats->n_cache_hit = get_32aligned_u64(
&dp.megaflow_stats->n_cache_hit);
if (!stats->n_cache_hit) {
/* Old kernels don't use this field and always
* report zero instead. Disable this stat. */
stats->n_cache_hit = UINT64_MAX;
}
} else {
stats->n_masks = UINT32_MAX;
stats->n_mask_hit = UINT64_MAX;
stats->n_cache_hit = UINT64_MAX;
}
ofpbuf_delete(buf);
}
return error;
}
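
/* Tells the kernel which Netlink PIDs to use for per-CPU upcall dispatch:
 * the 'n_upcall_pids' handler PIDs are mapped round-robin onto every core
 * the system may use and sent along with OVS_DP_F_DISPATCH_UPCALL_PER_CPU.
 * Fails if the kernel does not accept per-CPU dispatch. */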
static int
dpif_netlink_set_handler_pids(struct dpif *dpif_, const uint32_t *upcall_pids,
uint32_t n_upcall_pids)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
int largest_cpu_id = ovs_numa_get_largest_core_id();
struct dpif_netlink_dp request, reply;
struct ofpbuf *bufp;
uint32_t *corrected;
int error, i, n_cores;
if (largest_cpu_id == OVS_NUMA_UNSPEC) {
largest_cpu_id = -1;
}
    /* Some systems have non-contiguous cpu core ids.  count_total_cores()
     * would return an accurate count, but that count cannot be used here:
     * e.g. if the largest core_id of a system is cpu9 but the system only
     * has 4 cpus, the OVS kernel module would throw a "CPU mismatch"
     * warning.  With the MAX() in place, in this example we send an array of
     * size 10 and prevent the warning.  This has no bearing on the number of
     * threads created. */
n_cores = MAX(count_total_cores(), largest_cpu_id + 1);
VLOG_DBG("Dispatch mode(per-cpu): Setting up handler PIDs for %d cores",
n_cores);
dpif_netlink_dp_init(&request);
request.cmd = OVS_DP_CMD_SET;
request.name = dpif_->base_name;
request.dp_ifindex = dpif->dp_ifindex;
request.user_features = dpif->user_features |
OVS_DP_F_DISPATCH_UPCALL_PER_CPU;
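
    /* Build one PID entry per core, reusing the handler PIDs round-robin
     * when there are more cores than handler threads. */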
corrected = xcalloc(n_cores, sizeof *corrected);
for (i = 0; i < n_cores; i++) {
corrected[i] = upcall_pids[i % n_upcall_pids];
}
request.upcall_pids = corrected;
request.n_upcall_pids = n_cores;
error = dpif_netlink_dp_transact(&request, &reply, &bufp);
if (!error) {
dpif->user_features = reply.user_features;
ofpbuf_delete(bufp);
if (!dpif_netlink_upcall_per_cpu(dpif)) {
error = -EOPNOTSUPP;
}
}
free(corrected);
return error;
}
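/* Enables 'new_features' on the datapath through OVS_DP_CMD_SET and updates
 * the cached 'user_features'.  Returns 0 on success, a positive errno value
 * if the Netlink transaction fails, or -EOPNOTSUPP if the kernel did not
 * accept the requested features. */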
static int
dpif_netlink_set_features(struct dpif *dpif_, uint32_t new_features)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct dpif_netlink_dp request, reply;
struct ofpbuf *bufp;
int error;
dpif_netlink_dp_init(&request);
request.cmd = OVS_DP_CMD_SET;
request.name = dpif_->base_name;
request.dp_ifindex = dpif->dp_ifindex;
request.user_features = dpif->user_features | new_features;
error = dpif_netlink_dp_transact(&request, &reply, &bufp);
if (!error) {
dpif->user_features = reply.user_features;
ofpbuf_delete(bufp);
if (!(dpif->user_features & new_features)) {
return -EOPNOTSUPP;
}
}
return error;
}
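/* Returns the netdev type string (e.g. "internal", "geneve", "gre") that
 * corresponds to 'vport''s OVS_VPORT_TYPE_*, or "unknown" (with a
 * rate-limited warning) if the type is not recognized. */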
static const char *
get_vport_type(const struct dpif_netlink_vport *vport)
{
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 20);
switch (vport->type) {
case OVS_VPORT_TYPE_NETDEV: {
const char *type = netdev_get_type_from_name(vport->name);
return type ? type : "system";
}
case OVS_VPORT_TYPE_INTERNAL:
return "internal";
case OVS_VPORT_TYPE_GENEVE:
return "geneve";
case OVS_VPORT_TYPE_GRE:
return "gre";
case OVS_VPORT_TYPE_VXLAN:
return "vxlan";
case OVS_VPORT_TYPE_ERSPAN:
return "erspan";
case OVS_VPORT_TYPE_IP6ERSPAN:
return "ip6erspan";
case OVS_VPORT_TYPE_IP6GRE:
return "ip6gre";
case OVS_VPORT_TYPE_GTPU:
return "gtpu";
case OVS_VPORT_TYPE_SRV6:
return "srv6";
case OVS_VPORT_TYPE_BAREUDP:
return "bareudp";
case OVS_VPORT_TYPE_UNSPEC:
case __OVS_VPORT_TYPE_MAX:
break;
}
VLOG_WARN_RL(&rl, "dp%d: port `%s' has unsupported type %u",
vport->dp_ifindex, vport->name, (unsigned int) vport->type);
return "unknown";
}
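/* Translates the netdev 'type' string into the corresponding
 * OVS_VPORT_TYPE_* value, or OVS_VPORT_TYPE_UNSPEC if 'type' is not
 * recognized. */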
enum ovs_vport_type
netdev_to_ovs_vport_type(const char *type)
{
if (!strcmp(type, "tap") || !strcmp(type, "system")) {
return OVS_VPORT_TYPE_NETDEV;
} else if (!strcmp(type, "internal")) {
return OVS_VPORT_TYPE_INTERNAL;
} else if (!strcmp(type, "geneve")) {
return OVS_VPORT_TYPE_GENEVE;
} else if (!strcmp(type, "vxlan")) {
return OVS_VPORT_TYPE_VXLAN;
} else if (!strcmp(type, "erspan")) {
return OVS_VPORT_TYPE_ERSPAN;
} else if (!strcmp(type, "ip6erspan")) {
return OVS_VPORT_TYPE_IP6ERSPAN;
} else if (!strcmp(type, "ip6gre")) {
return OVS_VPORT_TYPE_IP6GRE;
} else if (!strcmp(type, "gre")) {
return OVS_VPORT_TYPE_GRE;
} else if (!strcmp(type, "gtpu")) {
return OVS_VPORT_TYPE_GTPU;
} else if (!strcmp(type, "srv6")) {
return OVS_VPORT_TYPE_SRV6;
} else if (!strcmp(type, "bareudp")) {
return OVS_VPORT_TYPE_BAREUDP;
} else {
return OVS_VPORT_TYPE_UNSPEC;
}
}
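/* Creates a vport named 'name' of the given 'type', with optional Netlink
 * 'options', in 'dpif' using OVS_VPORT_CMD_NEW.  In per-vport dispatch mode
 * this also sets up an upcall socket and channel for the port; per-cpu
 * dispatch mode needs neither.  On success, stores the assigned port number
 * in '*port_nop'.  Returns 0 on success or a positive errno value. */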
static int
dpif_netlink_port_add__(struct dpif_netlink *dpif, const char *name,
enum ovs_vport_type type,
struct ofpbuf *options,
odp_port_t *port_nop)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
struct dpif_netlink_vport request, reply;
struct ofpbuf *buf;
struct nl_sock *sock = NULL;
uint32_t upcall_pids = 0;
int error = 0;
/* per-cpu dispatch mode does not require a socket per vport. */
if (!dpif_netlink_upcall_per_cpu(dpif)) {
if (dpif->handlers) {
error = create_nl_sock(dpif, &sock);
if (error) {
return error;
}
}
if (sock) {
upcall_pids = nl_sock_pid(sock);
}
}
dpif_netlink_vport_init(&request);
request.cmd = OVS_VPORT_CMD_NEW;
request.dp_ifindex = dpif->dp_ifindex;
request.type = type;
request.name = name;
request.port_no = *port_nop;
request.n_upcall_pids = 1;
request.upcall_pids = &upcall_pids;
if (options) {
request.options = options->data;
request.options_len = options->size;
}
error = dpif_netlink_vport_transact(&request, &reply, &buf);
if (!error) {
*port_nop = reply.port_no;
} else {
if (error == EBUSY && *port_nop != ODPP_NONE) {
VLOG_INFO("%s: requested port %"PRIu32" is in use",
dpif_name(&dpif->dpif), *port_nop);
}
close_nl_sock(sock);
goto exit;
}
if (!dpif_netlink_upcall_per_cpu(dpif)) {
error = vport_add_channel(dpif, *port_nop, sock);
if (error) {
VLOG_INFO("%s: could not add channel for port %s",
dpif_name(&dpif->dpif), name);
/* Delete the port. */
dpif_netlink_vport_init(&request);
request.cmd = OVS_VPORT_CMD_DEL;
request.dp_ifindex = dpif->dp_ifindex;
request.port_no = *port_nop;
dpif_netlink_vport_transact(&request, NULL, NULL);
close_nl_sock(sock);
goto exit;
}
}
exit:
ofpbuf_delete(buf);
return error;
}
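/* Adds 'netdev' to 'dpif' through the classic vport interface, building
 * tunnel options (OVS_TUNNEL_ATTR_DST_PORT, OVS_TUNNEL_ATTR_EXTENSION) from
 * the netdev's tunnel configuration when present. */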
static int
dpif_netlink_port_add_compat(struct dpif_netlink *dpif, struct netdev *netdev,
odp_port_t *port_nop)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
const struct netdev_tunnel_config *tnl_cfg;
char namebuf[NETDEV_VPORT_NAME_BUFSIZE];
const char *type = netdev_get_type(netdev);
uint64_t options_stub[64 / 8];
enum ovs_vport_type ovs_type;
struct ofpbuf options;
const char *name;
name = netdev_vport_get_dpif_port(netdev, namebuf, sizeof namebuf);
ovs_type = netdev_to_ovs_vport_type(netdev_get_type(netdev));
if (ovs_type == OVS_VPORT_TYPE_UNSPEC) {
VLOG_WARN_RL(&error_rl, "%s: cannot create port `%s' because it has "
"unsupported type `%s'",
dpif_name(&dpif->dpif), name, type);
return EINVAL;
}
if (ovs_type == OVS_VPORT_TYPE_NETDEV) {
#ifdef _WIN32
/* XXX : Map appropriate Windows handle */
#else
netdev_linux_ethtool_set_flag(netdev, ETH_FLAG_LRO, "LRO", false);
#endif
}
#ifdef _WIN32
if (ovs_type == OVS_VPORT_TYPE_INTERNAL) {
if (!create_wmi_port(name)) {
VLOG_ERR("Could not create wmi internal port with name:%s", name);
return EINVAL;
}
}
#endif
tnl_cfg = netdev_get_tunnel_config(netdev);
if (tnl_cfg && (tnl_cfg->dst_port != 0 || tnl_cfg->exts)) {
ofpbuf_use_stack(&options, options_stub, sizeof options_stub);
if (tnl_cfg->dst_port) {
nl_msg_put_u16(&options, OVS_TUNNEL_ATTR_DST_PORT,
ntohs(tnl_cfg->dst_port));
}
if (tnl_cfg->exts) {
size_t ext_ofs;
int i;
ext_ofs = nl_msg_start_nested(&options, OVS_TUNNEL_ATTR_EXTENSION);
for (i = 0; i < 32; i++) {
if (tnl_cfg->exts & (UINT32_C(1) << i)) {
nl_msg_put_flag(&options, i);
}
}
nl_msg_end_nested(&options, ext_ofs);
}
return dpif_netlink_port_add__(dpif, name, ovs_type, &options,
port_nop);
} else {
return dpif_netlink_port_add__(dpif, name, ovs_type, NULL, port_nop);
}
}
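/* Creates the tunnel device for 'netdev' via rtnetlink and then adds it to
 * 'dpif' as an OVS_VPORT_TYPE_NETDEV vport.  If the vport cannot be added,
 * the rtnetlink device is destroyed again before returning the error. */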
static int
dpif_netlink_rtnl_port_create_and_add(struct dpif_netlink *dpif,
struct netdev *netdev,
odp_port_t *port_nop)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 20);
char namebuf[NETDEV_VPORT_NAME_BUFSIZE];
const char *name;
int error;
error = dpif_netlink_rtnl_port_create(netdev);
if (error) {
if (error != EOPNOTSUPP) {
VLOG_WARN_RL(&rl, "Failed to create %s with rtnetlink: %s",
netdev_get_name(netdev), ovs_strerror(error));
}
return error;
}
name = netdev_vport_get_dpif_port(netdev, namebuf, sizeof namebuf);
error = dpif_netlink_port_add__(dpif, name, OVS_VPORT_TYPE_NETDEV, NULL,
port_nop);
if (error) {
dpif_netlink_rtnl_port_destroy(name, netdev_get_type(netdev));
}
return error;
}
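/* dpif 'port_add' implementation: attempts rtnetlink-based port creation
 * first (unless out-of-tree tunnels are in use) and falls back to the compat
 * vport interface if that fails. */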
static int
dpif_netlink_port_add(struct dpif *dpif_, struct netdev *netdev,
odp_port_t *port_nop)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
int error = EOPNOTSUPP;
fat_rwlock_wrlock(&dpif->upcall_lock);
if (!ovs_tunnels_out_of_tree) {
error = dpif_netlink_rtnl_port_create_and_add(dpif, netdev, port_nop);
}
if (error) {
error = dpif_netlink_port_add_compat(dpif, netdev, port_nop);
}
fat_rwlock_unlock(&dpif->upcall_lock);
return error;
}
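/* Deletes port 'port_no' from 'dpif': removes the vport with
 * OVS_VPORT_CMD_DEL, tears down its upcall channels and, for ports created
 * through rtnetlink, destroys the underlying device as well. */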
static int
dpif_netlink_port_del__(struct dpif_netlink *dpif, odp_port_t port_no)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
struct dpif_netlink_vport vport;
struct dpif_port dpif_port;
int error;
error = dpif_netlink_port_query__(dpif, port_no, NULL, &dpif_port);
if (error) {
return error;
}
dpif_netlink_vport_init(&vport);
vport.cmd = OVS_VPORT_CMD_DEL;
vport.dp_ifindex = dpif->dp_ifindex;
vport.port_no = port_no;
#ifdef _WIN32
if (!strcmp(dpif_port.type, "internal")) {
if (!delete_wmi_port(dpif_port.name)) {
VLOG_ERR("Could not delete wmi port with name: %s",
dpif_port.name);
}
}
#endif
error = dpif_netlink_vport_transact(&vport, NULL, NULL);
vport_del_channels(dpif, port_no);
if (!error && !ovs_tunnels_out_of_tree) {
error = dpif_netlink_rtnl_port_destroy(dpif_port.name, dpif_port.type);
if (error == EOPNOTSUPP) {
error = 0;
}
}
dpif_port_destroy(&dpif_port);
return error;
}
static int
dpif_netlink_port_del(struct dpif *dpif_, odp_port_t port_no)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
int error;
fat_rwlock_wrlock(&dpif->upcall_lock);
error = dpif_netlink_port_del__(dpif, port_no);
fat_rwlock_unlock(&dpif->upcall_lock);
return error;
}
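/* Looks up a port in 'dpif' by number ('port_no') or by name ('port_name').
 * On success, if 'dpif_port' is nonnull, fills it in with copies of the
 * port's name and type and its port number.  Returns ENODEV if a query by
 * name finds the port in a different datapath. */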
static int
dpif_netlink_port_query__(const struct dpif_netlink *dpif, odp_port_t port_no,
const char *port_name, struct dpif_port *dpif_port)
{
struct dpif_netlink_vport request;
struct dpif_netlink_vport reply;
struct ofpbuf *buf;
int error;
dpif_netlink_vport_init(&request);
request.cmd = OVS_VPORT_CMD_GET;
request.dp_ifindex = dpif->dp_ifindex;
request.port_no = port_no;
request.name = port_name;
error = dpif_netlink_vport_transact(&request, &reply, &buf);
if (!error) {
if (reply.dp_ifindex != request.dp_ifindex) {
/* A query by name reported that 'port_name' is in some datapath
* other than 'dpif', but the caller wants to know about 'dpif'. */
error = ENODEV;
} else if (dpif_port) {
dpif_port->name = xstrdup(reply.name);
dpif_port->type = xstrdup(get_vport_type(&reply));
dpif_port->port_no = reply.port_no;
}
ofpbuf_delete(buf);
}
return error;
}
static int
dpif_netlink_port_query_by_number(const struct dpif *dpif_, odp_port_t port_no,
struct dpif_port *dpif_port)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
return dpif_netlink_port_query__(dpif, port_no, NULL, dpif_port);
}
static int
dpif_netlink_port_query_by_name(const struct dpif *dpif_, const char *devname,
struct dpif_port *dpif_port)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
return dpif_netlink_port_query__(dpif, 0, devname, dpif_port);
}
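/* Returns the Netlink PID of the upcall socket assigned to 'port_no' in
 * per-vport dispatch mode, or 0 if the port currently has no channel.
 * The caller must hold 'dpif->upcall_lock'. */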
static uint32_t
dpif_netlink_port_get_pid__(const struct dpif_netlink *dpif,
odp_port_t port_no)
OVS_REQ_RDLOCK(dpif->upcall_lock)
{
uint32_t port_idx = odp_to_u32(port_no);
uint32_t pid = 0;
if (dpif->handlers && dpif->uc_array_size > 0) {
/* The ODPP_NONE "reserved" port number uses the "ovs-system"'s
* channel, since it is not heavily loaded. */
uint32_t idx = port_idx >= dpif->uc_array_size ? 0 : port_idx;
/* Needs to check in case the socket pointer is changed in between
* the holding of upcall_lock. A known case happens when the main
* thread deletes the vport while the handler thread is handling
* the upcall from that port. */
if (dpif->channels[idx].sock) {
pid = nl_sock_pid(dpif->channels[idx].sock);
}
}
return pid;
}
static uint32_t
dpif_netlink_port_get_pid(const struct dpif *dpif_, odp_port_t port_no)
{
const struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
uint32_t ret;
/* In per-cpu dispatch mode, vports do not have an associated PID */
if (dpif_netlink_upcall_per_cpu(dpif)) {
/* In per-cpu dispatch mode, this will be ignored as kernel space will
* select the PID before sending to user space. We set to
* DPIF_NETLINK_PER_CPU_PID as 0 is rejected by kernel space as an
* invalid PID.
*/
return DPIF_NETLINK_PER_CPU_PID;
}
fat_rwlock_rdlock(&dpif->upcall_lock);
ret = dpif_netlink_port_get_pid__(dpif, port_no);
fat_rwlock_unlock(&dpif->upcall_lock);
return ret;
}
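/* Deletes all flows from 'dpif' with OVS_FLOW_CMD_DEL.  If the netdev flow
 * API (hardware offload) is enabled, offloaded flows are flushed from the
 * corresponding netdevs first. */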
static int
dpif_netlink_flow_flush(struct dpif *dpif_)
{
const char *dpif_type_str = dpif_normalize_type(dpif_type(dpif_));
const struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct dpif_netlink_flow flow;
dpif_netlink_flow_init(&flow);
flow.cmd = OVS_FLOW_CMD_DEL;
flow.dp_ifindex = dpif->dp_ifindex;
if (netdev_is_flow_api_enabled()) {
netdev_ports_flow_flush(dpif_type_str);
}
return dpif_netlink_flow_transact(&flow, NULL, NULL);
}
struct dpif_netlink_port_state {
struct nl_dump dump;
struct ofpbuf buf;
};
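/* Starts a Netlink dump of all vports in 'dpif', initializing 'dump' for use
 * with dpif_netlink_port_dump_next__(). */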
static void
dpif_netlink_port_dump_start__(const struct dpif_netlink *dpif,
struct nl_dump *dump)
{
struct dpif_netlink_vport request;
struct ofpbuf *buf;
dpif_netlink_vport_init(&request);
request.cmd = OVS_VPORT_CMD_GET;
request.dp_ifindex = dpif->dp_ifindex;
buf = ofpbuf_new(1024);
dpif_netlink_vport_to_ofpbuf(&request, buf);
nl_dump_start(dump, NETLINK_GENERIC, buf);
ofpbuf_delete(buf);
}
static int
dpif_netlink_port_dump_start(const struct dpif *dpif_, void **statep)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct dpif_netlink_port_state *state;
*statep = state = xmalloc(sizeof *state);
dpif_netlink_port_dump_start__(dpif, &state->dump);
ofpbuf_init(&state->buf, NL_DUMP_BUFSIZE);
return 0;
}
static int
dpif_netlink_port_dump_next__(const struct dpif_netlink *dpif,
struct nl_dump *dump,
struct dpif_netlink_vport *vport,
struct ofpbuf *buffer)
{
struct ofpbuf buf;
int error;
if (!nl_dump_next(dump, &buf, buffer)) {
return EOF;
}
error = dpif_netlink_vport_from_ofpbuf(vport, &buf);
if (error) {
VLOG_WARN_RL(&error_rl, "%s: failed to parse vport record (%s)",
dpif_name(&dpif->dpif), ovs_strerror(error));
}
return error;
}
static int
dpif_netlink_port_dump_next(const struct dpif *dpif_, void *state_,
struct dpif_port *dpif_port)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct dpif_netlink_port_state *state = state_;
struct dpif_netlink_vport vport;
int error;
error = dpif_netlink_port_dump_next__(dpif, &state->dump, &vport,
&state->buf);
if (error) {
return error;
}
dpif_port->name = CONST_CAST(char *, vport.name);
dpif_port->type = CONST_CAST(char *, get_vport_type(&vport));
dpif_port->port_no = vport.port_no;
return 0;
}
static int
dpif_netlink_port_dump_done(const struct dpif *dpif_ OVS_UNUSED, void *state_)
{
struct dpif_netlink_port_state *state = state_;
int error = nl_dump_done(&state->dump);
ofpbuf_uninit(&state->buf);
free(state);
return error;
}
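/* Polls for vport change notifications from the kernel.  On the first call,
 * joins the OVS vport multicast group and returns ENOBUFS so that the caller
 * treats every port as changed.  Afterwards, returns 0 with the changed
 * port's name (malloc'ed) in '*devnamep', EAGAIN if there is nothing new, or
 * ENOBUFS if notifications may have been dropped. */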
static int
dpif_netlink_port_poll(const struct dpif *dpif_, char **devnamep)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
/* Lazily create the Netlink socket to listen for notifications. */
if (!dpif->port_notifier) {
struct nl_sock *sock;
int error;
error = nl_sock_create(NETLINK_GENERIC, &sock);
if (error) {
return error;
}
error = nl_sock_join_mcgroup(sock, ovs_vport_mcgroup);
if (error) {
nl_sock_destroy(sock);
return error;
}
dpif->port_notifier = sock;
/* We have no idea of the current state so report that everything
* changed. */
return ENOBUFS;
}
for (;;) {
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
uint64_t buf_stub[4096 / 8];
struct ofpbuf buf;
int error;
ofpbuf_use_stub(&buf, buf_stub, sizeof buf_stub);
error = nl_sock_recv(dpif->port_notifier, &buf, NULL, false);
if (!error) {
struct dpif_netlink_vport vport;
error = dpif_netlink_vport_from_ofpbuf(&vport, &buf);
if (!error) {
if (vport.dp_ifindex == dpif->dp_ifindex
&& (vport.cmd == OVS_VPORT_CMD_NEW
|| vport.cmd == OVS_VPORT_CMD_DEL
|| vport.cmd == OVS_VPORT_CMD_SET)) {
VLOG_DBG("port_changed: dpif:%s vport:%s cmd:%"PRIu8,
dpif->dpif.full_name, vport.name, vport.cmd);
if (vport.cmd == OVS_VPORT_CMD_DEL && dpif->handlers) {
dpif->refresh_channels = true;
}
*devnamep = xstrdup(vport.name);
ofpbuf_uninit(&buf);
return 0;
}
}
} else if (error != EAGAIN) {
VLOG_WARN_RL(&rl, "error reading or parsing netlink (%s)",
ovs_strerror(error));
nl_sock_drain(dpif->port_notifier);
error = ENOBUFS;
}
ofpbuf_uninit(&buf);
if (error) {
return error;
}
}
}
static void
dpif_netlink_port_poll_wait(const struct dpif *dpif_)
{
const struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
if (dpif->port_notifier) {
nl_sock_wait(dpif->port_notifier, POLLIN);
} else {
poll_immediate_wake();
}
}
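/* Initializes the UFID-related fields of 'request' from 'ufid' (which may be
 * NULL) and the 'terse' flag. */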
static void
dpif_netlink_flow_init_ufid(struct dpif_netlink_flow *request,
const ovs_u128 *ufid, bool terse)
{
if (ufid) {
request->ufid = *ufid;
request->ufid_present = true;
} else {
request->ufid_present = false;
}
request->ufid_terse = terse;
}
static void
dpif_netlink_init_flow_get__(const struct dpif_netlink *dpif,
const struct nlattr *key, size_t key_len,
const ovs_u128 *ufid, bool terse,
struct dpif_netlink_flow *request)
{
dpif_netlink_flow_init(request);
request->cmd = OVS_FLOW_CMD_GET;
request->dp_ifindex = dpif->dp_ifindex;
request->key = key;
request->key_len = key_len;
dpif_netlink_flow_init_ufid(request, ufid, terse);
}
static void
dpif_netlink_init_flow_get(const struct dpif_netlink *dpif,
const struct dpif_flow_get *get,
struct dpif_netlink_flow *request)
{
dpif_netlink_init_flow_get__(dpif, get->key, get->key_len, get->ufid,
false, request);
}
static int
dpif_netlink_flow_get__(const struct dpif_netlink *dpif,
const struct nlattr *key, size_t key_len,
const ovs_u128 *ufid, bool terse,
struct dpif_netlink_flow *reply, struct ofpbuf **bufp)
{
struct dpif_netlink_flow request;
dpif_netlink_init_flow_get__(dpif, key, key_len, ufid, terse, &request);
return dpif_netlink_flow_transact(&request, reply, bufp);
}
static int
dpif_netlink_flow_get(const struct dpif_netlink *dpif,
const struct dpif_netlink_flow *flow,
struct dpif_netlink_flow *reply, struct ofpbuf **bufp)
{
return dpif_netlink_flow_get__(dpif, flow->key, flow->key_len,
flow->ufid_present ? &flow->ufid : NULL,
false, reply, bufp);
}
static void
dpif_netlink_init_flow_put(struct dpif_netlink *dpif,
const struct dpif_flow_put *put,
struct dpif_netlink_flow *request)
{
static const struct nlattr dummy_action;
dpif_netlink_flow_init(request);
request->cmd = (put->flags & DPIF_FP_CREATE
? OVS_FLOW_CMD_NEW : OVS_FLOW_CMD_SET);
request->dp_ifindex = dpif->dp_ifindex;
request->key = put->key;
request->key_len = put->key_len;
request->mask = put->mask;
request->mask_len = put->mask_len;
dpif_netlink_flow_init_ufid(request, put->ufid, false);
/* Ensure that OVS_FLOW_ATTR_ACTIONS will always be included. */
request->actions = (put->actions
? put->actions
: CONST_CAST(struct nlattr *, &dummy_action));
request->actions_len = put->actions_len;
if (put->flags & DPIF_FP_ZERO_STATS) {
request->clear = true;
}
if (put->flags & DPIF_FP_PROBE) {
request->probe = true;
}
request->nlmsg_flags = put->flags & DPIF_FP_MODIFY ? 0 : NLM_F_CREATE;
}
static void
dpif_netlink_init_flow_del__(struct dpif_netlink *dpif,
const struct nlattr *key, size_t key_len,
const ovs_u128 *ufid, bool terse,
struct dpif_netlink_flow *request)
{
dpif_netlink_flow_init(request);
request->cmd = OVS_FLOW_CMD_DEL;
request->dp_ifindex = dpif->dp_ifindex;
request->key = key;
request->key_len = key_len;
dpif_netlink_flow_init_ufid(request, ufid, terse);
}
static void
dpif_netlink_init_flow_del(struct dpif_netlink *dpif,
const struct dpif_flow_del *del,
struct dpif_netlink_flow *request)
{
dpif_netlink_init_flow_del__(dpif, del->key, del->key_len,
del->ufid, del->terse, request);
}
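/* A flow dump in progress.  Combines the kernel Netlink flow dump with the
 * per-netdev offload dumps; 'netdev_lock' protects the index of the netdev
 * dump that the threads are currently working on. */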
struct dpif_netlink_flow_dump {
struct dpif_flow_dump up;
struct nl_dump nl_dump;
atomic_int status;
struct netdev_flow_dump **netdev_dumps;
int netdev_dumps_num; /* Number of netdev_flow_dumps */
struct ovs_mutex netdev_lock; /* Guards the following. */
int netdev_current_dump OVS_GUARDED; /* Shared current dump */
struct dpif_flow_dump_types types; /* Type of dump */
};
static struct dpif_netlink_flow_dump *
dpif_netlink_flow_dump_cast(struct dpif_flow_dump *dump)
{
return CONTAINER_OF(dump, struct dpif_netlink_flow_dump, up);
}
static void
start_netdev_dump(const struct dpif *dpif_,
struct dpif_netlink_flow_dump *dump)
{
ovs_mutex_init(&dump->netdev_lock);
if (!(dump->types.netdev_flows)) {
dump->netdev_dumps_num = 0;
dump->netdev_dumps = NULL;
return;
}
ovs_mutex_lock(&dump->netdev_lock);
dump->netdev_current_dump = 0;
dump->netdev_dumps
= netdev_ports_flow_dump_create(dpif_normalize_type(dpif_type(dpif_)),
&dump->netdev_dumps_num,
dump->up.terse);
ovs_mutex_unlock(&dump->netdev_lock);
}
static void
dpif_netlink_populate_flow_dump_types(struct dpif_netlink_flow_dump *dump,
struct dpif_flow_dump_types *types)
{
if (!types) {
dump->types.ovs_flows = true;
dump->types.netdev_flows = true;
} else {
memcpy(&dump->types, types, sizeof *types);
}
}
static struct dpif_flow_dump *
dpif_netlink_flow_dump_create(const struct dpif *dpif_, bool terse,
struct dpif_flow_dump_types *types)
{
const struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct dpif_netlink_flow_dump *dump;
struct dpif_netlink_flow request;
struct ofpbuf *buf;
dump = xmalloc(sizeof *dump);
dpif_flow_dump_init(&dump->up, dpif_);
dpif_netlink_populate_flow_dump_types(dump, types);
if (dump->types.ovs_flows) {
dpif_netlink_flow_init(&request);
request.cmd = OVS_FLOW_CMD_GET;
request.dp_ifindex = dpif->dp_ifindex;
request.ufid_present = false;
request.ufid_terse = terse;
buf = ofpbuf_new(1024);
dpif_netlink_flow_to_ofpbuf(&request, buf);
nl_dump_start(&dump->nl_dump, NETLINK_GENERIC, buf);
ofpbuf_delete(buf);
}
atomic_init(&dump->status, 0);
dump->up.terse = terse;
start_netdev_dump(dpif_, dump);
return &dump->up;
}
static int
dpif_netlink_flow_dump_destroy(struct dpif_flow_dump *dump_)
{
struct dpif_netlink_flow_dump *dump = dpif_netlink_flow_dump_cast(dump_);
unsigned int nl_status = 0;
int dump_status;
if (dump->types.ovs_flows) {
nl_status = nl_dump_done(&dump->nl_dump);
}
for (int i = 0; i < dump->netdev_dumps_num; i++) {
int err = netdev_flow_dump_destroy(dump->netdev_dumps[i]);
if (err != 0 && err != EOPNOTSUPP) {
VLOG_ERR("failed dumping netdev: %s", ovs_strerror(err));
}
}
free(dump->netdev_dumps);
ovs_mutex_destroy(&dump->netdev_lock);
/* No other thread has access to 'dump' at this point. */
atomic_read_relaxed(&dump->status, &dump_status);
free(dump);
return dump_status ? dump_status : nl_status;
}
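/* Per-thread state for a flow dump, including stack buffers used to format
 * keys, masks, and actions of flows dumped from netdevs. */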
struct dpif_netlink_flow_dump_thread {
struct dpif_flow_dump_thread up;
struct dpif_netlink_flow_dump *dump;
struct dpif_netlink_flow flow;
struct dpif_flow_stats stats;
struct ofpbuf nl_flows; /* Always used to store flows. */
struct ofpbuf *nl_actions; /* Used if kernel does not supply actions. */
    int netdev_dump_idx;            /* This thread's current netdev dump index. */
    bool netdev_done;               /* True if we are finished dumping netdevs. */
/* (Key/Mask/Actions) Buffers for netdev dumping */
struct odputil_keybuf keybuf[FLOW_DUMP_MAX_BATCH];
struct odputil_keybuf maskbuf[FLOW_DUMP_MAX_BATCH];
struct odputil_keybuf actbuf[FLOW_DUMP_MAX_BATCH];
};
static struct dpif_netlink_flow_dump_thread *
dpif_netlink_flow_dump_thread_cast(struct dpif_flow_dump_thread *thread)
{
return CONTAINER_OF(thread, struct dpif_netlink_flow_dump_thread, up);
}
static struct dpif_flow_dump_thread *
dpif_netlink_flow_dump_thread_create(struct dpif_flow_dump *dump_)
{
struct dpif_netlink_flow_dump *dump = dpif_netlink_flow_dump_cast(dump_);
struct dpif_netlink_flow_dump_thread *thread;
thread = xmalloc(sizeof *thread);
dpif_flow_dump_thread_init(&thread->up, &dump->up);
thread->dump = dump;
ofpbuf_init(&thread->nl_flows, NL_DUMP_BUFSIZE);
thread->nl_actions = NULL;
thread->netdev_dump_idx = 0;
thread->netdev_done = !(thread->netdev_dump_idx < dump->netdev_dumps_num);
return &thread->up;
}
static void
dpif_netlink_flow_dump_thread_destroy(struct dpif_flow_dump_thread *thread_)
{
struct dpif_netlink_flow_dump_thread *thread
= dpif_netlink_flow_dump_thread_cast(thread_);
ofpbuf_uninit(&thread->nl_flows);
ofpbuf_delete(thread->nl_actions);
free(thread);
}
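/* Translates 'datapath_flow' into the generic 'dpif_flow' representation.
 * If the datapath did not supply a UFID, one is derived by hashing the flow
 * key. */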
static void
dpif_netlink_flow_to_dpif_flow(struct dpif_flow *dpif_flow,
const struct dpif_netlink_flow *datapath_flow)
{
dpif_flow->key = datapath_flow->key;
dpif_flow->key_len = datapath_flow->key_len;
dpif_flow->mask = datapath_flow->mask;
dpif_flow->mask_len = datapath_flow->mask_len;
dpif_flow->actions = datapath_flow->actions;
dpif_flow->actions_len = datapath_flow->actions_len;
dpif_flow->ufid_present = datapath_flow->ufid_present;
dpif_flow->pmd_id = PMD_ID_NULL;
if (datapath_flow->ufid_present) {
dpif_flow->ufid = datapath_flow->ufid;
} else {
ovs_assert(datapath_flow->key && datapath_flow->key_len);
odp_flow_key_hash(datapath_flow->key, datapath_flow->key_len,
&dpif_flow->ufid);
}
dpif_netlink_flow_get_stats(datapath_flow, &dpif_flow->stats);
dpif_flow->attrs.offloaded = false;
dpif_flow->attrs.dp_layer = "ovs";
dpif_flow->attrs.dp_extra_info = NULL;
}
/* The design is such that all threads work together through the dumps in
 * order, from the first to the last (at first they are all on dump 0).
 * When a thread finds that the current dump is finished, they all move on
 * to the next one.  If two or more threads find the same dump finished at
 * the same time, the first one advances the shared netdev_current_dump and
 * the others catch up. */
static void
dpif_netlink_advance_netdev_dump(struct dpif_netlink_flow_dump_thread *thread)
{
struct dpif_netlink_flow_dump *dump = thread->dump;
ovs_mutex_lock(&dump->netdev_lock);
/* if we haven't finished (dumped everything) */
if (dump->netdev_current_dump < dump->netdev_dumps_num) {
        /* If we are the first to find that the current dump is finished,
         * advance it. */
if (thread->netdev_dump_idx == dump->netdev_current_dump) {
thread->netdev_dump_idx = ++dump->netdev_current_dump;
/* did we just finish the last dump? done. */
if (dump->netdev_current_dump == dump->netdev_dumps_num) {
thread->netdev_done = true;
}
} else {
/* otherwise, we are behind, catch up */
thread->netdev_dump_idx = dump->netdev_current_dump;
}
} else {
/* some other thread finished */
thread->netdev_done = true;
}
ovs_mutex_unlock(&dump->netdev_lock);
}
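/* Builds 'flow' from a flow dumped from a netdev: unless 'terse', serializes
 * the match into key and mask attributes in 'key_buf' and 'mask_buf' and
 * references the given 'actions'; always copies the stats, attributes, and
 * UFID. */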
static int
dpif_netlink_netdev_match_to_dpif_flow(struct match *match,
struct ofpbuf *key_buf,
struct ofpbuf *mask_buf,
struct nlattr *actions,
struct dpif_flow_stats *stats,
struct dpif_flow_attrs *attrs,
ovs_u128 *ufid,
struct dpif_flow *flow,
bool terse)
{
memset(flow, 0, sizeof *flow);
if (!terse) {
struct odp_flow_key_parms odp_parms = {
.flow = &match->flow,
.mask = &match->wc.masks,
.support = {
.max_vlan_headers = 2,
.recirc = true,
.ct_state = true,
.ct_zone = true,
.ct_mark = true,
.ct_label = true,
},
};
size_t offset;
/* Key */
offset = key_buf->size;
flow->key = ofpbuf_tail(key_buf);
odp_flow_key_from_flow(&odp_parms, key_buf);
flow->key_len = key_buf->size - offset;
/* Mask */
offset = mask_buf->size;
flow->mask = ofpbuf_tail(mask_buf);
odp_parms.key_buf = key_buf;
odp_flow_key_from_mask(&odp_parms, mask_buf);
flow->mask_len = mask_buf->size - offset;
/* Actions */
flow->actions = nl_attr_get(actions);
flow->actions_len = nl_attr_get_size(actions);
}
/* Stats */
memcpy(&flow->stats, stats, sizeof *stats);
/* UFID */
flow->ufid_present = true;
flow->ufid = *ufid;
flow->pmd_id = PMD_ID_NULL;
memcpy(&flow->attrs, attrs, sizeof *attrs);
return 0;
}
static int
dpif_netlink_flow_dump_next(struct dpif_flow_dump_thread *thread_,
struct dpif_flow *flows, int max_flows)
{
struct dpif_netlink_flow_dump_thread *thread
= dpif_netlink_flow_dump_thread_cast(thread_);
struct dpif_netlink_flow_dump *dump = thread->dump;
struct dpif_netlink *dpif = dpif_netlink_cast(thread->up.dpif);
int n_flows;
ofpbuf_delete(thread->nl_actions);
thread->nl_actions = NULL;
n_flows = 0;
max_flows = MIN(max_flows, FLOW_DUMP_MAX_BATCH);
while (!thread->netdev_done && n_flows < max_flows) {
struct odputil_keybuf *maskbuf = &thread->maskbuf[n_flows];
struct odputil_keybuf *keybuf = &thread->keybuf[n_flows];
struct odputil_keybuf *actbuf = &thread->actbuf[n_flows];
struct ofpbuf key, mask, act;
struct dpif_flow *f = &flows[n_flows];
int cur = thread->netdev_dump_idx;
struct netdev_flow_dump *netdev_dump = dump->netdev_dumps[cur];
struct match match;
struct nlattr *actions;
struct dpif_flow_stats stats;
struct dpif_flow_attrs attrs;
ovs_u128 ufid;
bool has_next;
ofpbuf_use_stack(&key, keybuf, sizeof *keybuf);
ofpbuf_use_stack(&act, actbuf, sizeof *actbuf);
ofpbuf_use_stack(&mask, maskbuf, sizeof *maskbuf);
has_next = netdev_flow_dump_next(netdev_dump, &match,
&actions, &stats, &attrs,
&ufid,
&thread->nl_flows,
&act);
if (has_next) {
dpif_netlink_netdev_match_to_dpif_flow(&match,
&key, &mask,
actions,
&stats,
&attrs,
&ufid,
f,
dump->up.terse);
n_flows++;
} else {
dpif_netlink_advance_netdev_dump(thread);
}
}
if (!(dump->types.ovs_flows)) {
return n_flows;
}
while (!n_flows
|| (n_flows < max_flows && thread->nl_flows.size)) {
struct dpif_netlink_flow datapath_flow;
struct ofpbuf nl_flow;
int error;
/* Try to grab another flow. */
if (!nl_dump_next(&dump->nl_dump, &nl_flow, &thread->nl_flows)) {
break;
}
/* Convert the flow to our output format. */
error = dpif_netlink_flow_from_ofpbuf(&datapath_flow, &nl_flow);
if (error) {
atomic_store_relaxed(&dump->status, error);
break;
}
if (dump->up.terse || datapath_flow.actions) {
/* Common case: we don't want actions, or the flow includes
* actions. */
dpif_netlink_flow_to_dpif_flow(&flows[n_flows++], &datapath_flow);
} else {
/* Rare case: the flow does not include actions. Retrieve this
* individual flow again to get the actions. */
error = dpif_netlink_flow_get(dpif, &datapath_flow,
&datapath_flow, &thread->nl_actions);
if (error == ENOENT) {
VLOG_DBG("dumped flow disappeared on get");
continue;
} else if (error) {
VLOG_WARN("error fetching dumped flow: %s",
ovs_strerror(error));
atomic_store_relaxed(&dump->status, error);
break;
}
/* Save this flow. Then exit, because we only have one buffer to
* handle this case. */
dpif_netlink_flow_to_dpif_flow(&flows[n_flows++], &datapath_flow);
break;
}
}
return n_flows;
}
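/* Encodes 'd_exec' as an OVS_PACKET_CMD_EXECUTE request in 'buf', including
 * the packet data, its metadata flow key, and the actions to execute. */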
static void
dpif_netlink_encode_execute(int dp_ifindex, const struct dpif_execute *d_exec,
struct ofpbuf *buf)
{
struct ovs_header *k_exec;
size_t key_ofs;
ofpbuf_prealloc_tailroom(buf, (64
+ dp_packet_size(d_exec->packet)
+ ODP_KEY_METADATA_SIZE
+ d_exec->actions_len));
nl_msg_put_genlmsghdr(buf, 0, ovs_packet_family, NLM_F_REQUEST,
OVS_PACKET_CMD_EXECUTE, OVS_PACKET_VERSION);
k_exec = ofpbuf_put_uninit(buf, sizeof *k_exec);
k_exec->dp_ifindex = dp_ifindex;
nl_msg_put_unspec(buf, OVS_PACKET_ATTR_PACKET,
dp_packet_data(d_exec->packet),
dp_packet_size(d_exec->packet));
key_ofs = nl_msg_start_nested(buf, OVS_PACKET_ATTR_KEY);
odp_key_from_dp_packet(buf, d_exec->packet);
nl_msg_end_nested(buf, key_ofs);
nl_msg_put_unspec(buf, OVS_PACKET_ATTR_ACTIONS,
d_exec->actions, d_exec->actions_len);
if (d_exec->probe) {
nl_msg_put_flag(buf, OVS_PACKET_ATTR_PROBE);
}
if (d_exec->mtu) {
nl_msg_put_u16(buf, OVS_PACKET_ATTR_MRU, d_exec->mtu);
}
if (d_exec->hash) {
nl_msg_put_u64(buf, OVS_PACKET_ATTR_HASH, d_exec->hash);
}
}
/* Executes, against 'dpif', up to the first 'n_ops' operations in 'ops'.
* Returns the number actually executed (at least 1, if 'n_ops' is
* positive). */
static size_t
dpif_netlink_operate__(struct dpif_netlink *dpif,
struct dpif_op **ops, size_t n_ops)
{
struct op_auxdata {
struct nl_transaction txn;
struct ofpbuf request;
uint64_t request_stub[1024 / 8];
struct ofpbuf reply;
uint64_t reply_stub[1024 / 8];
} auxes[OPERATE_MAX_OPS];
struct nl_transaction *txnsp[OPERATE_MAX_OPS];
size_t i;
n_ops = MIN(n_ops, OPERATE_MAX_OPS);
for (i = 0; i < n_ops; i++) {
struct op_auxdata *aux = &auxes[i];
struct dpif_op *op = ops[i];
struct dpif_flow_put *put;
struct dpif_flow_del *del;
struct dpif_flow_get *get;
struct dpif_netlink_flow flow;
ofpbuf_use_stub(&aux->request,
aux->request_stub, sizeof aux->request_stub);
aux->txn.request = &aux->request;
ofpbuf_use_stub(&aux->reply, aux->reply_stub, sizeof aux->reply_stub);
aux->txn.reply = NULL;
switch (op->type) {
case DPIF_OP_FLOW_PUT:
put = &op->flow_put;
dpif_netlink_init_flow_put(dpif, put, &flow);
if (put->stats) {
flow.nlmsg_flags |= NLM_F_ECHO;
aux->txn.reply = &aux->reply;
}
dpif_netlink_flow_to_ofpbuf(&flow, &aux->request);
OVS_USDT_PROBE(dpif_netlink_operate__, op_flow_put,
dpif, put, &flow, &aux->request);
break;
case DPIF_OP_FLOW_DEL:
del = &op->flow_del;
dpif_netlink_init_flow_del(dpif, del, &flow);
if (del->stats) {
flow.nlmsg_flags |= NLM_F_ECHO;
aux->txn.reply = &aux->reply;
}
dpif_netlink_flow_to_ofpbuf(&flow, &aux->request);
OVS_USDT_PROBE(dpif_netlink_operate__, op_flow_del,
dpif, del, &flow, &aux->request);
break;
case DPIF_OP_EXECUTE:
/* Can't execute a packet that won't fit in a Netlink attribute. */
if (OVS_UNLIKELY(nl_attr_oversized(
dp_packet_size(op->execute.packet)))) {
/* Report an error immediately if this is the first operation.
* Otherwise the easiest thing to do is to postpone to the next
* call (when this will be the first operation). */
if (i == 0) {
VLOG_ERR_RL(&error_rl,
"dropping oversized %"PRIu32"-byte packet",
dp_packet_size(op->execute.packet));
op->error = ENOBUFS;
return 1;
}
n_ops = i;
} else {
dpif_netlink_encode_execute(dpif->dp_ifindex, &op->execute,
&aux->request);
OVS_USDT_PROBE(dpif_netlink_operate__, op_flow_execute,
dpif, &op->execute,
dp_packet_data(op->execute.packet),
dp_packet_size(op->execute.packet),
&aux->request);
}
break;
case DPIF_OP_FLOW_GET:
get = &op->flow_get;
dpif_netlink_init_flow_get(dpif, get, &flow);
aux->txn.reply = get->buffer;
dpif_netlink_flow_to_ofpbuf(&flow, &aux->request);
OVS_USDT_PROBE(dpif_netlink_operate__, op_flow_get,
dpif, get, &flow, &aux->request);
break;
default:
OVS_NOT_REACHED();
}
}
for (i = 0; i < n_ops; i++) {
txnsp[i] = &auxes[i].txn;
}
nl_transact_multiple(NETLINK_GENERIC, txnsp, n_ops);
for (i = 0; i < n_ops; i++) {
struct op_auxdata *aux = &auxes[i];
struct nl_transaction *txn = &auxes[i].txn;
struct dpif_op *op = ops[i];
struct dpif_flow_put *put;
struct dpif_flow_del *del;
struct dpif_flow_get *get;
op->error = txn->error;
switch (op->type) {
case DPIF_OP_FLOW_PUT:
put = &op->flow_put;
if (put->stats) {
if (!op->error) {
struct dpif_netlink_flow reply;
op->error = dpif_netlink_flow_from_ofpbuf(&reply,
txn->reply);
if (!op->error) {
dpif_netlink_flow_get_stats(&reply, put->stats);
}
}
}
break;
case DPIF_OP_FLOW_DEL:
del = &op->flow_del;
if (del->stats) {
if (!op->error) {
struct dpif_netlink_flow reply;
op->error = dpif_netlink_flow_from_ofpbuf(&reply,
txn->reply);
if (!op->error) {
dpif_netlink_flow_get_stats(&reply, del->stats);
}
}
}
break;
case DPIF_OP_EXECUTE:
break;
case DPIF_OP_FLOW_GET:
get = &op->flow_get;
if (!op->error) {
struct dpif_netlink_flow reply;
op->error = dpif_netlink_flow_from_ofpbuf(&reply, txn->reply);
if (!op->error) {
dpif_netlink_flow_to_dpif_flow(get->flow, &reply);
}
}
break;
default:
OVS_NOT_REACHED();
}
ofpbuf_uninit(&aux->request);
ofpbuf_uninit(&aux->reply);
}
return n_ops;
}
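/* Looks up the flow for 'get' among the hardware-offloaded (netdev) flows
 * and, if found, translates it into 'get->flow', copying its actions into
 * 'get->buffer'.  Returns 0 on success, otherwise a positive errno value. */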
static int
parse_flow_get(struct dpif_netlink *dpif, struct dpif_flow_get *get)
{
const char *dpif_type_str = dpif_normalize_type(dpif_type(&dpif->dpif));
struct dpif_flow *dpif_flow = get->flow;
struct match match;
struct nlattr *actions;
struct dpif_flow_stats stats;
struct dpif_flow_attrs attrs;
struct ofpbuf buf;
uint64_t act_buf[1024 / 8];
struct odputil_keybuf maskbuf;
struct odputil_keybuf keybuf;
struct odputil_keybuf actbuf;
struct ofpbuf key, mask, act;
int err;
ofpbuf_use_stack(&buf, &act_buf, sizeof act_buf);
err = netdev_ports_flow_get(dpif_type_str, &match, &actions, get->ufid,
&stats, &attrs, &buf);
if (err) {
return err;
}
VLOG_DBG("found flow from netdev, translating to dpif flow");
ofpbuf_use_stack(&key, &keybuf, sizeof keybuf);
ofpbuf_use_stack(&act, &actbuf, sizeof actbuf);
ofpbuf_use_stack(&mask, &maskbuf, sizeof maskbuf);
dpif_netlink_netdev_match_to_dpif_flow(&match, &key, &mask, actions,
&stats, &attrs,
(ovs_u128 *) get->ufid,
dpif_flow,
false);
ofpbuf_put(get->buffer, nl_attr_get(actions), nl_attr_get_size(actions));
dpif_flow->actions = ofpbuf_at(get->buffer, 0, 0);
dpif_flow->actions_len = nl_attr_get_size(actions);
return 0;
}
static int
parse_flow_put(struct dpif_netlink *dpif, struct dpif_flow_put *put)
{
const char *dpif_type_str = dpif_normalize_type(dpif_type(&dpif->dpif));
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 20);
struct match match;
odp_port_t in_port;
const struct nlattr *nla;
size_t left;
struct netdev *dev;
struct offload_info info;
int err;
info.tc_modify_flow_deleted = false;
if (put->flags & DPIF_FP_PROBE) {
return EOPNOTSUPP;
}
err = parse_key_and_mask_to_match(put->key, put->key_len, put->mask,
put->mask_len, &match);
if (err) {
return err;
}
in_port = match.flow.in_port.odp_port;
dev = netdev_ports_get(in_port, dpif_type_str);
if (!dev) {
return EOPNOTSUPP;
}
/* Check the output port for a tunnel. */
NL_ATTR_FOR_EACH(nla, left, put->actions, put->actions_len) {
if (nl_attr_type(nla) == OVS_ACTION_ATTR_OUTPUT) {
struct netdev *outdev;
odp_port_t out_port;
out_port = nl_attr_get_odp_port(nla);
outdev = netdev_ports_get(out_port, dpif_type_str);
if (!outdev) {
err = EOPNOTSUPP;
goto out;
}
netdev_close(outdev);
}
}
info.recirc_id_shared_with_tc = (dpif->user_features
& OVS_DP_F_TC_RECIRC_SHARING);
err = netdev_flow_put(dev, &match,
CONST_CAST(struct nlattr *, put->actions),
put->actions_len,
CONST_CAST(ovs_u128 *, put->ufid),
&info, put->stats);
if (!err) {
if (put->flags & DPIF_FP_MODIFY) {
struct dpif_op *opp;
struct dpif_op op;
op.type = DPIF_OP_FLOW_DEL;
op.flow_del.key = put->key;
op.flow_del.key_len = put->key_len;
op.flow_del.ufid = put->ufid;
op.flow_del.pmd_id = put->pmd_id;
op.flow_del.stats = NULL;
op.flow_del.terse = false;
opp = &op;
dpif_netlink_operate__(dpif, &opp, 1);
}
VLOG_DBG("added flow");
} else if (err != EEXIST) {
struct netdev *oor_netdev = NULL;
enum vlog_level level;
if (err == ENOSPC && netdev_is_offload_rebalance_policy_enabled()) {
            /*
             * We need to set OOR on the input netdev (i.e., 'dev') for the
             * flow.  But if the flow has a tunnel attribute (i.e., a decap
             * action, with a virtual device like a VxLAN interface as its
             * in-port), then look up and set OOR on the underlying tunnel
             * (real) netdev.
             */
oor_netdev = flow_get_tunnel_netdev(&match.flow.tunnel);
if (!oor_netdev) {
/* Not a 'tunnel' flow */
oor_netdev = dev;
}
netdev_set_hw_info(oor_netdev, HW_INFO_TYPE_OOR, true);
}
level = (err == ENOSPC || err == EOPNOTSUPP) ? VLL_DBG : VLL_ERR;
VLOG_RL(&rl, level, "failed to offload flow: %s: %s",
ovs_strerror(err),
(oor_netdev ? oor_netdev->name : dev->name));
}
out:
if (err && err != EEXIST && (put->flags & DPIF_FP_MODIFY)) {
        /* The modified rule can't be offloaded, so try to delete it from HW. */
int del_err = 0;
if (!info.tc_modify_flow_deleted) {
del_err = netdev_flow_del(dev, put->ufid, put->stats);
}
if (!del_err) {
            /* Deleting from HW succeeded, so the old flow was offloaded.
             * Change the flags so that the flow is created in the kernel. */
put->flags &= ~DPIF_FP_MODIFY;
put->flags |= DPIF_FP_CREATE;
} else if (del_err != ENOENT) {
VLOG_ERR_RL(&rl, "failed to delete offloaded flow: %s",
ovs_strerror(del_err));
            /* Stop processing the flow in the kernel. */
err = 0;
}
}
netdev_close(dev);
return err;
}
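/* Attempts to execute 'op' through the netdev flow offload API instead of
 * the kernel datapath.  Returns 0 on success, EOPNOTSUPP if the operation is
 * not a candidate for offloading, or another positive errno value on
 * failure. */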
static int
try_send_to_netdev(struct dpif_netlink *dpif, struct dpif_op *op)
{
int err = EOPNOTSUPP;
switch (op->type) {
case DPIF_OP_FLOW_PUT: {
struct dpif_flow_put *put = &op->flow_put;
if (!put->ufid) {
break;
}
err = parse_flow_put(dpif, put);
log_flow_put_message(&dpif->dpif, &this_module, put, 0);
break;
}
case DPIF_OP_FLOW_DEL: {
struct dpif_flow_del *del = &op->flow_del;
if (!del->ufid) {
break;
}
err = netdev_ports_flow_del(
dpif_normalize_type(dpif_type(&dpif->dpif)),
del->ufid,
del->stats);
log_flow_del_message(&dpif->dpif, &this_module, del, 0);
break;
}
case DPIF_OP_FLOW_GET: {
struct dpif_flow_get *get = &op->flow_get;
if (!op->flow_get.ufid) {
break;
}
err = parse_flow_get(dpif, get);
log_flow_get_message(&dpif->dpif, &this_module, get, 0);
break;
}
case DPIF_OP_EXECUTE:
default:
break;
}
return err;
}
static void
dpif_netlink_operate_chunks(struct dpif_netlink *dpif, struct dpif_op **ops,
size_t n_ops)
{
while (n_ops > 0) {
size_t chunk = dpif_netlink_operate__(dpif, ops, n_ops);
ops += chunk;
n_ops -= chunk;
}
}
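/* The dpif 'operate' implementation.  When hardware offload is enabled and
 * 'offload_type' permits, each operation is first tried against the netdev
 * flow API; operations that cannot be offloaded fall back to the kernel
 * datapath (unless DPIF_OFFLOAD_ALWAYS was requested), processed in chunks
 * of at most OPERATE_MAX_OPS operations. */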
static void
dpif_netlink_operate(struct dpif *dpif_, struct dpif_op **ops, size_t n_ops,
enum dpif_offload_type offload_type)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct dpif_op *new_ops[OPERATE_MAX_OPS];
int count = 0;
int i = 0;
int err = 0;
if (offload_type == DPIF_OFFLOAD_ALWAYS && !netdev_is_flow_api_enabled()) {
VLOG_DBG("Invalid offload_type: %d", offload_type);
return;
}
if (offload_type != DPIF_OFFLOAD_NEVER && netdev_is_flow_api_enabled()) {
while (n_ops > 0) {
count = 0;
while (n_ops > 0 && count < OPERATE_MAX_OPS) {
struct dpif_op *op = ops[i++];
err = try_send_to_netdev(dpif, op);
if (err && err != EEXIST) {
if (offload_type == DPIF_OFFLOAD_ALWAYS) {
/* We got an error while offloading an op. Since
* OFFLOAD_ALWAYS is specified, we stop further
* processing and return to the caller without
* invoking kernel datapath as fallback. But the
* interface requires us to process all n_ops; so
* return the same error in the remaining ops too.
*/
op->error = err;
n_ops--;
while (n_ops > 0) {
op = ops[i++];
op->error = err;
n_ops--;
}
return;
}
new_ops[count++] = op;
} else {
op->error = err;
}
n_ops--;
}
dpif_netlink_operate_chunks(dpif, new_ops, count);
}
} else if (offload_type != DPIF_OFFLOAD_ALWAYS) {
dpif_netlink_operate_chunks(dpif, ops, n_ops);
}
}
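/* Platform-specific setup and teardown of a dpif_handler: a per-handler
 * socket pool on Windows, an epoll instance elsewhere. */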
#if _WIN32
static void
dpif_netlink_handler_uninit(struct dpif_handler *handler)
{
vport_delete_sock_pool(handler);
}
static int
dpif_netlink_handler_init(struct dpif_handler *handler)
{
return vport_create_sock_pool(handler);
}
#else
static int
dpif_netlink_handler_init(struct dpif_handler *handler)
{
handler->epoll_fd = epoll_create(10);
return handler->epoll_fd < 0 ? errno : 0;
}
static void
dpif_netlink_handler_uninit(struct dpif_handler *handler)
{
close(handler->epoll_fd);
}
#endif
/* Returns true if 'num' is a prime number, otherwise returns false. */
static bool
is_prime(uint32_t num)
{
if (num == 2) {
return true;
}
if (num < 2) {
return false;
}
if (num % 2 == 0) {
return false;
}
for (uint64_t i = 3; i * i <= num; i += 2) {
if (num % i == 0) {
return false;
}
}
return true;
}
/* Returns start if start is a prime number. Otherwise returns the next
* prime greater than start. Search is limited by UINT32_MAX.
*
* Returns 0 if no prime has been found between start and UINT32_MAX.
*/
static uint32_t
next_prime(uint32_t start)
{
if (start <= 2) {
return 2;
}
for (uint32_t i = start; i < UINT32_MAX; i++) {
if (is_prime(i)) {
return i;
}
}
return 0;
}
/* Calculates and returns the number of handler threads needed based on
 * the following formula:
*
* handlers_n = min(next_prime(active_cores + 1), total_cores)
*/
static uint32_t
dpif_netlink_calculate_n_handlers(void)
{
uint32_t total_cores = count_total_cores();
uint32_t n_handlers = count_cpu_cores();
uint32_t next_prime_num;
    /* If not all cores are available to OVS, create additional handler
     * threads to ensure a fairer distribution of the load among them. */
if (n_handlers < total_cores && total_cores > 2) {
next_prime_num = next_prime(n_handlers + 1);
n_handlers = MIN(next_prime_num, total_cores);
}
return MAX(n_handlers, 1);
}
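/* Refreshes the handlers used for per-CPU upcall dispatch, sizing the
 * handler set with dpif_netlink_calculate_n_handlers().  Requires
 * 'dpif->upcall_lock' held for writing. */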
static int
dpif_netlink_refresh_handlers_cpu_dispatch(struct dpif_netlink *dpif)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
int handler_id;
int error = 0;
uint32_t n_handlers;
uint32_t *upcall_pids;
n_handlers = dpif_netlink_calculate_n_handlers();
if (dpif->n_handlers != n_handlers) {
VLOG_DBG("Dispatch mode(per-cpu): initializing %d handlers",
n_handlers);
destroy_all_handlers(dpif);
upcall_pids = xzalloc(n_handlers * sizeof *upcall_pids);
dpif->handlers = xzalloc(n_handlers * sizeof *dpif->handlers);
for (handler_id = 0; handler_id < n_handlers; handler_id++) {
struct dpif_handler *handler = &dpif->handlers[handler_id];
error = create_nl_sock(dpif, &handler->sock);
if (error) {
VLOG_ERR("Dispatch mode(per-cpu): Cannot create socket for"
"handler %d", handler_id);
continue;
}
upcall_pids[handler_id] = nl_sock_pid(handler->sock);
VLOG_DBG("Dispatch mode(per-cpu): "
"handler %d has Netlink PID of %u",
handler_id, upcall_pids[handler_id]);
}
dpif->n_handlers = n_handlers;
error = dpif_netlink_set_handler_pids(&dpif->dpif, upcall_pids,
n_handlers);
free(upcall_pids);
}
return error;
}
/* Synchronizes 'channels' in 'dpif->handlers' with the set of vports
* currently in 'dpif' in the kernel, by adding a new set of channels for
* any kernel vport that lacks one and deleting any channels that have no
* backing kernel vports. */
static int
dpif_netlink_refresh_handlers_vport_dispatch(struct dpif_netlink *dpif,
uint32_t n_handlers)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
unsigned long int *keep_channels;
struct dpif_netlink_vport vport;
size_t keep_channels_nbits;
struct nl_dump dump;
uint64_t reply_stub[NL_DUMP_BUFSIZE / 8];
struct ofpbuf buf;
int retval = 0;
size_t i;
ovs_assert(!WINDOWS || n_handlers <= 1);
ovs_assert(!WINDOWS || dpif->n_handlers <= 1);
if (dpif->n_handlers != n_handlers) {
destroy_all_channels(dpif);
dpif->handlers = xzalloc(n_handlers * sizeof *dpif->handlers);
for (i = 0; i < n_handlers; i++) {
int error;
struct dpif_handler *handler = &dpif->handlers[i];
error = dpif_netlink_handler_init(handler);
if (error) {
size_t j;
for (j = 0; j < i; j++) {
struct dpif_handler *tmp = &dpif->handlers[j];
dpif_netlink_handler_uninit(tmp);
}
free(dpif->handlers);
dpif->handlers = NULL;
return error;
}
}
dpif->n_handlers = n_handlers;
}
for (i = 0; i < n_handlers; i++) {
struct dpif_handler *handler = &dpif->handlers[i];
handler->event_offset = handler->n_events = 0;
}
keep_channels_nbits = dpif->uc_array_size;
keep_channels = bitmap_allocate(keep_channels_nbits);
ofpbuf_use_stub(&buf, reply_stub, sizeof reply_stub);
dpif_netlink_port_dump_start__(dpif, &dump);
while (!dpif_netlink_port_dump_next__(dpif, &dump, &vport, &buf)) {
uint32_t port_no = odp_to_u32(vport.port_no);
uint32_t upcall_pid;
int error;
if (port_no >= dpif->uc_array_size
|| !vport_get_pid(dpif, port_no, &upcall_pid)) {
struct nl_sock *sock;
error = create_nl_sock(dpif, &sock);
if (error) {
goto error;
}
error = vport_add_channel(dpif, vport.port_no, sock);
if (error) {
VLOG_INFO("%s: could not add channels for port %s",
dpif_name(&dpif->dpif), vport.name);
nl_sock_destroy(sock);
retval = error;
goto error;
}
upcall_pid = nl_sock_pid(sock);
}
/* Configure the vport to deliver misses to 'sock'. */
if (vport.upcall_pids[0] == 0
|| vport.n_upcall_pids != 1
|| upcall_pid != vport.upcall_pids[0]) {
struct dpif_netlink_vport vport_request;
dpif_netlink_vport_init(&vport_request);
vport_request.cmd = OVS_VPORT_CMD_SET;
vport_request.dp_ifindex = dpif->dp_ifindex;
vport_request.port_no = vport.port_no;
vport_request.n_upcall_pids = 1;
vport_request.upcall_pids = &upcall_pid;
error = dpif_netlink_vport_transact(&vport_request, NULL, NULL);
if (error) {
VLOG_WARN_RL(&error_rl,
"%s: failed to set upcall pid on port: %s",
dpif_name(&dpif->dpif), ovs_strerror(error));
if (error != ENODEV && error != ENOENT) {
retval = error;
} else {
/* The vport isn't really there, even though the dump says
* it is. Probably we just hit a race after a port
* disappeared. */
}
goto error;
}
}
if (port_no < keep_channels_nbits) {
bitmap_set1(keep_channels, port_no);
}
continue;
error:
vport_del_channels(dpif, vport.port_no);
}
nl_dump_done(&dump);
ofpbuf_uninit(&buf);
/* Discard any saved channels that we didn't reuse. */
for (i = 0; i < keep_channels_nbits; i++) {
if (!bitmap_is_set(keep_channels, i)) {
vport_del_channels(dpif, u32_to_odp(i));
}
}
free(keep_channels);
return retval;
}
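/* Enables or disables upcall reception in per-vport dispatch mode.  Enabling
 * creates a single handler and its per-vport channels; disabling destroys
 * all channels. */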
static int
dpif_netlink_recv_set_vport_dispatch(struct dpif_netlink *dpif, bool enable)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
if ((dpif->handlers != NULL) == enable) {
return 0;
} else if (!enable) {
destroy_all_channels(dpif);
return 0;
} else {
return dpif_netlink_refresh_handlers_vport_dispatch(dpif, 1);
}
}
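/* Enables or disables upcall reception in per-cpu dispatch mode.  Enabling
 * creates the per-CPU handlers and their sockets; disabling destroys all
 * handlers. */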
static int
dpif_netlink_recv_set_cpu_dispatch(struct dpif_netlink *dpif, bool enable)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
if ((dpif->handlers != NULL) == enable) {
return 0;
} else if (!enable) {
destroy_all_handlers(dpif);
return 0;
} else {
return dpif_netlink_refresh_handlers_cpu_dispatch(dpif);
}
}
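/* Enables or disables upcall reception on 'dpif_', deferring to the per-cpu
 * or per-vport implementation depending on the dispatch mode in use, while
 * holding 'upcall_lock'. */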
static int
dpif_netlink_recv_set(struct dpif *dpif_, bool enable)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
int error;
fat_rwlock_wrlock(&dpif->upcall_lock);
if (dpif_netlink_upcall_per_cpu(dpif)) {
error = dpif_netlink_recv_set_cpu_dispatch(dpif, enable);
} else {
error = dpif_netlink_recv_set_vport_dispatch(dpif, enable);
}
fat_rwlock_unlock(&dpif->upcall_lock);
return error;
}
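/* Adjusts the number of upcall handlers.  In per-cpu dispatch mode the count
 * is recomputed from the available CPUs and 'n_handlers' is ignored; in
 * per-vport dispatch mode 'n_handlers' handlers are created.  Does nothing
 * if upcall reception is not currently enabled. */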
static int
dpif_netlink_handlers_set(struct dpif *dpif_, uint32_t n_handlers)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
int error = 0;
#ifdef _WIN32
/* Multiple upcall handlers will be supported once kernel datapath supports
* it. */
if (n_handlers > 1) {
return error;
}
#endif
fat_rwlock_wrlock(&dpif->upcall_lock);
if (dpif->handlers) {
if (dpif_netlink_upcall_per_cpu(dpif)) {
error = dpif_netlink_refresh_handlers_cpu_dispatch(dpif);
} else {
error = dpif_netlink_refresh_handlers_vport_dispatch(dpif,
n_handlers);
}
}
fat_rwlock_unlock(&dpif->upcall_lock);
return error;
}
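/* Returns true and stores the required number of handler threads in
 * '*n_handlers' when the datapath dictates the count (per-cpu dispatch
 * mode); returns false when the caller is free to choose (per-vport
 * dispatch mode). */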
static bool
dpif_netlink_number_handlers_required(struct dpif *dpif_, uint32_t *n_handlers)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
if (dpif_netlink_upcall_per_cpu(dpif)) {
*n_handlers = dpif_netlink_calculate_n_handlers();
return true;
}
return false;
}
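/* Translates a QoS 'queue_id' into a Linux tc classid in '*priority':
 * queue N maps to TC_H_MAKE(1 << 16, N + 1), e.g. queue 0 becomes 0x10001
 * ("1:1").  Returns EINVAL for queue ids of 0xf000 and above. */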
static int
dpif_netlink_queue_to_priority(const struct dpif *dpif OVS_UNUSED,
uint32_t queue_id, uint32_t *priority)
{
if (queue_id < 0xf000) {
*priority = TC_H_MAKE(1 << 16, queue_id + 1);
return 0;
} else {
return EINVAL;
}
}
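/* Attempts to parse the Netlink message in 'buf' as an OVS_PACKET_* upcall,
 * filling 'upcall' and '*dp_ifindex' from its attributes, and returning an
 * error if the message cannot be parsed against 'ovs_packet_policy'. */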
static int
parse_odp_packet(struct ofpbuf *buf, struct dpif_upcall *upcall,
int *dp_ifindex)
{
static const struct nl_policy ovs_packet_policy[] = {
/* Always present. */
[OVS_PACKET_ATTR_PACKET] = { .type = NL_A_UNSPEC,
.min_len = ETH_HEADER_LEN },
[OVS_PACKET_ATTR_KEY] = { .type = NL_A_NESTED },
/* OVS_PACKET_CMD_ACTION only. */
[OVS_PACKET_ATTR_USERDATA] = { .type = NL_A_UNSPEC, .optional = true },
[OVS_PACKET_ATTR_EGRESS_TUN_KEY] = { .type = NL_A_NESTED, .optional = true },
[OVS_PACKET_ATTR_ACTIONS] = { .type = NL_A_NESTED, .optional = true },
[OVS_PACKET_ATTR_MRU] = { .type = NL_A_U16, .optional = true },
[OVS_PACKET_ATTR_HASH] = { .type = NL_A_U64, .optional = true }
};
struct ofpbuf b = ofpbuf_const_initializer(buf->data, buf->size);
struct nlmsghdr *nlmsg = ofpbuf_try_pull(&b, sizeof *nlmsg);
struct genlmsghdr *genl = ofpbuf_try_pull(&b, sizeof *genl);
struct ovs_header *ovs_header = ofpbuf_try_pull(&b, sizeof *ovs_header);
struct nlattr *a[ARRAY_SIZE(ovs_packet_policy)];
if (!nlmsg || !genl || !ovs_header
|| nlmsg->nlmsg_type != ovs_packet_family
|| !nl_policy_parse(&b, 0, ovs_packet_policy, a,
ARRAY_SIZE(ovs_packet_policy))) {
return EINVAL;
}
int type = (genl->cmd == OVS_PACKET_CMD_MISS ? DPIF_UC_MISS
: genl->cmd == OVS_PACKET_CMD_ACTION ? DPIF_UC_ACTION
: -1);
if (type < 0) {
return EINVAL;
}
/* (Re)set ALL fields of '*upcall' on successful return. */
upcall->type = type;
upcall->key = CONST_CAST(struct nlattr *,
nl_attr_get(a[OVS_PACKET_ATTR_KEY]));
upcall->key_len = nl_attr_get_size(a[OVS_PACKET_ATTR_KEY]);
odp_flow_key_hash(upcall->key, upcall->key_len, &upcall->ufid);
upcall->userdata = a[OVS_PACKET_ATTR_USERDATA];
upcall->out_tun_key = a[OVS_PACKET_ATTR_EGRESS_TUN_KEY];
upcall->actions = a[OVS_PACKET_ATTR_ACTIONS];
upcall->mru = a[OVS_PACKET_ATTR_MRU];
upcall->hash = a[OVS_PACKET_ATTR_HASH];
/* Allow overwriting the netlink attribute header without reallocating. */
dp_packet_use_stub(&upcall->packet,
CONST_CAST(struct nlattr *,
nl_attr_get(a[OVS_PACKET_ATTR_PACKET])) - 1,
nl_attr_get_size(a[OVS_PACKET_ATTR_PACKET]) +
sizeof(struct nlattr));
dp_packet_set_data(&upcall->packet,
(char *)dp_packet_data(&upcall->packet) + sizeof(struct nlattr));
dp_packet_set_size(&upcall->packet, nl_attr_get_size(a[OVS_PACKET_ATTR_PACKET]));
if (nl_attr_find__(upcall->key, upcall->key_len, OVS_KEY_ATTR_ETHERNET)) {
/* Ethernet frame */
upcall->packet.packet_type = htonl(PT_ETH);
} else {
/* Non-Ethernet packet. Get the Ethertype from the NL attributes */
ovs_be16 ethertype = 0;
const struct nlattr *et_nla = nl_attr_find__(upcall->key,
upcall->key_len,
OVS_KEY_ATTR_ETHERTYPE);
if (et_nla) {
ethertype = nl_attr_get_be16(et_nla);
}
upcall->packet.packet_type = PACKET_TYPE_BE(OFPHTN_ETHERTYPE,
ntohs(ethertype));
dp_packet_set_l3(&upcall->packet, dp_packet_data(&upcall->packet));
}
*dp_ifindex = ovs_header->dp_ifindex;
return 0;
}
#ifdef _WIN32
#define PACKET_RECV_BATCH_SIZE 50
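/* Windows-only receive path.  Polls each Netlink socket in the handler's
 * vport socket pool and returns the first upcall that belongs to this
 * datapath.  Gives up and returns EAGAIN after PACKET_RECV_BATCH_SIZE
 * reads so that a single call cannot monopolize the handler. */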
static int
dpif_netlink_recv_windows(struct dpif_netlink *dpif, uint32_t handler_id,
struct dpif_upcall *upcall, struct ofpbuf *buf)
OVS_REQ_RDLOCK(dpif->upcall_lock)
{
struct dpif_handler *handler;
int read_tries = 0;
struct dpif_windows_vport_sock *sock_pool;
uint32_t i;
if (!dpif->handlers) {
return EAGAIN;
}
/* Only one handler is supported currently. */
if (handler_id >= 1) {
return EAGAIN;
}
if (handler_id >= dpif->n_handlers) {
return EAGAIN;
}
handler = &dpif->handlers[handler_id];
sock_pool = handler->vport_sock_pool;
for (i = 0; i < VPORT_SOCK_POOL_SIZE; i++) {
for (;;) {
int dp_ifindex;
int error;
if (++read_tries > PACKET_RECV_BATCH_SIZE) {
return EAGAIN;
}
error = nl_sock_recv(sock_pool[i].nl_sock, buf, NULL, false);
if (error == ENOBUFS) {
/* ENOBUFS typically means that we've received so many
* packets that the buffer overflowed. Try again
* immediately because there's almost certainly a packet
* waiting for us. */
/* XXX: report_loss(dpif, ch, idx, handler_id); */
continue;
}
/* XXX: ch->last_poll = time_msec(); */
if (error) {
if (error == EAGAIN) {
break;
}
return error;
}
error = parse_odp_packet(buf, upcall, &dp_ifindex);
if (!error && dp_ifindex == dpif->dp_ifindex) {
return 0;
} else if (error) {
return error;
}
}
}
return EAGAIN;
}
#else
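/* Per-CPU dispatch receive path.  Reads upcalls from the single Netlink
 * socket owned by this handler thread.  ENOBUFS means the socket receive
 * buffer overflowed, so the loss is reported and the read retried; after
 * 50 reads the function returns EAGAIN to let the caller make progress. */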
static int
dpif_netlink_recv_cpu_dispatch(struct dpif_netlink *dpif, uint32_t handler_id,
struct dpif_upcall *upcall, struct ofpbuf *buf)
OVS_REQ_RDLOCK(dpif->upcall_lock)
{
struct dpif_handler *handler;
int read_tries = 0;
if (!dpif->handlers || handler_id >= dpif->n_handlers) {
return EAGAIN;
}
handler = &dpif->handlers[handler_id];
for (;;) {
int dp_ifindex;
int error;
if (++read_tries > 50) {
return EAGAIN;
}
error = nl_sock_recv(handler->sock, buf, NULL, false);
if (error == ENOBUFS) {
/* ENOBUFS typically means that we've received so many
* packets that the buffer overflowed. Try again
* immediately because there's almost certainly a packet
* waiting for us. */
report_loss(dpif, NULL, 0, handler_id);
continue;
}
if (error) {
if (error == EAGAIN) {
break;
}
return error;
}
error = parse_odp_packet(buf, upcall, &dp_ifindex);
if (!error && dp_ifindex == dpif->dp_ifindex) {
return 0;
} else if (error) {
return error;
}
}
return EAGAIN;
}
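/* Per-vport dispatch receive path.  Uses the handler's epoll set to find
 * channels with pending upcalls and drains them one message at a time.
 * Events returned by epoll_wait() are cached in the handler so that later
 * calls continue where the previous one left off. */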
static int
dpif_netlink_recv_vport_dispatch(struct dpif_netlink *dpif,
uint32_t handler_id,
struct dpif_upcall *upcall,
struct ofpbuf *buf)
OVS_REQ_RDLOCK(dpif->upcall_lock)
{
struct dpif_handler *handler;
int read_tries = 0;
if (!dpif->handlers || handler_id >= dpif->n_handlers) {
return EAGAIN;
}
handler = &dpif->handlers[handler_id];
if (handler->event_offset >= handler->n_events) {
int retval;
handler->event_offset = handler->n_events = 0;
do {
retval = epoll_wait(handler->epoll_fd, handler->epoll_events,
dpif->uc_array_size, 0);
} while (retval < 0 && errno == EINTR);
if (retval < 0) {
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
VLOG_WARN_RL(&rl, "epoll_wait failed (%s)", ovs_strerror(errno));
} else if (retval > 0) {
handler->n_events = retval;
}
}
while (handler->event_offset < handler->n_events) {
int idx = handler->epoll_events[handler->event_offset].data.u32;
struct dpif_channel *ch = &dpif->channels[idx];
handler->event_offset++;
for (;;) {
int dp_ifindex;
int error;
if (++read_tries > 50) {
return EAGAIN;
}
error = nl_sock_recv(ch->sock, buf, NULL, false);
if (error == ENOBUFS) {
/* ENOBUFS typically means that we've received so many
* packets that the buffer overflowed. Try again
* immediately because there's almost certainly a packet
* waiting for us. */
report_loss(dpif, ch, idx, handler_id);
continue;
}
ch->last_poll = time_msec();
if (error) {
if (error == EAGAIN) {
break;
}
return error;
}
error = parse_odp_packet(buf, upcall, &dp_ifindex);
if (!error && dp_ifindex == dpif->dp_ifindex) {
return 0;
} else if (error) {
return error;
}
}
}
return EAGAIN;
}
#endif
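/* dpif 'recv' callback.  Takes the upcall lock for reading and delegates to
 * the per-CPU or per-vport receive path, depending on whether per-CPU upcall
 * dispatch is in use (Windows builds always use the vport socket pool).
 *
 * A rough sketch of how a handler thread might drive this through the
 * generic dpif layer ('dpif' and 'handler_id' stand for the caller's
 * datapath handle and handler index; buffer setup is omitted):
 *
 *     for (;;) {
 *         struct dpif_upcall upcall;
 *         struct ofpbuf buf;
 *         ...
 *         while (!dpif_recv(dpif, handler_id, &upcall, &buf)) {
 *             ...process 'upcall'...
 *         }
 *         dpif_recv_wait(dpif, handler_id);
 *         poll_block();
 *     }
 */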
static int
dpif_netlink_recv(struct dpif *dpif_, uint32_t handler_id,
struct dpif_upcall *upcall, struct ofpbuf *buf)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
int error;
fat_rwlock_rdlock(&dpif->upcall_lock);
#ifdef _WIN32
error = dpif_netlink_recv_windows(dpif, handler_id, upcall, buf);
#else
if (dpif_netlink_upcall_per_cpu(dpif)) {
error = dpif_netlink_recv_cpu_dispatch(dpif, handler_id, upcall, buf);
} else {
error = dpif_netlink_recv_vport_dispatch(dpif,
handler_id, upcall, buf);
}
#endif
fat_rwlock_unlock(&dpif->upcall_lock);
return error;
}
#ifdef _WIN32
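/* Windows-only wait path.  Registers every socket in the handler's vport
 * socket pool with the poll loop so the caller wakes up when an upcall
 * arrives. */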
static void
dpif_netlink_recv_wait_windows(struct dpif_netlink *dpif, uint32_t handler_id)
OVS_REQ_RDLOCK(dpif->upcall_lock)
{
uint32_t i;
struct dpif_windows_vport_sock *sock_pool =
dpif->handlers[handler_id].vport_sock_pool;
/* Only one handler is supported currently. */
if (handler_id >= 1) {
return;
}
for (i = 0; i < VPORT_SOCK_POOL_SIZE; i++) {
nl_sock_wait(sock_pool[i].nl_sock, POLLIN);
}
}
#else
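/* Per-vport dispatch wait path.  Waits for activity on the handler's epoll
 * file descriptor, which aggregates all of the datapath's channels. */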
static void
dpif_netlink_recv_wait_vport_dispatch(struct dpif_netlink *dpif,
uint32_t handler_id)
OVS_REQ_RDLOCK(dpif->upcall_lock)
{
if (dpif->handlers && handler_id < dpif->n_handlers) {
struct dpif_handler *handler = &dpif->handlers[handler_id];
poll_fd_wait(handler->epoll_fd, POLLIN);
}
}
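/* Per-CPU dispatch wait path.  Waits for activity on the handler's own
 * Netlink socket. */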
static void
dpif_netlink_recv_wait_cpu_dispatch(struct dpif_netlink *dpif,
uint32_t handler_id)
OVS_REQ_RDLOCK(dpif->upcall_lock)
{
if (dpif->handlers && handler_id < dpif->n_handlers) {
struct dpif_handler *handler = &dpif->handlers[handler_id];
poll_fd_wait(nl_sock_fd(handler->sock), POLLIN);
}
}
#endif
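/* dpif 'recv_wait' callback.  Arranges for the poll loop to wake up when
 * 'handler_id' has an upcall ready, using whichever dispatch mode is
 * active. */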
static void
dpif_netlink_recv_wait(struct dpif *dpif_, uint32_t handler_id)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
fat_rwlock_rdlock(&dpif->upcall_lock);
#ifdef _WIN32
dpif_netlink_recv_wait_windows(dpif, handler_id);
#else
if (dpif_netlink_upcall_per_cpu(dpif)) {
dpif_netlink_recv_wait_cpu_dispatch(dpif, handler_id);
} else {
dpif_netlink_recv_wait_vport_dispatch(dpif, handler_id);
}
#endif
fat_rwlock_unlock(&dpif->upcall_lock);
}
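/* Per-vport dispatch purge path.  Drains any queued upcalls from every
 * per-port channel socket. */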
static void
dpif_netlink_recv_purge_vport_dispatch(struct dpif_netlink *dpif)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
if (dpif->handlers) {
size_t i;
if (!dpif->channels[0].sock) {
return;
}
for (i = 0; i < dpif->uc_array_size; i++ ) {
nl_sock_drain(dpif->channels[i].sock);
}
}
}
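/* Discards any upcall messages currently queued on the per-handler Netlink
 * sockets used in per-CPU upcall dispatch mode. */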
static void
dpif_netlink_recv_purge_cpu_dispatch(struct dpif_netlink *dpif)
OVS_REQ_WRLOCK(dpif->upcall_lock)
{
int handler_id;
if (dpif->handlers) {
for (handler_id = 0; handler_id < dpif->n_handlers; handler_id++) {
struct dpif_handler *handler = &dpif->handlers[handler_id];
nl_sock_drain(handler->sock);
}
}
}
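/* dpif 'recv_purge' callback: discards all queued upcalls, using the purge
 * routine that matches the dispatch mode negotiated with the kernel
 * (per-CPU or per-vport). */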
static void
dpif_netlink_recv_purge(struct dpif *dpif_)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
fat_rwlock_wrlock(&dpif->upcall_lock);
if (dpif_netlink_upcall_per_cpu(dpif)) {
dpif_netlink_recv_purge_cpu_dispatch(dpif);
} else {
dpif_netlink_recv_purge_vport_dispatch(dpif);
}
fat_rwlock_unlock(&dpif->upcall_lock);
}
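/* Returns a malloc'd string containing the kernel datapath version read from
 * /sys/module/openvswitch/version, or NULL if it cannot be read (e.g. on
 * non-Linux platforms or when the module does not expose a version).  The
 * caller is responsible for freeing the returned string. */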
static char *
dpif_netlink_get_datapath_version(void)
{
char *version_str = NULL;
#ifdef __linux__
#define MAX_VERSION_STR_SIZE 80
#define LINUX_DATAPATH_VERSION_FILE "/sys/module/openvswitch/version"
FILE *f;
f = fopen(LINUX_DATAPATH_VERSION_FILE, "r");
if (f) {
char *newline;
char version[MAX_VERSION_STR_SIZE];
if (fgets(version, MAX_VERSION_STR_SIZE, f)) {
newline = strchr(version, '\n');
if (newline) {
*newline = '\0';
}
version_str = xstrdup(version);
}
fclose(f);
}
#endif
return version_str;
}
struct dpif_netlink_ct_dump_state {
struct ct_dpif_dump_state up;
struct nl_ct_dump_state *nl_ct_dump;
};
static int
dpif_netlink_ct_dump_start(struct dpif *dpif OVS_UNUSED,
struct ct_dpif_dump_state **dump_,
const uint16_t *zone, int *ptot_bkts)
{
struct dpif_netlink_ct_dump_state *dump;
int err;
dump = xzalloc(sizeof *dump);
err = nl_ct_dump_start(&dump->nl_ct_dump, zone, ptot_bkts);
if (err) {
free(dump);
return err;
}
*dump_ = &dump->up;
return 0;
}
static int
dpif_netlink_ct_dump_next(struct dpif *dpif OVS_UNUSED,
struct ct_dpif_dump_state *dump_,
struct ct_dpif_entry *entry)
{
struct dpif_netlink_ct_dump_state *dump;
INIT_CONTAINER(dump, dump_, up);
return nl_ct_dump_next(dump->nl_ct_dump, entry);
}
static int
dpif_netlink_ct_dump_done(struct dpif *dpif OVS_UNUSED,
struct ct_dpif_dump_state *dump_)
{
struct dpif_netlink_ct_dump_state *dump;
INIT_CONTAINER(dump, dump_, up);
int err = nl_ct_dump_done(dump->nl_ct_dump);
free(dump);
return err;
}
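/* Flushes conntrack entries: a single tuple (in '*zone', or zone 0 if 'zone'
 * is null) when 'tuple' is nonnull, every entry in '*zone' when only 'zone'
 * is nonnull, or the entire conntrack table when both are null. */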
static int
dpif_netlink_ct_flush(struct dpif *dpif OVS_UNUSED, const uint16_t *zone,
const struct ct_dpif_tuple *tuple)
{
if (tuple) {
return nl_ct_flush_tuple(tuple, zone ? *zone : 0);
} else if (zone) {
return nl_ct_flush_zone(*zone);
} else {
return nl_ct_flush();
}
}
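/* Builds and sends an OVS_CT_LIMIT_CMD_SET request.  The message carries an
 * OVS_CT_LIMIT_ATTR_ZONE_LIMIT nested attribute holding one binary
 * 'struct ovs_zone_limit' per requested zone; for example, limiting zone 5
 * to 10 connections puts { .zone_id = 5, .limit = 10 } into the nest. */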
static int
dpif_netlink_ct_set_limits(struct dpif *dpif OVS_UNUSED,
const struct ovs_list *zone_limits)
{
if (ovs_ct_limit_family < 0) {
return EOPNOTSUPP;
}
struct ofpbuf *request = ofpbuf_new(NL_DUMP_BUFSIZE);
nl_msg_put_genlmsghdr(request, 0, ovs_ct_limit_family,
NLM_F_REQUEST | NLM_F_ECHO, OVS_CT_LIMIT_CMD_SET,
OVS_CT_LIMIT_VERSION);
struct ovs_header *ovs_header;
ovs_header = ofpbuf_put_uninit(request, sizeof *ovs_header);
ovs_header->dp_ifindex = 0;
size_t opt_offset;
opt_offset = nl_msg_start_nested(request, OVS_CT_LIMIT_ATTR_ZONE_LIMIT);
if (!ovs_list_is_empty(zone_limits)) {
struct ct_dpif_zone_limit *zone_limit;
LIST_FOR_EACH (zone_limit, node, zone_limits) {
struct ovs_zone_limit req_zone_limit = {
.zone_id = zone_limit->zone,
.limit = zone_limit->limit,
};
nl_msg_put(request, &req_zone_limit, sizeof req_zone_limit);
}
}
nl_msg_end_nested(request, opt_offset);
int err = nl_transact(NETLINK_GENERIC, request, NULL);
ofpbuf_delete(request);
return err;
}
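/* Parses a conntrack limit reply in 'buf', appending one entry to
 * 'zone_limits' for each 'struct ovs_zone_limit' found in the nested
 * OVS_CT_LIMIT_ATTR_ZONE_LIMIT attribute (entries with out-of-range zone ids
 * are skipped).  Returns 0 on success, EINVAL if the reply is malformed. */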
static int
dpif_netlink_zone_limits_from_ofpbuf(const struct ofpbuf *buf,
struct ovs_list *zone_limits)
{
static const struct nl_policy ovs_ct_limit_policy[] = {
[OVS_CT_LIMIT_ATTR_ZONE_LIMIT] = { .type = NL_A_NESTED,
.optional = true },
};
struct ofpbuf b = ofpbuf_const_initializer(buf->data, buf->size);
struct nlmsghdr *nlmsg = ofpbuf_try_pull(&b, sizeof *nlmsg);
struct genlmsghdr *genl = ofpbuf_try_pull(&b, sizeof *genl);
struct ovs_header *ovs_header = ofpbuf_try_pull(&b, sizeof *ovs_header);
struct nlattr *attr[ARRAY_SIZE(ovs_ct_limit_policy)];
if (!nlmsg || !genl || !ovs_header
|| nlmsg->nlmsg_type != ovs_ct_limit_family
|| !nl_policy_parse(&b, 0, ovs_ct_limit_policy, attr,
ARRAY_SIZE(ovs_ct_limit_policy))) {
return EINVAL;
}
if (!attr[OVS_CT_LIMIT_ATTR_ZONE_LIMIT]) {
return EINVAL;
}
int rem = NLA_ALIGN(
nl_attr_get_size(attr[OVS_CT_LIMIT_ATTR_ZONE_LIMIT]));
const struct ovs_zone_limit *zone_limit =
nl_attr_get(attr[OVS_CT_LIMIT_ATTR_ZONE_LIMIT]);
while (rem >= sizeof *zone_limit) {
if (zone_limit->zone_id >= OVS_ZONE_LIMIT_DEFAULT_ZONE &&
zone_limit->zone_id <= UINT16_MAX) {
ct_dpif_push_zone_limit(zone_limits, zone_limit->zone_id,
zone_limit->limit, zone_limit->count);
}
rem -= NLA_ALIGN(sizeof *zone_limit);
zone_limit = ALIGNED_CAST(struct ovs_zone_limit *,
(unsigned char *) zone_limit + NLA_ALIGN(sizeof *zone_limit));
}
return 0;
}
static int
dpif_netlink_ct_get_limits(struct dpif *dpif OVS_UNUSED,
const struct ovs_list *zone_limits_request,
struct ovs_list *zone_limits_reply)
{
if (ovs_ct_limit_family < 0) {
return EOPNOTSUPP;
}
struct ofpbuf *request = ofpbuf_new(NL_DUMP_BUFSIZE);
nl_msg_put_genlmsghdr(request, 0, ovs_ct_limit_family,
NLM_F_REQUEST | NLM_F_ECHO, OVS_CT_LIMIT_CMD_GET,
OVS_CT_LIMIT_VERSION);
struct ovs_header *ovs_header;
ovs_header = ofpbuf_put_uninit(request, sizeof *ovs_header);
ovs_header->dp_ifindex = 0;
if (!ovs_list_is_empty(zone_limits_request)) {
size_t opt_offset = nl_msg_start_nested(request,
OVS_CT_LIMIT_ATTR_ZONE_LIMIT);
struct ct_dpif_zone_limit *zone_limit;
LIST_FOR_EACH (zone_limit, node, zone_limits_request) {
struct ovs_zone_limit req_zone_limit = {
.zone_id = zone_limit->zone,
};
nl_msg_put(request, &req_zone_limit, sizeof req_zone_limit);
}
nl_msg_end_nested(request, opt_offset);
}
struct ofpbuf *reply;
int err = nl_transact(NETLINK_GENERIC, request, &reply);
if (err) {
goto out;
}
err = dpif_netlink_zone_limits_from_ofpbuf(reply, zone_limits_reply);
out:
ofpbuf_delete(request);
ofpbuf_delete(reply);
return err;
}
static int
dpif_netlink_ct_del_limits(struct dpif *dpif OVS_UNUSED,
const struct ovs_list *zone_limits)
{
if (ovs_ct_limit_family < 0) {
return EOPNOTSUPP;
}
struct ofpbuf *request = ofpbuf_new(NL_DUMP_BUFSIZE);
nl_msg_put_genlmsghdr(request, 0, ovs_ct_limit_family,
NLM_F_REQUEST | NLM_F_ECHO, OVS_CT_LIMIT_CMD_DEL,
OVS_CT_LIMIT_VERSION);
struct ovs_header *ovs_header;
ovs_header = ofpbuf_put_uninit(request, sizeof *ovs_header);
ovs_header->dp_ifindex = 0;
if (!ovs_list_is_empty(zone_limits)) {
size_t opt_offset =
nl_msg_start_nested(request, OVS_CT_LIMIT_ATTR_ZONE_LIMIT);
struct ct_dpif_zone_limit *zone_limit;
LIST_FOR_EACH (zone_limit, node, zone_limits) {
struct ovs_zone_limit req_zone_limit = {
.zone_id = zone_limit->zone,
};
nl_msg_put(request, &req_zone_limit, sizeof req_zone_limit);
}
nl_msg_end_nested(request, opt_offset);
}
int err = nl_transact(NETLINK_GENERIC, request, NULL);
ofpbuf_delete(request);
return err;
}
#define NL_TP_NAME_PREFIX "ovs_tp_"
struct dpif_netlink_timeout_policy_protocol {
uint16_t l3num;
uint8_t l4num;
};
enum OVS_PACKED_ENUM dpif_netlink_support_timeout_policy_protocol {
DPIF_NL_TP_AF_INET_TCP,
DPIF_NL_TP_AF_INET_UDP,
DPIF_NL_TP_AF_INET_ICMP,
DPIF_NL_TP_AF_INET6_TCP,
DPIF_NL_TP_AF_INET6_UDP,
DPIF_NL_TP_AF_INET6_ICMPV6,
DPIF_NL_TP_MAX
};
#define DPIF_NL_ALL_TP ((1UL << DPIF_NL_TP_MAX) - 1)
static struct dpif_netlink_timeout_policy_protocol tp_protos[] = {
[DPIF_NL_TP_AF_INET_TCP] = { .l3num = AF_INET, .l4num = IPPROTO_TCP },
[DPIF_NL_TP_AF_INET_UDP] = { .l3num = AF_INET, .l4num = IPPROTO_UDP },
[DPIF_NL_TP_AF_INET_ICMP] = { .l3num = AF_INET, .l4num = IPPROTO_ICMP },
[DPIF_NL_TP_AF_INET6_TCP] = { .l3num = AF_INET6, .l4num = IPPROTO_TCP },
[DPIF_NL_TP_AF_INET6_UDP] = { .l3num = AF_INET6, .l4num = IPPROTO_UDP },
[DPIF_NL_TP_AF_INET6_ICMPV6] = { .l3num = AF_INET6,
.l4num = IPPROTO_ICMPV6 },
};
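/* Builds the kernel timeout policy name for OVS timeout policy 'id' and the
 * given L3/L4 protocol pair.  For example (assuming the lowercase protocol
 * names produced by ct_dpif_format_ipproto), id 1 yields "ovs_tp_1_tcp4",
 * "ovs_tp_1_udp6" or "ovs_tp_1_icmpv6".  The result is malloc'd into
 * '*tp_name' and must be freed by the caller. */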
static void
dpif_netlink_format_tp_name(uint32_t id, uint16_t l3num, uint8_t l4num,
char **tp_name)
{
struct ds ds = DS_EMPTY_INITIALIZER;
ds_put_format(&ds, "%s%"PRIu32"_", NL_TP_NAME_PREFIX, id);
ct_dpif_format_ipproto(&ds, l4num);
if (l3num == AF_INET) {
ds_put_cstr(&ds, "4");
} else if (l3num == AF_INET6 && l4num != IPPROTO_ICMPV6) {
ds_put_cstr(&ds, "6");
}
ovs_assert(ds.length < CTNL_TIMEOUT_NAME_MAX);
*tp_name = ds_steal_cstr(&ds);
}
static int
dpif_netlink_ct_get_timeout_policy_name(struct dpif *dpif OVS_UNUSED,
uint32_t tp_id, uint16_t dl_type,
uint8_t nw_proto, char **tp_name,
bool *is_generic)
{
dpif_netlink_format_tp_name(tp_id,
dl_type == ETH_TYPE_IP ? AF_INET : AF_INET6,
nw_proto, tp_name);
*is_generic = false;
return 0;
}
static int
dpif_netlink_ct_get_features(struct dpif *dpif OVS_UNUSED,
enum ct_features *features)
{
if (features != NULL) {
#ifndef _WIN32
*features = CONNTRACK_F_ZERO_SNAT;
#else
*features = 0;
#endif
}
return 0;
}
#define CT_DPIF_NL_TP_TCP_MAPPINGS \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, SYN_SENT, SYN_SENT) \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, SYN_RECV, SYN_RECV) \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, ESTABLISHED, ESTABLISHED) \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, FIN_WAIT, FIN_WAIT) \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, CLOSE_WAIT, CLOSE_WAIT) \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, LAST_ACK, LAST_ACK) \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, TIME_WAIT, TIME_WAIT) \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, CLOSE, CLOSE) \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, SYN_SENT2, SYN_SENT2) \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, RETRANSMIT, RETRANS) \
CT_DPIF_NL_TP_MAPPING(TCP, TCP, UNACK, UNACK)
#define CT_DPIF_NL_TP_UDP_MAPPINGS \
CT_DPIF_NL_TP_MAPPING(UDP, UDP, SINGLE, UNREPLIED) \
CT_DPIF_NL_TP_MAPPING(UDP, UDP, MULTIPLE, REPLIED)
#define CT_DPIF_NL_TP_ICMP_MAPPINGS \
CT_DPIF_NL_TP_MAPPING(ICMP, ICMP, FIRST, TIMEOUT)
#define CT_DPIF_NL_TP_ICMPV6_MAPPINGS \
CT_DPIF_NL_TP_MAPPING(ICMP, ICMPV6, FIRST, TIMEOUT)
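/* Expansion used by the dpif_netlink_get_nl_tp_*_attrs() helpers below: copy
 * each attribute that is present in the ct_dpif timeout policy 'tp' into the
 * corresponding slot of the netlink timeout policy 'nl_tp'. */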
#define CT_DPIF_NL_TP_MAPPING(PROTO1, PROTO2, ATTR1, ATTR2) \
if (tp->present & (1 << CT_DPIF_TP_ATTR_##PROTO1##_##ATTR1)) { \
nl_tp->present |= 1 << CTA_TIMEOUT_##PROTO2##_##ATTR2; \
nl_tp->attrs[CTA_TIMEOUT_##PROTO2##_##ATTR2] = \
tp->attrs[CT_DPIF_TP_ATTR_##PROTO1##_##ATTR1]; \
}
static void
dpif_netlink_get_nl_tp_tcp_attrs(const struct ct_dpif_timeout_policy *tp,
struct nl_ct_timeout_policy *nl_tp)
{
CT_DPIF_NL_TP_TCP_MAPPINGS
}
static void
dpif_netlink_get_nl_tp_udp_attrs(const struct ct_dpif_timeout_policy *tp,
struct nl_ct_timeout_policy *nl_tp)
{
CT_DPIF_NL_TP_UDP_MAPPINGS
}
static void
dpif_netlink_get_nl_tp_icmp_attrs(const struct ct_dpif_timeout_policy *tp,
struct nl_ct_timeout_policy *nl_tp)
{
CT_DPIF_NL_TP_ICMP_MAPPINGS
}
static void
dpif_netlink_get_nl_tp_icmpv6_attrs(const struct ct_dpif_timeout_policy *tp,
struct nl_ct_timeout_policy *nl_tp)
{
CT_DPIF_NL_TP_ICMPV6_MAPPINGS
}
#undef CT_DPIF_NL_TP_MAPPING
static void
dpif_netlink_get_nl_tp_attrs(const struct ct_dpif_timeout_policy *tp,
uint8_t l4num, struct nl_ct_timeout_policy *nl_tp)
{
nl_tp->present = 0;
if (l4num == IPPROTO_TCP) {
dpif_netlink_get_nl_tp_tcp_attrs(tp, nl_tp);
} else if (l4num == IPPROTO_UDP) {
dpif_netlink_get_nl_tp_udp_attrs(tp, nl_tp);
} else if (l4num == IPPROTO_ICMP) {
dpif_netlink_get_nl_tp_icmp_attrs(tp, nl_tp);
} else if (l4num == IPPROTO_ICMPV6) {
dpif_netlink_get_nl_tp_icmpv6_attrs(tp, nl_tp);
}
}
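/* Reverse expansion used by the dpif_netlink_set_ct_dpif_tp_*_attrs()
 * helpers below: copy each attribute reported by the kernel in 'nl_tp' into
 * the ct_dpif timeout policy 'tp', warning if two kernel sub-policies
 * disagree on a value that is already present. */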
#define CT_DPIF_NL_TP_MAPPING(PROTO1, PROTO2, ATTR1, ATTR2) \
if (nl_tp->present & (1 << CTA_TIMEOUT_##PROTO2##_##ATTR2)) { \
if (tp->present & (1 << CT_DPIF_TP_ATTR_##PROTO1##_##ATTR1)) { \
if (tp->attrs[CT_DPIF_TP_ATTR_##PROTO1##_##ATTR1] != \
nl_tp->attrs[CTA_TIMEOUT_##PROTO2##_##ATTR2]) { \
VLOG_WARN_RL(&error_rl, "Inconsistent timeout policy %s " \
"attribute %s=%"PRIu32" while %s=%"PRIu32, \
nl_tp->name, "CTA_TIMEOUT_"#PROTO2"_"#ATTR2, \
nl_tp->attrs[CTA_TIMEOUT_##PROTO2##_##ATTR2], \
"CT_DPIF_TP_ATTR_"#PROTO1"_"#ATTR1, \
tp->attrs[CT_DPIF_TP_ATTR_##PROTO1##_##ATTR1]); \
} \
} else { \
tp->present |= 1 << CT_DPIF_TP_ATTR_##PROTO1##_##ATTR1; \
tp->attrs[CT_DPIF_TP_ATTR_##PROTO1##_##ATTR1] = \
nl_tp->attrs[CTA_TIMEOUT_##PROTO2##_##ATTR2]; \
} \
}
static void
dpif_netlink_set_ct_dpif_tp_tcp_attrs(const struct nl_ct_timeout_policy *nl_tp,
struct ct_dpif_timeout_policy *tp)
{
CT_DPIF_NL_TP_TCP_MAPPINGS
}
static void
dpif_netlink_set_ct_dpif_tp_udp_attrs(const struct nl_ct_timeout_policy *nl_tp,
struct ct_dpif_timeout_policy *tp)
{
CT_DPIF_NL_TP_UDP_MAPPINGS
}
static void
dpif_netlink_set_ct_dpif_tp_icmp_attrs(
const struct nl_ct_timeout_policy *nl_tp,
struct ct_dpif_timeout_policy *tp)
{
CT_DPIF_NL_TP_ICMP_MAPPINGS
}
static void
dpif_netlink_set_ct_dpif_tp_icmpv6_attrs(
const struct nl_ct_timeout_policy *nl_tp,
struct ct_dpif_timeout_policy *tp)
{
CT_DPIF_NL_TP_ICMPV6_MAPPINGS
}
#undef CT_DPIF_NL_TP_MAPPING
static void
dpif_netlink_set_ct_dpif_tp_attrs(const struct nl_ct_timeout_policy *nl_tp,
struct ct_dpif_timeout_policy *tp)
{
if (nl_tp->l4num == IPPROTO_TCP) {
dpif_netlink_set_ct_dpif_tp_tcp_attrs(nl_tp, tp);
} else if (nl_tp->l4num == IPPROTO_UDP) {
dpif_netlink_set_ct_dpif_tp_udp_attrs(nl_tp, tp);
} else if (nl_tp->l4num == IPPROTO_ICMP) {
dpif_netlink_set_ct_dpif_tp_icmp_attrs(nl_tp, tp);
} else if (nl_tp->l4num == IPPROTO_ICMPV6) {
dpif_netlink_set_ct_dpif_tp_icmpv6_attrs(nl_tp, tp);
}
}
#ifdef _WIN32
static int
dpif_netlink_ct_set_timeout_policy(struct dpif *dpif OVS_UNUSED,
const struct ct_dpif_timeout_policy *tp)
{
return EOPNOTSUPP;
}
static int
dpif_netlink_ct_get_timeout_policy(struct dpif *dpif OVS_UNUSED,
uint32_t tp_id,
struct ct_dpif_timeout_policy *tp)
{
return EOPNOTSUPP;
}
static int
dpif_netlink_ct_del_timeout_policy(struct dpif *dpif OVS_UNUSED,
uint32_t tp_id)
{
return EOPNOTSUPP;
}
static int
dpif_netlink_ct_timeout_policy_dump_start(struct dpif *dpif OVS_UNUSED,
void **statep)
{
return EOPNOTSUPP;
}
static int
dpif_netlink_ct_timeout_policy_dump_next(struct dpif *dpif OVS_UNUSED,
void *state,
struct ct_dpif_timeout_policy **tp)
{
return EOPNOTSUPP;
}
static int
dpif_netlink_ct_timeout_policy_dump_done(struct dpif *dpif OVS_UNUSED,
void *state)
{
return EOPNOTSUPP;
}
#else
static int
dpif_netlink_ct_set_timeout_policy(struct dpif *dpif OVS_UNUSED,
const struct ct_dpif_timeout_policy *tp)
{
int err = 0;
for (int i = 0; i < ARRAY_SIZE(tp_protos); ++i) {
struct nl_ct_timeout_policy nl_tp;
char *nl_tp_name;
dpif_netlink_format_tp_name(tp->id, tp_protos[i].l3num,
tp_protos[i].l4num, &nl_tp_name);
ovs_strlcpy(nl_tp.name, nl_tp_name, sizeof nl_tp.name);
free(nl_tp_name);
nl_tp.l3num = tp_protos[i].l3num;
nl_tp.l4num = tp_protos[i].l4num;
dpif_netlink_get_nl_tp_attrs(tp, tp_protos[i].l4num, &nl_tp);
err = nl_ct_set_timeout_policy(&nl_tp);
if (err) {
VLOG_WARN_RL(&error_rl, "failed to add timeout policy %s (%s)",
nl_tp.name, ovs_strerror(err));
goto out;
}
}
out:
return err;
}
static int
dpif_netlink_ct_get_timeout_policy(struct dpif *dpif OVS_UNUSED,
uint32_t tp_id,
struct ct_dpif_timeout_policy *tp)
{
int err = 0;
tp->id = tp_id;
tp->present = 0;
for (int i = 0; i < ARRAY_SIZE(tp_protos); ++i) {
struct nl_ct_timeout_policy nl_tp;
char *nl_tp_name;
dpif_netlink_format_tp_name(tp_id, tp_protos[i].l3num,
tp_protos[i].l4num, &nl_tp_name);
err = nl_ct_get_timeout_policy(nl_tp_name, &nl_tp);
if (err) {
VLOG_WARN_RL(&error_rl, "failed to get timeout policy %s (%s)",
nl_tp_name, ovs_strerror(err));
free(nl_tp_name);
goto out;
}
free(nl_tp_name);
dpif_netlink_set_ct_dpif_tp_attrs(&nl_tp, tp);
}
out:
return err;
}
/* Returns 0 if all the sub timeout policies are deleted or do not exist in
* the kernel.  Returns 1 if deletion of any sub timeout policy failed. */
static int
dpif_netlink_ct_del_timeout_policy(struct dpif *dpif OVS_UNUSED,
uint32_t tp_id)
{
int ret = 0;
for (int i = 0; i < ARRAY_SIZE(tp_protos); ++i) {
char *nl_tp_name;
dpif_netlink_format_tp_name(tp_id, tp_protos[i].l3num,
tp_protos[i].l4num, &nl_tp_name);
int err = nl_ct_del_timeout_policy(nl_tp_name);
if (err == ENOENT) {
err = 0;
}
if (err) {
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(6, 6);
VLOG_INFO_RL(&rl, "failed to delete timeout policy %s (%s)",
nl_tp_name, ovs_strerror(err));
ret = 1;
}
free(nl_tp_name);
}
return ret;
}
struct dpif_netlink_ct_timeout_policy_dump_state {
struct nl_ct_timeout_policy_dump_state *nl_dump_state;
struct hmap tp_dump_map;
};
struct dpif_netlink_tp_dump_node {
struct hmap_node hmap_node; /* node in tp_dump_map. */
struct ct_dpif_timeout_policy *tp;
uint32_t l3_l4_present;
};
static struct dpif_netlink_tp_dump_node *
get_dpif_netlink_tp_dump_node_by_tp_id(uint32_t tp_id,
struct hmap *tp_dump_map)
{
struct dpif_netlink_tp_dump_node *tp_dump_node;
HMAP_FOR_EACH_WITH_HASH (tp_dump_node, hmap_node, hash_int(tp_id, 0),
tp_dump_map) {
if (tp_dump_node->tp->id == tp_id) {
return tp_dump_node;
}
}
return NULL;
}
static void
update_dpif_netlink_tp_dump_node(
const struct nl_ct_timeout_policy *nl_tp,
struct dpif_netlink_tp_dump_node *tp_dump_node)
{
dpif_netlink_set_ct_dpif_tp_attrs(nl_tp, tp_dump_node->tp);
for (int i = 0; i < DPIF_NL_TP_MAX; ++i) {
if (nl_tp->l3num == tp_protos[i].l3num &&
nl_tp->l4num == tp_protos[i].l4num) {
tp_dump_node->l3_l4_present |= 1 << i;
break;
}
}
}
static int
dpif_netlink_ct_timeout_policy_dump_start(struct dpif *dpif OVS_UNUSED,
void **statep)
{
struct dpif_netlink_ct_timeout_policy_dump_state *dump_state;
*statep = dump_state = xzalloc(sizeof *dump_state);
int err = nl_ct_timeout_policy_dump_start(&dump_state->nl_dump_state);
if (err) {
free(dump_state);
return err;
}
hmap_init(&dump_state->tp_dump_map);
return 0;
}
static void
get_and_cleanup_tp_dump_node(struct hmap *hmap,
struct dpif_netlink_tp_dump_node *tp_dump_node,
struct ct_dpif_timeout_policy *tp)
{
hmap_remove(hmap, &tp_dump_node->hmap_node);
*tp = *tp_dump_node->tp;
free(tp_dump_node->tp);
free(tp_dump_node);
}
static int
dpif_netlink_ct_timeout_policy_dump_next(struct dpif *dpif OVS_UNUSED,
void *state,
struct ct_dpif_timeout_policy *tp)
{
struct dpif_netlink_ct_timeout_policy_dump_state *dump_state = state;
struct dpif_netlink_tp_dump_node *tp_dump_node;
int err;
/* Dumps all the timeout policies in the kernel. */
do {
struct nl_ct_timeout_policy nl_tp;
uint32_t tp_id;
err = nl_ct_timeout_policy_dump_next(dump_state->nl_dump_state,
&nl_tp);
if (err) {
break;
}
/* We are only interested in OVS-installed timeout policies. */
if (!ovs_scan(nl_tp.name, NL_TP_NAME_PREFIX"%"PRIu32, &tp_id)) {
continue;
}
tp_dump_node = get_dpif_netlink_tp_dump_node_by_tp_id(
tp_id, &dump_state->tp_dump_map);
if (!tp_dump_node) {
tp_dump_node = xzalloc(sizeof *tp_dump_node);
tp_dump_node->tp = xzalloc(sizeof *tp_dump_node->tp);
tp_dump_node->tp->id = tp_id;
hmap_insert(&dump_state->tp_dump_map, &tp_dump_node->hmap_node,
hash_int(tp_id, 0));
}
update_dpif_netlink_tp_dump_node(&nl_tp, tp_dump_node);
/* Return one ct_dpif_timeout_policy once all of its L3/L4
* sub-pieces have been gathered. */
if (tp_dump_node->l3_l4_present == DPIF_NL_ALL_TP) {
get_and_cleanup_tp_dump_node(&dump_state->tp_dump_map,
tp_dump_node, tp);
break;
}
} while (true);
/* At EOF, return any remaining (incomplete) timeout policies, one per call. */
if (err == EOF) {
if (!hmap_is_empty(&dump_state->tp_dump_map)) {
struct hmap_node *hmap_node = hmap_first(&dump_state->tp_dump_map);
tp_dump_node = CONTAINER_OF(hmap_node,
struct dpif_netlink_tp_dump_node,
hmap_node);
get_and_cleanup_tp_dump_node(&dump_state->tp_dump_map,
tp_dump_node, tp);
return 0;
}
}
return err;
}
static int
dpif_netlink_ct_timeout_policy_dump_done(struct dpif *dpif OVS_UNUSED,
void *state)
{
struct dpif_netlink_ct_timeout_policy_dump_state *dump_state = state;
struct dpif_netlink_tp_dump_node *tp_dump_node;
int err = nl_ct_timeout_policy_dump_done(dump_state->nl_dump_state);
HMAP_FOR_EACH_POP (tp_dump_node, hmap_node, &dump_state->tp_dump_map) {
free(tp_dump_node->tp);
free(tp_dump_node);
}
hmap_destroy(&dump_state->tp_dump_map);
free(dump_state);
return err;
}
#endif
/* Meters */
/* Set of supported meter flags */
#define DP_SUPPORTED_METER_FLAGS_MASK \
(OFPMF13_STATS | OFPMF13_PKTPS | OFPMF13_KBPS | OFPMF13_BURST)
/* Meter support was introduced in Linux 4.15. In some versions of
* Linux 4.15, 4.16, and 4.17, there was a bug that never set the id
* when the meter was created, so all meters essentially had an id of
* zero. Check for that condition and disable meters on those kernels. */
static bool probe_broken_meters(struct dpif *);
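/* Initializes 'buf' (backed by 'stub' of 'size' bytes) with the generic
 * Netlink and ovs_header boilerplate for a meter request with the given
 * 'command', targeting 'dpif''s datapath ifindex. */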
static void
dpif_netlink_meter_init(struct dpif_netlink *dpif, struct ofpbuf *buf,
void *stub, size_t size, uint32_t command)
{
ofpbuf_use_stub(buf, stub, size);
nl_msg_put_genlmsghdr(buf, 0, ovs_meter_family, NLM_F_REQUEST | NLM_F_ECHO,
command, OVS_METER_VERSION);
struct ovs_header *ovs_header;
ovs_header = ofpbuf_put_uninit(buf, sizeof *ovs_header);
ovs_header->dp_ifindex = dpif->dp_ifindex;
}
/* Executes meter 'request' in the kernel datapath.  If the command
* fails, returns a positive errno value.  Otherwise, stores the reply
* in '*replyp', parses the reply according to 'reply_policy' into the
* array of Netlink attributes 'a', and returns 0.  On success, the
* caller is responsible for calling ofpbuf_delete() on '*replyp'
* ('a' will contain pointers into '*replyp'). */
static int
dpif_netlink_meter_transact(struct ofpbuf *request, struct ofpbuf **replyp,
const struct nl_policy *reply_policy,
struct nlattr **a, size_t size_a)
{
int error = nl_transact(NETLINK_GENERIC, request, replyp);
ofpbuf_uninit(request);
if (error) {
return error;
}
struct nlmsghdr *nlmsg = ofpbuf_try_pull(*replyp, sizeof *nlmsg);
struct genlmsghdr *genl = ofpbuf_try_pull(*replyp, sizeof *genl);
struct ovs_header *ovs_header = ofpbuf_try_pull(*replyp,
sizeof *ovs_header);
if (!nlmsg || !genl || !ovs_header
|| nlmsg->nlmsg_type != ovs_meter_family
|| !nl_policy_parse(*replyp, 0, reply_policy, a, size_a)) {
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
VLOG_DBG_RL(&rl,
"Kernel module response to meter tranaction is invalid");
return EINVAL;
}
return 0;
}
static void
dpif_netlink_meter_get_features(const struct dpif *dpif_,
struct ofputil_meter_features *features)
{
if (probe_broken_meters(CONST_CAST(struct dpif *, dpif_))) {
return;
}
struct ofpbuf buf, *msg;
uint64_t stub[1024 / 8];
static const struct nl_policy ovs_meter_features_policy[] = {
[OVS_METER_ATTR_MAX_METERS] = { .type = NL_A_U32 },
[OVS_METER_ATTR_MAX_BANDS] = { .type = NL_A_U32 },
[OVS_METER_ATTR_BANDS] = { .type = NL_A_NESTED, .optional = true },
};
struct nlattr *a[ARRAY_SIZE(ovs_meter_features_policy)];
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
dpif_netlink_meter_init(dpif, &buf, stub, sizeof stub,
OVS_METER_CMD_FEATURES);
if (dpif_netlink_meter_transact(&buf, &msg, ovs_meter_features_policy, a,
ARRAY_SIZE(ovs_meter_features_policy))) {
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
VLOG_INFO_RL(&rl,
"dpif_netlink_meter_transact OVS_METER_CMD_FEATURES failed");
return;
}
features->max_meters = nl_attr_get_u32(a[OVS_METER_ATTR_MAX_METERS]);
features->max_bands = nl_attr_get_u32(a[OVS_METER_ATTR_MAX_BANDS]);
/* OVS_METER_ATTR_BANDS is a nested attribute containing zero or more
* nested band attributes. */
if (a[OVS_METER_ATTR_BANDS]) {
const struct nlattr *nla;
size_t left;
NL_NESTED_FOR_EACH (nla, left, a[OVS_METER_ATTR_BANDS]) {
const struct nlattr *band_nla;
size_t band_left;
NL_NESTED_FOR_EACH (band_nla, band_left, nla) {
if (nl_attr_type(band_nla) == OVS_BAND_ATTR_TYPE) {
if (nl_attr_get_size(band_nla) == sizeof(uint32_t)) {
switch (nl_attr_get_u32(band_nla)) {
case OVS_METER_BAND_TYPE_DROP:
features->band_types |= 1 << OFPMBT13_DROP;
break;
}
}
}
}
}
}
features->capabilities = DP_SUPPORTED_METER_FLAGS_MASK;
ofpbuf_delete(msg);
}
static int
dpif_netlink_meter_set__(struct dpif *dpif_, ofproto_meter_id meter_id,
struct ofputil_meter_config *config)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct ofpbuf buf, *msg;
uint64_t stub[1024 / 8];
static const struct nl_policy ovs_meter_set_response_policy[] = {
[OVS_METER_ATTR_ID] = { .type = NL_A_U32 },
};
struct nlattr *a[ARRAY_SIZE(ovs_meter_set_response_policy)];
if (config->flags & ~DP_SUPPORTED_METER_FLAGS_MASK) {
return EBADF; /* Unsupported flags set */
}
for (size_t i = 0; i < config->n_bands; i++) {
switch (config->bands[i].type) {
case OFPMBT13_DROP:
break;
default:
return ENODEV; /* Unsupported band type */
}
}
dpif_netlink_meter_init(dpif, &buf, stub, sizeof stub, OVS_METER_CMD_SET);
nl_msg_put_u32(&buf, OVS_METER_ATTR_ID, meter_id.uint32);
if (config->flags & OFPMF13_KBPS) {
nl_msg_put_flag(&buf, OVS_METER_ATTR_KBPS);
}
size_t bands_offset = nl_msg_start_nested(&buf, OVS_METER_ATTR_BANDS);
/* Bands */
for (size_t i = 0; i < config->n_bands; ++i) {
struct ofputil_meter_band * band = &config->bands[i];
uint32_t band_type;
size_t band_offset = nl_msg_start_nested(&buf, OVS_BAND_ATTR_UNSPEC);
switch (band->type) {
case OFPMBT13_DROP:
band_type = OVS_METER_BAND_TYPE_DROP;
break;
default:
band_type = OVS_METER_BAND_TYPE_UNSPEC;
}
nl_msg_put_u32(&buf, OVS_BAND_ATTR_TYPE, band_type);
nl_msg_put_u32(&buf, OVS_BAND_ATTR_RATE, band->rate);
nl_msg_put_u32(&buf, OVS_BAND_ATTR_BURST,
config->flags & OFPMF13_BURST ?
band->burst_size : band->rate);
nl_msg_end_nested(&buf, band_offset);
}
nl_msg_end_nested(&buf, bands_offset);
int error = dpif_netlink_meter_transact(&buf, &msg,
ovs_meter_set_response_policy, a,
ARRAY_SIZE(ovs_meter_set_response_policy));
if (error) {
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
VLOG_INFO_RL(&rl,
"dpif_netlink_meter_transact OVS_METER_CMD_SET failed");
return error;
}
if (nl_attr_get_u32(a[OVS_METER_ATTR_ID]) != meter_id.uint32) {
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
VLOG_INFO_RL(&rl,
"Kernel returned a different meter id than requested");
}
ofpbuf_delete(msg);
return 0;
}
static int
dpif_netlink_meter_set(struct dpif *dpif_, ofproto_meter_id meter_id,
struct ofputil_meter_config *config)
{
int err;
if (probe_broken_meters(dpif_)) {
return ENOMEM;
}
err = dpif_netlink_meter_set__(dpif_, meter_id, config);
if (!err && netdev_is_flow_api_enabled()) {
meter_offload_set(meter_id, config);
}
return err;
}
/* Retrieve statistics and/or delete meter 'meter_id'. Statistics are
* stored in 'stats', if it is not null. If 'command' is
* OVS_METER_CMD_DEL, the meter is deleted and statistics are optionally
* retrieved. If 'command' is OVS_METER_CMD_GET, then statistics are
* simply retrieved. */
static int
dpif_netlink_meter_get_stats(const struct dpif *dpif_,
ofproto_meter_id meter_id,
struct ofputil_meter_stats *stats,
uint16_t max_bands,
enum ovs_meter_cmd command)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct ofpbuf buf, *msg;
uint64_t stub[1024 / 8];
static const struct nl_policy ovs_meter_stats_policy[] = {
[OVS_METER_ATTR_ID] = { .type = NL_A_U32, .optional = true},
[OVS_METER_ATTR_STATS] = { NL_POLICY_FOR(struct ovs_flow_stats),
.optional = true},
[OVS_METER_ATTR_BANDS] = { .type = NL_A_NESTED, .optional = true },
};
struct nlattr *a[ARRAY_SIZE(ovs_meter_stats_policy)];
dpif_netlink_meter_init(dpif, &buf, stub, sizeof stub, command);
nl_msg_put_u32(&buf, OVS_METER_ATTR_ID, meter_id.uint32);
int error = dpif_netlink_meter_transact(&buf, &msg,
ovs_meter_stats_policy, a,
ARRAY_SIZE(ovs_meter_stats_policy));
if (error) {
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
VLOG_INFO_RL(&rl, "dpif_netlink_meter_transact %s failed",
command == OVS_METER_CMD_GET ? "get" : "del");
return error;
}
if (stats
&& a[OVS_METER_ATTR_ID]
&& a[OVS_METER_ATTR_STATS]
&& nl_attr_get_u32(a[OVS_METER_ATTR_ID]) == meter_id.uint32) {
/* return stats */
const struct ovs_flow_stats *stat;
const struct nlattr *nla;
size_t left;
stat = nl_attr_get(a[OVS_METER_ATTR_STATS]);
stats->packet_in_count = get_32aligned_u64(&stat->n_packets);
stats->byte_in_count = get_32aligned_u64(&stat->n_bytes);
if (a[OVS_METER_ATTR_BANDS]) {
size_t n_bands = 0;
NL_NESTED_FOR_EACH (nla, left, a[OVS_METER_ATTR_BANDS]) {
const struct nlattr *band_nla;
band_nla = nl_attr_find_nested(nla, OVS_BAND_ATTR_STATS);
if (band_nla && nl_attr_get_size(band_nla) \
== sizeof(struct ovs_flow_stats)) {
stat = nl_attr_get(band_nla);
if (n_bands < max_bands) {
stats->bands[n_bands].packet_count
= get_32aligned_u64(&stat->n_packets);
stats->bands[n_bands].byte_count
= get_32aligned_u64(&stat->n_bytes);
++n_bands;
}
} else if (n_bands < max_bands) {
stats->bands[n_bands].packet_count = 0;
stats->bands[n_bands].byte_count = 0;
++n_bands;
}
}
stats->n_bands = n_bands;
} else {
/* For a non-existent meter, return 0 stats. */
stats->n_bands = 0;
}
}
ofpbuf_delete(msg);
return error;
}
static int
dpif_netlink_meter_get(const struct dpif *dpif, ofproto_meter_id meter_id,
struct ofputil_meter_stats *stats, uint16_t max_bands)
{
int err;
err = dpif_netlink_meter_get_stats(dpif, meter_id, stats, max_bands,
OVS_METER_CMD_GET);
if (!err && netdev_is_flow_api_enabled()) {
meter_offload_get(meter_id, stats);
}
return err;
}
static int
dpif_netlink_meter_del(struct dpif *dpif, ofproto_meter_id meter_id,
struct ofputil_meter_stats *stats, uint16_t max_bands)
{
int err;
err = dpif_netlink_meter_get_stats(dpif, meter_id, stats,
max_bands, OVS_METER_CMD_DEL);
if (!err && netdev_is_flow_api_enabled()) {
meter_offload_del(meter_id, stats);
}
return err;
}
static bool
probe_broken_meters__(struct dpif *dpif)
{
/* This test is destructive if a probe occurs while ovs-vswitchd is
* running (e.g., an ovs-dpctl meter command is called), so choose a
* random high meter id to make this less likely to occur. */
ofproto_meter_id id1 = { 54545401 };
ofproto_meter_id id2 = { 54545402 };
struct ofputil_meter_band band = {OFPMBT13_DROP, 0, 1, 0};
struct ofputil_meter_config config1 = { 1, OFPMF13_KBPS, 1, &band};
struct ofputil_meter_config config2 = { 2, OFPMF13_KBPS, 1, &band};
/* Try adding two meters and make sure that they both come back with
* the proper meter id.  Use the "__" version so that we don't cause
* a recursive deadlock. */
dpif_netlink_meter_set__(dpif, id1, &config1);
dpif_netlink_meter_set__(dpif, id2, &config2);
if (dpif_netlink_meter_get(dpif, id1, NULL, 0)
|| dpif_netlink_meter_get(dpif, id2, NULL, 0)) {
VLOG_INFO("The kernel module has a broken meter implementation.");
return true;
}
dpif_netlink_meter_del(dpif, id1, NULL, 0);
dpif_netlink_meter_del(dpif, id2, NULL, 0);
return false;
}
static bool
probe_broken_meters(struct dpif *dpif)
{
/* This is a once-only test because currently OVS only has at most a single
* Netlink capable datapath on any given platform. */
static struct ovsthread_once once = OVSTHREAD_ONCE_INITIALIZER;
static bool broken_meters = false;
if (ovsthread_once_start(&once)) {
broken_meters = probe_broken_meters__(dpif);
ovsthread_once_done(&once);
}
return broken_meters;
}
static int
dpif_netlink_cache_get_supported_levels(struct dpif *dpif_, uint32_t *levels)
{
struct dpif_netlink_dp dp;
struct ofpbuf *buf;
int error;
/* The kernel datapath supports at most one level of cache.
* Unfortunately, there is no way to detect whether an older kernel module
* has the cache feature, so we only report cache information if the
* kernel module reports the OVS_DP_ATTR_MASKS_CACHE_SIZE attribute. */
*levels = 0;
error = dpif_netlink_dp_get(dpif_, &dp, &buf);
if (!error) {
if (dp.cache_size != UINT32_MAX) {
*levels = 1;
}
ofpbuf_delete(buf);
}
return error;
}
static int
dpif_netlink_cache_get_name(struct dpif *dpif_ OVS_UNUSED, uint32_t level,
const char **name)
{
if (level != 0) {
return EINVAL;
}
*name = "masks-cache";
return 0;
}
static int
dpif_netlink_cache_get_size(struct dpif *dpif_, uint32_t level, uint32_t *size)
{
struct dpif_netlink_dp dp;
struct ofpbuf *buf;
int error;
if (level != 0) {
return EINVAL;
}
error = dpif_netlink_dp_get(dpif_, &dp, &buf);
if (!error) {
ofpbuf_delete(buf);
if (dp.cache_size == UINT32_MAX) {
return EOPNOTSUPP;
}
*size = dp.cache_size;
}
return error;
}
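/* Sets the megaflow mask cache size.  The requested 'size' is rounded up to
 * the next power of two and sent with OVS_DP_CMD_SET; returns EINVAL if the
 * kernel ends up with a size different from the one requested. */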
static int
dpif_netlink_cache_set_size(struct dpif *dpif_, uint32_t level, uint32_t size)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct dpif_netlink_dp request, reply;
struct ofpbuf *bufp;
int error;
size = ROUND_UP_POW2(size);
if (level != 0) {
return EINVAL;
}
dpif_netlink_dp_init(&request);
request.cmd = OVS_DP_CMD_SET;
request.name = dpif_->base_name;
request.dp_ifindex = dpif->dp_ifindex;
request.cache_size = size;
/* We need to set the dpif user_features, as the kernel module assumes the
* OVS_DP_ATTR_USER_FEATURES attribute is always present. If not, it will
* reset all the features. */
request.user_features = dpif->user_features;
error = dpif_netlink_dp_transact(&request, &reply, &bufp);
if (!error) {
ofpbuf_delete(bufp);
if (reply.cache_size != size) {
return EINVAL;
}
}
return error;
}
const struct dpif_class dpif_netlink_class = {
"system",
false, /* cleanup_required */
false, /* synced_dp_layers */
NULL, /* init */
dpif_netlink_enumerate,
NULL,
dpif_netlink_open,
dpif_netlink_close,
dpif_netlink_destroy,
dpif_netlink_run,
NULL, /* wait */
dpif_netlink_get_stats,
dpif_netlink_set_features,
dpif_netlink_port_add,
dpif_netlink_port_del,
NULL, /* port_set_config */
dpif_netlink_port_query_by_number,
dpif_netlink_port_query_by_name,
dpif_netlink_port_get_pid,
dpif_netlink_port_dump_start,
dpif_netlink_port_dump_next,
dpif_netlink_port_dump_done,
dpif_netlink_port_poll,
dpif_netlink_port_poll_wait,
dpif_netlink_flow_flush,
dpif_netlink_flow_dump_create,
dpif_netlink_flow_dump_destroy,
dpif_netlink_flow_dump_thread_create,
dpif_netlink_flow_dump_thread_destroy,
dpif_netlink_flow_dump_next,
dpif_netlink_operate,
NULL, /* offload_stats_get */
dpif_netlink_recv_set,
dpif_netlink_handlers_set,
dpif_netlink_number_handlers_required,
NULL, /* set_config */
dpif_netlink_queue_to_priority,
dpif_netlink_recv,
dpif_netlink_recv_wait,
dpif_netlink_recv_purge,
NULL, /* register_dp_purge_cb */
NULL, /* register_upcall_cb */
NULL, /* enable_upcall */
NULL, /* disable_upcall */
dpif_netlink_get_datapath_version, /* get_datapath_version */
dpif_netlink_ct_dump_start,
dpif_netlink_ct_dump_next,
dpif_netlink_ct_dump_done,
NULL, /* ct_exp_dump_start */
NULL, /* ct_exp_dump_next */
NULL, /* ct_exp_dump_done */
dpif_netlink_ct_flush,
NULL, /* ct_set_maxconns */
NULL, /* ct_get_maxconns */
NULL, /* ct_get_nconns */
NULL, /* ct_set_tcp_seq_chk */
NULL, /* ct_get_tcp_seq_chk */
NULL, /* ct_set_sweep_interval */
NULL, /* ct_get_sweep_interval */
dpif_netlink_ct_set_limits,
dpif_netlink_ct_get_limits,
dpif_netlink_ct_del_limits,
dpif_netlink_ct_set_timeout_policy,
dpif_netlink_ct_get_timeout_policy,
dpif_netlink_ct_del_timeout_policy,
dpif_netlink_ct_timeout_policy_dump_start,
dpif_netlink_ct_timeout_policy_dump_next,
dpif_netlink_ct_timeout_policy_dump_done,
dpif_netlink_ct_get_timeout_policy_name,
dpif_netlink_ct_get_features,
NULL, /* ipf_set_enabled */
NULL, /* ipf_set_min_frag */
NULL, /* ipf_set_max_nfrags */
NULL, /* ipf_get_status */
NULL, /* ipf_dump_start */
NULL, /* ipf_dump_next */
NULL, /* ipf_dump_done */
dpif_netlink_meter_get_features,
dpif_netlink_meter_set,
dpif_netlink_meter_get,
dpif_netlink_meter_del,
NULL, /* bond_add */
NULL, /* bond_del */
NULL, /* bond_stats_get */
dpif_netlink_cache_get_supported_levels,
dpif_netlink_cache_get_name,
dpif_netlink_cache_get_size,
dpif_netlink_cache_set_size,
};
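/* One-time lookup of the generic Netlink families used by this dpif.  The
 * result (including any error) is cached.  The datapath, vport, flow, and
 * packet families are required; a missing datapath family usually means the
 * openvswitch kernel module is not loaded.  The meter and conntrack-limit
 * families are optional.  Also registers the "dpif-netlink/dispatch-mode"
 * unixctl command. */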
static int
dpif_netlink_init(void)
{
static struct ovsthread_once once = OVSTHREAD_ONCE_INITIALIZER;
static int error;
if (ovsthread_once_start(&once)) {
error = nl_lookup_genl_family(OVS_DATAPATH_FAMILY,
&ovs_datapath_family);
if (error) {
VLOG_INFO("Generic Netlink family '%s' does not exist. "
"The Open vSwitch kernel module is probably not loaded.",
OVS_DATAPATH_FAMILY);
}
if (!error) {
error = nl_lookup_genl_family(OVS_VPORT_FAMILY, &ovs_vport_family);
}
if (!error) {
error = nl_lookup_genl_family(OVS_FLOW_FAMILY, &ovs_flow_family);
}
if (!error) {
error = nl_lookup_genl_family(OVS_PACKET_FAMILY,
&ovs_packet_family);
}
if (!error) {
error = nl_lookup_genl_mcgroup(OVS_VPORT_FAMILY, OVS_VPORT_MCGROUP,
&ovs_vport_mcgroup);
}
if (!error) {
if (nl_lookup_genl_family(OVS_METER_FAMILY, &ovs_meter_family)) {
VLOG_INFO("The kernel module does not support meters.");
}
}
if (nl_lookup_genl_family(OVS_CT_LIMIT_FAMILY,
&ovs_ct_limit_family) < 0) {
VLOG_INFO("Generic Netlink family '%s' does not exist. "
"Please update the Open vSwitch kernel module to enable "
"the conntrack limit feature.", OVS_CT_LIMIT_FAMILY);
}
ovs_tunnels_out_of_tree = dpif_netlink_rtnl_probe_oot_tunnels();
unixctl_command_register("dpif-netlink/dispatch-mode", "", 0, 0,
dpif_netlink_unixctl_dispatch_mode, NULL);
ovsthread_once_done(&once);
}
return error;
}
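/* Returns true if the kernel reports a vport named 'name' of type
 * OVS_VPORT_TYPE_INTERNAL, false otherwise (including when the query
 * fails). */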
bool
dpif_netlink_is_internal_device(const char *name)
{
struct dpif_netlink_vport reply;
struct ofpbuf *buf;
int error;
error = dpif_netlink_vport_get(name, &reply, &buf);
if (!error) {
ofpbuf_delete(buf);
} else if (error != ENODEV && error != ENOENT) {
VLOG_WARN_RL(&error_rl, "%s: vport query failed (%s)",
name, ovs_strerror(error));
}
return reply.type == OVS_VPORT_TYPE_INTERNAL;
}
/* Parses the contents of 'buf', which contains a "struct ovs_header" followed
* by Netlink attributes, into 'vport'. Returns 0 if successful, otherwise a
* positive errno value.
*
* 'vport' will contain pointers into 'buf', so the caller should not free
* 'buf' while 'vport' is still in use. */
static int
dpif_netlink_vport_from_ofpbuf(struct dpif_netlink_vport *vport,
const struct ofpbuf *buf)
{
static const struct nl_policy ovs_vport_policy[] = {
[OVS_VPORT_ATTR_PORT_NO] = { .type = NL_A_U32 },
[OVS_VPORT_ATTR_TYPE] = { .type = NL_A_U32 },
[OVS_VPORT_ATTR_NAME] = { .type = NL_A_STRING, .max_len = IFNAMSIZ },
[OVS_VPORT_ATTR_UPCALL_PID] = { .type = NL_A_UNSPEC },
[OVS_VPORT_ATTR_STATS] = { NL_POLICY_FOR(struct ovs_vport_stats),
.optional = true },
[OVS_VPORT_ATTR_OPTIONS] = { .type = NL_A_NESTED, .optional = true },
[OVS_VPORT_ATTR_NETNSID] = { .type = NL_A_U32, .optional = true },
[OVS_VPORT_ATTR_UPCALL_STATS] = { .type = NL_A_NESTED,
.optional = true },
};
dpif_netlink_vport_init(vport);
struct ofpbuf b = ofpbuf_const_initializer(buf->data, buf->size);
struct nlmsghdr *nlmsg = ofpbuf_try_pull(&b, sizeof *nlmsg);
struct genlmsghdr *genl = ofpbuf_try_pull(&b, sizeof *genl);
struct ovs_header *ovs_header = ofpbuf_try_pull(&b, sizeof *ovs_header);
struct nlattr *a[ARRAY_SIZE(ovs_vport_policy)];
if (!nlmsg || !genl || !ovs_header
|| nlmsg->nlmsg_type != ovs_vport_family
|| !nl_policy_parse(&b, 0, ovs_vport_policy, a,
ARRAY_SIZE(ovs_vport_policy))) {
return EINVAL;
}
vport->cmd = genl->cmd;
vport->dp_ifindex = ovs_header->dp_ifindex;
vport->port_no = nl_attr_get_odp_port(a[OVS_VPORT_ATTR_PORT_NO]);
vport->type = nl_attr_get_u32(a[OVS_VPORT_ATTR_TYPE]);
vport->name = nl_attr_get_string(a[OVS_VPORT_ATTR_NAME]);
if (a[OVS_VPORT_ATTR_UPCALL_PID]) {
vport->n_upcall_pids = nl_attr_get_size(a[OVS_VPORT_ATTR_UPCALL_PID])
/ (sizeof *vport->upcall_pids);
vport->upcall_pids = nl_attr_get(a[OVS_VPORT_ATTR_UPCALL_PID]);
}
if (a[OVS_VPORT_ATTR_STATS]) {
vport->stats = nl_attr_get(a[OVS_VPORT_ATTR_STATS]);
}
if (a[OVS_VPORT_ATTR_UPCALL_STATS]) {
const struct nlattr *nla;
size_t left;
NL_NESTED_FOR_EACH (nla, left, a[OVS_VPORT_ATTR_UPCALL_STATS]) {
if (nl_attr_type(nla) == OVS_VPORT_UPCALL_ATTR_SUCCESS) {
vport->upcall_success = nl_attr_get_u64(nla);
} else if (nl_attr_type(nla) == OVS_VPORT_UPCALL_ATTR_FAIL) {
vport->upcall_fail = nl_attr_get_u64(nla);
}
}
} else {
vport->upcall_success = UINT64_MAX;
vport->upcall_fail = UINT64_MAX;
}
if (a[OVS_VPORT_ATTR_OPTIONS]) {
vport->options = nl_attr_get(a[OVS_VPORT_ATTR_OPTIONS]);
vport->options_len = nl_attr_get_size(a[OVS_VPORT_ATTR_OPTIONS]);
}
if (a[OVS_VPORT_ATTR_NETNSID]) {
netnsid_set(&vport->netnsid,
nl_attr_get_u32(a[OVS_VPORT_ATTR_NETNSID]));
} else {
netnsid_set_local(&vport->netnsid);
}
return 0;
}
/* Appends to 'buf' (which must initially be empty) a "struct ovs_header"
* followed by Netlink attributes corresponding to 'vport'. */
static void
dpif_netlink_vport_to_ofpbuf(const struct dpif_netlink_vport *vport,
struct ofpbuf *buf)
{
struct ovs_header *ovs_header;
nl_msg_put_genlmsghdr(buf, 0, ovs_vport_family, NLM_F_REQUEST | NLM_F_ECHO,
vport->cmd, OVS_VPORT_VERSION);
ovs_header = ofpbuf_put_uninit(buf, sizeof *ovs_header);
ovs_header->dp_ifindex = vport->dp_ifindex;
if (vport->port_no != ODPP_NONE) {
nl_msg_put_odp_port(buf, OVS_VPORT_ATTR_PORT_NO, vport->port_no);
}
if (vport->type != OVS_VPORT_TYPE_UNSPEC) {
nl_msg_put_u32(buf, OVS_VPORT_ATTR_TYPE, vport->type);
}
if (vport->name) {
nl_msg_put_string(buf, OVS_VPORT_ATTR_NAME, vport->name);
}
if (vport->upcall_pids) {
nl_msg_put_unspec(buf, OVS_VPORT_ATTR_UPCALL_PID,
vport->upcall_pids,
vport->n_upcall_pids * sizeof *vport->upcall_pids);
}
if (vport->stats) {
nl_msg_put_unspec(buf, OVS_VPORT_ATTR_STATS,
vport->stats, sizeof *vport->stats);
}
if (vport->options) {
nl_msg_put_nested(buf, OVS_VPORT_ATTR_OPTIONS,
vport->options, vport->options_len);
}
}
/* Clears 'vport' to "empty" values. */
void
dpif_netlink_vport_init(struct dpif_netlink_vport *vport)
{
memset(vport, 0, sizeof *vport);
vport->port_no = ODPP_NONE;
}
/* Executes 'request' in the kernel datapath. If the command fails, returns a
* positive errno value. Otherwise, if 'reply' and 'bufp' are null, returns 0
* without doing anything else. If 'reply' and 'bufp' are nonnull, then the
* result of the command is expected to be an ovs_vport also, which is decoded
* and stored in '*reply' and '*bufp'. The caller must free '*bufp' when the
* reply is no longer needed ('reply' will contain pointers into '*bufp'). */
int
dpif_netlink_vport_transact(const struct dpif_netlink_vport *request,
struct dpif_netlink_vport *reply,
struct ofpbuf **bufp)
{
struct ofpbuf *request_buf;
int error;
ovs_assert((reply != NULL) == (bufp != NULL));
error = dpif_netlink_init();
if (error) {
if (reply) {
*bufp = NULL;
dpif_netlink_vport_init(reply);
}
return error;
}
request_buf = ofpbuf_new(1024);
dpif_netlink_vport_to_ofpbuf(request, request_buf);
error = nl_transact(NETLINK_GENERIC, request_buf, bufp);
ofpbuf_delete(request_buf);
if (reply) {
if (!error) {
error = dpif_netlink_vport_from_ofpbuf(reply, *bufp);
}
if (error) {
dpif_netlink_vport_init(reply);
ofpbuf_delete(*bufp);
*bufp = NULL;
}
}
return error;
}
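/* Illustrative sketch of the calling convention above (hypothetical caller,
 * not part of this file; 'dp_ifindex' and 'port_no' are assumed to be
 * already known):
 *
 *     struct dpif_netlink_vport request, reply;
 *     struct ofpbuf *buf;
 *     int error;
 *
 *     // Fire-and-forget: no echoed vport wanted, so pass NULL/NULL.
 *     dpif_netlink_vport_init(&request);
 *     request.cmd = OVS_VPORT_CMD_DEL;
 *     request.dp_ifindex = dp_ifindex;
 *     request.port_no = port_no;
 *     error = dpif_netlink_vport_transact(&request, NULL, NULL);
 *
 *     // Request with a decoded reply: the caller owns and must free 'buf'.
 *     dpif_netlink_vport_init(&request);
 *     request.cmd = OVS_VPORT_CMD_GET;
 *     request.dp_ifindex = dp_ifindex;
 *     request.port_no = port_no;
 *     error = dpif_netlink_vport_transact(&request, &reply, &buf);
 *     if (!error) {
 *         // 'reply.name', 'reply.options', etc. point into 'buf'.
 *         ofpbuf_delete(buf);
 *     }
 */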
/* Obtains information about the kernel vport named 'name' and stores it into
* '*reply' and '*bufp'. The caller must free '*bufp' when the reply is no
* longer needed ('reply' will contain pointers into '*bufp'). */
int
dpif_netlink_vport_get(const char *name, struct dpif_netlink_vport *reply,
struct ofpbuf **bufp)
{
struct dpif_netlink_vport request;
dpif_netlink_vport_init(&request);
request.cmd = OVS_VPORT_CMD_GET;
request.name = name;
return dpif_netlink_vport_transact(&request, reply, bufp);
}
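/* Illustrative sketch of a caller (hypothetical; the port name is made up):
 *
 *     struct dpif_netlink_vport vport;
 *     struct ofpbuf *buf;
 *
 *     if (!dpif_netlink_vport_get("vxlan0", &vport, &buf)) {
 *         // 'vport.type', 'vport.options', 'vport.upcall_pids', ... are
 *         // valid here and point into 'buf'.
 *         ofpbuf_delete(buf);
 *     }
 */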
/* Parses the contents of 'buf', which contains a "struct ovs_header" followed
* by Netlink attributes, into 'dp'. Returns 0 if successful, otherwise a
* positive errno value.
*
* 'dp' will contain pointers into 'buf', so the caller should not free 'buf'
* while 'dp' is still in use. */
static int
dpif_netlink_dp_from_ofpbuf(struct dpif_netlink_dp *dp,
                            const struct ofpbuf *buf)
{
static const struct nl_policy ovs_datapath_policy[] = {
[OVS_DP_ATTR_NAME] = { .type = NL_A_STRING, .max_len = IFNAMSIZ },
[OVS_DP_ATTR_STATS] = { NL_POLICY_FOR(struct ovs_dp_stats),
.optional = true },
[OVS_DP_ATTR_MEGAFLOW_STATS] = {
NL_POLICY_FOR(struct ovs_dp_megaflow_stats),
.optional = true },
[OVS_DP_ATTR_USER_FEATURES] = {
.type = NL_A_U32,
.optional = true },
[OVS_DP_ATTR_MASKS_CACHE_SIZE] = {
.type = NL_A_U32,
.optional = true },
};
dpif_netlink_dp_init(dp);
struct ofpbuf b = ofpbuf_const_initializer(buf->data, buf->size);
struct nlmsghdr *nlmsg = ofpbuf_try_pull(&b, sizeof *nlmsg);
struct genlmsghdr *genl = ofpbuf_try_pull(&b, sizeof *genl);
struct ovs_header *ovs_header = ofpbuf_try_pull(&b, sizeof *ovs_header);
struct nlattr *a[ARRAY_SIZE(ovs_datapath_policy)];
if (!nlmsg || !genl || !ovs_header
|| nlmsg->nlmsg_type != ovs_datapath_family
|| !nl_policy_parse(&b, 0, ovs_datapath_policy, a,
ARRAY_SIZE(ovs_datapath_policy))) {
return EINVAL;
}
dp->cmd = genl->cmd;
dp->dp_ifindex = ovs_header->dp_ifindex;
dp->name = nl_attr_get_string(a[OVS_DP_ATTR_NAME]);
if (a[OVS_DP_ATTR_STATS]) {
dp->stats = nl_attr_get(a[OVS_DP_ATTR_STATS]);
}
if (a[OVS_DP_ATTR_MEGAFLOW_STATS]) {
dp->megaflow_stats = nl_attr_get(a[OVS_DP_ATTR_MEGAFLOW_STATS]);
}
if (a[OVS_DP_ATTR_USER_FEATURES]) {
dp->user_features = nl_attr_get_u32(a[OVS_DP_ATTR_USER_FEATURES]);
}
if (a[OVS_DP_ATTR_MASKS_CACHE_SIZE]) {
dp->cache_size = nl_attr_get_u32(a[OVS_DP_ATTR_MASKS_CACHE_SIZE]);
} else {
dp->cache_size = UINT32_MAX;
}
return 0;
}
/* Appends to 'buf' the Generic Netlink message described by 'dp'. */
static void
dpif_netlink_dp_to_ofpbuf(const struct dpif_netlink_dp *dp, struct ofpbuf *buf)
{
struct ovs_header *ovs_header;
nl_msg_put_genlmsghdr(buf, 0, ovs_datapath_family,
NLM_F_REQUEST | NLM_F_ECHO, dp->cmd,
OVS_DATAPATH_VERSION);
ovs_header = ofpbuf_put_uninit(buf, sizeof *ovs_header);
ovs_header->dp_ifindex = dp->dp_ifindex;
if (dp->name) {
nl_msg_put_string(buf, OVS_DP_ATTR_NAME, dp->name);
}
if (dp->upcall_pid) {
nl_msg_put_u32(buf, OVS_DP_ATTR_UPCALL_PID, *dp->upcall_pid);
}
if (dp->user_features) {
nl_msg_put_u32(buf, OVS_DP_ATTR_USER_FEATURES, dp->user_features);
}
if (dp->upcall_pids) {
nl_msg_put_unspec(buf, OVS_DP_ATTR_PER_CPU_PIDS, dp->upcall_pids,
sizeof *dp->upcall_pids * dp->n_upcall_pids);
}
if (dp->cache_size != UINT32_MAX) {
nl_msg_put_u32(buf, OVS_DP_ATTR_MASKS_CACHE_SIZE, dp->cache_size);
}
/* Skip OVS_DP_ATTR_STATS since we never have a reason to serialize it. */
}
/* Clears 'dp' to "empty" values. */
static void
dpif_netlink_dp_init(struct dpif_netlink_dp *dp)
{
memset(dp, 0, sizeof *dp);
dp->cache_size = UINT32_MAX;
}
static void
dpif_netlink_dp_dump_start(struct nl_dump *dump)
{
struct dpif_netlink_dp request;
struct ofpbuf *buf;
dpif_netlink_dp_init(&request);
request.cmd = OVS_DP_CMD_GET;
buf = ofpbuf_new(1024);
dpif_netlink_dp_to_ofpbuf(&request, buf);
nl_dump_start(dump, NETLINK_GENERIC, buf);
ofpbuf_delete(buf);
}
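/* Illustrative sketch of a full dump loop built on the helper above
 * (hypothetical caller; it mirrors dpif_netlink_unixctl_dispatch_mode()
 * at the end of this file):
 *
 *     struct nl_dump dump;
 *     uint64_t stub[NL_DUMP_BUFSIZE / 8];
 *     struct ofpbuf msg, buf;
 *     int error;
 *
 *     ofpbuf_use_stub(&buf, stub, sizeof stub);
 *     dpif_netlink_dp_dump_start(&dump);
 *     while (nl_dump_next(&dump, &msg, &buf)) {
 *         struct dpif_netlink_dp dp;
 *
 *         if (!dpif_netlink_dp_from_ofpbuf(&dp, &msg)) {
 *             // 'dp.name', 'dp.user_features', ... are valid only for
 *             // this iteration.
 *         }
 *     }
 *     ofpbuf_uninit(&buf);
 *     error = nl_dump_done(&dump);
 */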
/* Executes 'request' in the kernel datapath. If the command fails, returns a
* positive errno value. Otherwise, if 'reply' and 'bufp' are null, returns 0
* without doing anything else. If 'reply' and 'bufp' are nonnull, then the
* result of the command is expected to be of the same form, which is decoded
* and stored in '*reply' and '*bufp'. The caller must free '*bufp' when the
* reply is no longer needed ('reply' will contain pointers into '*bufp'). */
static int
dpif_netlink_dp_transact(const struct dpif_netlink_dp *request,
struct dpif_netlink_dp *reply, struct ofpbuf **bufp)
{
struct ofpbuf *request_buf;
int error;
ovs_assert((reply != NULL) == (bufp != NULL));
request_buf = ofpbuf_new(1024);
dpif_netlink_dp_to_ofpbuf(request, request_buf);
error = nl_transact(NETLINK_GENERIC, request_buf, bufp);
ofpbuf_delete(request_buf);
if (reply) {
dpif_netlink_dp_init(reply);
if (!error) {
error = dpif_netlink_dp_from_ofpbuf(reply, *bufp);
}
if (error) {
ofpbuf_delete(*bufp);
*bufp = NULL;
}
}
return error;
}
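/* Illustrative sketch (hypothetical caller, not part of this file): a
 * "fire and forget" OVS_DP_CMD_SET that resizes the kernel's masks cache,
 * passing NULL for 'reply' and 'bufp' because the echoed datapath is not
 * needed.  'dpif' is assumed to be a struct dpif_netlink *:
 *
 *     struct dpif_netlink_dp request;
 *     int error;
 *
 *     dpif_netlink_dp_init(&request);
 *     request.cmd = OVS_DP_CMD_SET;
 *     request.dp_ifindex = dpif->dp_ifindex;
 *     request.cache_size = 2048;              // example value
 *     error = dpif_netlink_dp_transact(&request, NULL, NULL);
 */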
/* Obtains information about 'dpif_' and stores it into '*reply' and '*bufp'.
* The caller must free '*bufp' when the reply is no longer needed ('reply'
* will contain pointers into '*bufp'). */
static int
dpif_netlink_dp_get(const struct dpif *dpif_, struct dpif_netlink_dp *reply,
struct ofpbuf **bufp)
{
struct dpif_netlink *dpif = dpif_netlink_cast(dpif_);
struct dpif_netlink_dp request;
dpif_netlink_dp_init(&request);
request.cmd = OVS_DP_CMD_GET;
request.dp_ifindex = dpif->dp_ifindex;
return dpif_netlink_dp_transact(&request, reply, bufp);
}
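/* Illustrative sketch of using the reply (hypothetical caller):
 *
 *     struct dpif_netlink_dp dp;
 *     struct ofpbuf *buf;
 *
 *     if (!dpif_netlink_dp_get(dpif_, &dp, &buf)) {
 *         // 'dp.stats' and 'dp.megaflow_stats', when present, point into
 *         // 'buf'; 'dp.user_features' and 'dp.cache_size' are plain
 *         // integers.
 *         ofpbuf_delete(buf);
 *     }
 */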
/* Parses the contents of 'buf', which contains a "struct ovs_header" followed
* by Netlink attributes, into 'flow'. Returns 0 if successful, otherwise a
* positive errno value.
*
* 'flow' will contain pointers into 'buf', so the caller should not free 'buf'
* while 'flow' is still in use. */
static int
dpif_netlink_flow_from_ofpbuf(struct dpif_netlink_flow *flow,
const struct ofpbuf *buf)
{
static const struct nl_policy ovs_flow_policy[__OVS_FLOW_ATTR_MAX] = {
[OVS_FLOW_ATTR_KEY] = { .type = NL_A_NESTED, .optional = true },
[OVS_FLOW_ATTR_MASK] = { .type = NL_A_NESTED, .optional = true },
[OVS_FLOW_ATTR_ACTIONS] = { .type = NL_A_NESTED, .optional = true },
[OVS_FLOW_ATTR_STATS] = { NL_POLICY_FOR(struct ovs_flow_stats),
.optional = true },
[OVS_FLOW_ATTR_TCP_FLAGS] = { .type = NL_A_U8, .optional = true },
[OVS_FLOW_ATTR_USED] = { .type = NL_A_U64, .optional = true },
[OVS_FLOW_ATTR_UFID] = { .type = NL_A_U128, .optional = true },
/* The kernel never uses OVS_FLOW_ATTR_CLEAR. */
/* The kernel never uses OVS_FLOW_ATTR_PROBE. */
/* The kernel never uses OVS_FLOW_ATTR_UFID_FLAGS. */
};
dpif_netlink_flow_init(flow);
struct ofpbuf b = ofpbuf_const_initializer(buf->data, buf->size);
struct nlmsghdr *nlmsg = ofpbuf_try_pull(&b, sizeof *nlmsg);
struct genlmsghdr *genl = ofpbuf_try_pull(&b, sizeof *genl);
struct ovs_header *ovs_header = ofpbuf_try_pull(&b, sizeof *ovs_header);
struct nlattr *a[ARRAY_SIZE(ovs_flow_policy)];
if (!nlmsg || !genl || !ovs_header
|| nlmsg->nlmsg_type != ovs_flow_family
|| !nl_policy_parse(&b, 0, ovs_flow_policy, a,
ARRAY_SIZE(ovs_flow_policy))) {
return EINVAL;
}
if (!a[OVS_FLOW_ATTR_KEY] && !a[OVS_FLOW_ATTR_UFID]) {
return EINVAL;
}
flow->nlmsg_flags = nlmsg->nlmsg_flags;
flow->dp_ifindex = ovs_header->dp_ifindex;
if (a[OVS_FLOW_ATTR_KEY]) {
flow->key = nl_attr_get(a[OVS_FLOW_ATTR_KEY]);
flow->key_len = nl_attr_get_size(a[OVS_FLOW_ATTR_KEY]);
}
if (a[OVS_FLOW_ATTR_UFID]) {
flow->ufid = nl_attr_get_u128(a[OVS_FLOW_ATTR_UFID]);
flow->ufid_present = true;
}
if (a[OVS_FLOW_ATTR_MASK]) {
flow->mask = nl_attr_get(a[OVS_FLOW_ATTR_MASK]);
flow->mask_len = nl_attr_get_size(a[OVS_FLOW_ATTR_MASK]);
}
if (a[OVS_FLOW_ATTR_ACTIONS]) {
flow->actions = nl_attr_get(a[OVS_FLOW_ATTR_ACTIONS]);
flow->actions_len = nl_attr_get_size(a[OVS_FLOW_ATTR_ACTIONS]);
}
if (a[OVS_FLOW_ATTR_STATS]) {
flow->stats = nl_attr_get(a[OVS_FLOW_ATTR_STATS]);
}
if (a[OVS_FLOW_ATTR_TCP_FLAGS]) {
flow->tcp_flags = nl_attr_get(a[OVS_FLOW_ATTR_TCP_FLAGS]);
}
if (a[OVS_FLOW_ATTR_USED]) {
flow->used = nl_attr_get(a[OVS_FLOW_ATTR_USED]);
}
return 0;
}
/*
 * Appends 'data' to 'buf' as a Netlink attribute of type 'type'.  If an
 * OVS_KEY_ATTR_PACKET_TYPE attribute is present in 'data', it is filtered
 * out; if, in addition, the flow is not Ethernet, the packet type is
 * re-encoded as an OVS_KEY_ATTR_ETHERTYPE attribute, since the kernel
 * datapath does not understand OVS_KEY_ATTR_PACKET_TYPE.
*/
static void
put_exclude_packet_type(struct ofpbuf *buf, uint16_t type,
const struct nlattr *data, uint16_t data_len)
{
const struct nlattr *packet_type;
packet_type = nl_attr_find__(data, data_len, OVS_KEY_ATTR_PACKET_TYPE);
if (packet_type) {
/* exclude PACKET_TYPE Netlink attribute. */
ovs_assert(NLA_ALIGN(packet_type->nla_len) == NL_A_U32_SIZE);
size_t packet_type_len = NL_A_U32_SIZE;
size_t first_chunk_size = (uint8_t *)packet_type - (uint8_t *)data;
size_t second_chunk_size = data_len - first_chunk_size
- packet_type_len;
struct nlattr *next_attr = nl_attr_next(packet_type);
size_t ofs;
ofs = nl_msg_start_nested(buf, type);
nl_msg_put(buf, data, first_chunk_size);
nl_msg_put(buf, next_attr, second_chunk_size);
if (!nl_attr_find__(data, data_len, OVS_KEY_ATTR_ETHERNET)) {
ovs_be16 pt = pt_ns_type_be(nl_attr_get_be32(packet_type));
const struct nlattr *nla;
nla = nl_attr_find(buf, ofs + NLA_HDRLEN, OVS_KEY_ATTR_ETHERTYPE);
if (nla) {
ovs_be16 *ethertype;
ethertype = CONST_CAST(ovs_be16 *, nl_attr_get(nla));
*ethertype = pt;
} else {
nl_msg_put_be16(buf, OVS_KEY_ATTR_ETHERTYPE, pt);
}
}
nl_msg_end_nested(buf, ofs);
} else {
nl_msg_put_unspec(buf, type, data, data_len);
}
}
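/* Worked example (hypothetical attribute layout, for illustration only):
 * given a non-Ethernet flow key
 *
 *     'data':  IN_PORT | PACKET_TYPE(ns=OFPHTN_ETHERTYPE, type=0x0800)
 *              | IPV4 | ...
 *
 * the attribute of type 'type' appended to 'buf' contains
 *
 *     IN_PORT | IPV4 | ... | ETHERTYPE(0x0800)
 *
 * i.e. OVS_KEY_ATTR_PACKET_TYPE is spliced out and, because no
 * OVS_KEY_ATTR_ETHERNET attribute is present, the type half of the
 * (namespace, type) pair reappears as OVS_KEY_ATTR_ETHERTYPE. */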
/* Appends to 'buf' (which must initially be empty) a "struct ovs_header"
* followed by Netlink attributes corresponding to 'flow'. */
static void
dpif_netlink_flow_to_ofpbuf(const struct dpif_netlink_flow *flow,
struct ofpbuf *buf)
{
struct ovs_header *ovs_header;
nl_msg_put_genlmsghdr(buf, 0, ovs_flow_family,
NLM_F_REQUEST | flow->nlmsg_flags,
flow->cmd, OVS_FLOW_VERSION);
ovs_header = ofpbuf_put_uninit(buf, sizeof *ovs_header);
ovs_header->dp_ifindex = flow->dp_ifindex;
if (flow->ufid_present) {
nl_msg_put_u128(buf, OVS_FLOW_ATTR_UFID, flow->ufid);
}
if (flow->ufid_terse) {
nl_msg_put_u32(buf, OVS_FLOW_ATTR_UFID_FLAGS,
OVS_UFID_F_OMIT_KEY | OVS_UFID_F_OMIT_MASK
| OVS_UFID_F_OMIT_ACTIONS);
}
if (!flow->ufid_terse || !flow->ufid_present) {
if (flow->key_len) {
put_exclude_packet_type(buf, OVS_FLOW_ATTR_KEY, flow->key,
flow->key_len);
}
if (flow->mask_len) {
put_exclude_packet_type(buf, OVS_FLOW_ATTR_MASK, flow->mask,
flow->mask_len);
}
if (flow->actions || flow->actions_len) {
nl_msg_put_unspec(buf, OVS_FLOW_ATTR_ACTIONS,
flow->actions, flow->actions_len);
}
}
/* We never need to send these to the kernel. */
ovs_assert(!flow->stats);
ovs_assert(!flow->tcp_flags);
ovs_assert(!flow->used);
if (flow->clear) {
nl_msg_put_flag(buf, OVS_FLOW_ATTR_CLEAR);
}
if (flow->probe) {
nl_msg_put_flag(buf, OVS_FLOW_ATTR_PROBE);
}
}
/* Clears 'flow' to "empty" values. */
static void
dpif_netlink_flow_init(struct dpif_netlink_flow *flow)
{
memset(flow, 0, sizeof *flow);
}
/* Executes 'request' in the kernel datapath. If the command fails, returns a
* positive errno value. Otherwise, if 'reply' and 'bufp' are null, returns 0
* without doing anything else. If 'reply' and 'bufp' are nonnull, then the
* result of the command is expected to be a flow also, which is decoded and
* stored in '*reply' and '*bufp'. The caller must free '*bufp' when the reply
* is no longer needed ('reply' will contain pointers into '*bufp'). */
static int
dpif_netlink_flow_transact(struct dpif_netlink_flow *request,
struct dpif_netlink_flow *reply,
struct ofpbuf **bufp)
{
struct ofpbuf *request_buf;
int error;
ovs_assert((reply != NULL) == (bufp != NULL));
if (reply) {
request->nlmsg_flags |= NLM_F_ECHO;
}
request_buf = ofpbuf_new(1024);
dpif_netlink_flow_to_ofpbuf(request, request_buf);
error = nl_transact(NETLINK_GENERIC, request_buf, bufp);
ofpbuf_delete(request_buf);
if (reply) {
if (!error) {
error = dpif_netlink_flow_from_ofpbuf(reply, *bufp);
}
if (error) {
dpif_netlink_flow_init(reply);
ofpbuf_delete(*bufp);
*bufp = NULL;
}
}
return error;
}
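/* Illustrative sketch (hypothetical caller, not part of this file): a terse,
 * UFID-based flow lookup followed by stats extraction.  'dp_ifindex' and
 * 'ufid' (an ovs_u128 *) are assumed to be already known:
 *
 *     struct dpif_netlink_flow request, reply;
 *     struct dpif_flow_stats stats;
 *     struct ofpbuf *buf;
 *
 *     dpif_netlink_flow_init(&request);
 *     request.cmd = OVS_FLOW_CMD_GET;
 *     request.dp_ifindex = dp_ifindex;
 *     request.ufid = *ufid;
 *     request.ufid_present = true;
 *     request.ufid_terse = true;
 *     if (!dpif_netlink_flow_transact(&request, &reply, &buf)) {
 *         dpif_netlink_flow_get_stats(&reply, &stats);
 *         ofpbuf_delete(buf);
 *     }
 */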
static void
dpif_netlink_flow_get_stats(const struct dpif_netlink_flow *flow,
struct dpif_flow_stats *stats)
{
if (flow->stats) {
stats->n_packets = get_32aligned_u64(&flow->stats->n_packets);
stats->n_bytes = get_32aligned_u64(&flow->stats->n_bytes);
} else {
stats->n_packets = 0;
stats->n_bytes = 0;
}
stats->used = flow->used ? get_32aligned_u64(flow->used) : 0;
stats->tcp_flags = flow->tcp_flags ? *flow->tcp_flags : 0;
}
/* Logs information about a packet that was recently lost in 'ch' (in
 * 'dpif'). */
static void
report_loss(struct dpif_netlink *dpif, struct dpif_channel *ch, uint32_t ch_idx,
uint32_t handler_id)
{
static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5);
struct ds s;
if (VLOG_DROP_WARN(&rl)) {
return;
}
if (dpif_netlink_upcall_per_cpu(dpif)) {
VLOG_WARN("%s: lost packet on handler %u",
dpif_name(&dpif->dpif), handler_id);
} else {
ds_init(&s);
if (ch->last_poll != LLONG_MIN) {
ds_put_format(&s, " (last polled %lld ms ago)",
time_msec() - ch->last_poll);
}
VLOG_WARN("%s: lost packet on port channel %u of handler %u%s",
dpif_name(&dpif->dpif), ch_idx, handler_id, ds_cstr(&s));
ds_destroy(&s);
}
}
static void
dpif_netlink_unixctl_dispatch_mode(struct unixctl_conn *conn,
int argc OVS_UNUSED,
const char *argv[] OVS_UNUSED,
void *aux OVS_UNUSED)
{
struct ds reply = DS_EMPTY_INITIALIZER;
struct nl_dump dump;
uint64_t reply_stub[NL_DUMP_BUFSIZE / 8];
struct ofpbuf msg, buf;
int error;
error = dpif_netlink_init();
if (error) {
return;
}
ofpbuf_use_stub(&buf, reply_stub, sizeof reply_stub);
dpif_netlink_dp_dump_start(&dump);
while (nl_dump_next(&dump, &msg, &buf)) {
struct dpif_netlink_dp dp;
if (!dpif_netlink_dp_from_ofpbuf(&dp, &msg)) {
ds_put_format(&reply, "%s: ", dp.name);
if (dp.user_features & OVS_DP_F_DISPATCH_UPCALL_PER_CPU) {
ds_put_format(&reply, "per-cpu dispatch mode");
} else {
ds_put_format(&reply, "per-vport dispatch mode");
}
ds_put_format(&reply, "\n");
}
}
ofpbuf_uninit(&buf);
error = nl_dump_done(&dump);
if (!error) {
unixctl_command_reply(conn, ds_cstr(&reply));
}
ds_destroy(&reply);
}
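/* The handler above is normally reached through ovs-appctl; assuming it is
 * registered as "dpif-netlink/dispatch-mode" (the registration lives
 * elsewhere in this file) and a datapath named "ovs-system" exists, a
 * typical invocation would look like:
 *
 *     $ ovs-appctl dpif-netlink/dispatch-mode
 *     ovs-system: per-cpu dispatch mode
 */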