There are cases where users might want simple forwarding or drop rules for
all packets received from a specific port, e.g.:

  "in_port=1,actions=2"
  "in_port=2,actions=IN_PORT"
  "in_port=3,vlan_tci=0x1234/0x1fff,actions=drop"
  "in_port=4,actions=push_vlan:0x8100,set_field:4196->vlan_vid,output:3"

There are also cases where complex OpenFlow rules can be simplified down to
datapath flows with very simple match criteria.

In theory, for very simple forwarding, OVS doesn't need to parse packets at
all in order to follow these rules.  "Simple match" lookup optimization is
intended to speed up packet forwarding in these cases.

Design:

Due to various implementation constraints the userspace datapath has the
following flow fields always in exact match (i.e. it is required to match at
least these fields of a packet even if the OF rule doesn't need that):

  - recirc_id
  - in_port
  - packet_type
  - dl_type
  - vlan_tci (CFI + VID) - in most cases
  - nw_frag - for IP packets

Not all of these fields are related to the packet itself.  We already know
the current 'recirc_id' and the 'in_port' before starting the packet
processing.  It also seems safe to assume that we're working with Ethernet
packets.  So, for a simple OF rule we need to match only on 'dl_type',
'vlan_tci' and 'nw_frag'.

'in_port', 'dl_type', 'nw_frag' and 13 bits of 'vlan_tci' can be combined
into a single 64-bit integer (mark) that can be used as a hash in a hash map.
Only VID and CFI are used from the 'vlan_tci'; flows that need to match on
PCP will not qualify for the optimization.  The workaround for matching on
the non-existence of a VLAN is updated to match on CFI and VID only in order
to qualify for the optimization.  CFI is always set by OVS if a VLAN is
present in a packet, so there is no need to match on PCP in this case.
'nw_frag' takes the place of the 2 PCP bits inside the simple match mark.

A new per-PMD flow table 'simple_match_table' is introduced to store simple
match flows only.  'dp_netdev_flow_add' adds the flow to the usual
'flow_table' and also to the 'simple_match_table' if the flow meets the
following constraints:

  - 'recirc_id' in the flow match is 0.
  - 'packet_type' in the flow match is Ethernet.
  - Flow wildcards contain only the minimal set of non-wildcarded fields
    (listed above).

If the number of flows for the current 'in_port' in the regular 'flow_table'
equals the number of flows for the current 'in_port' in the
'simple_match_table', we may use the simple match optimization, because all
the flows we have are simple match flows.  This means that we only need to
parse 'dl_type', 'vlan_tci' and 'nw_frag' to perform packet matching.  We
then build the unique flow mark from the 'in_port', 'dl_type', 'nw_frag' and
'vlan_tci' and look it up in the 'simple_match_table'.  On a successful
lookup we don't need to run the full 'miniflow_extract()'.

An unsuccessful lookup technically means that we have no suitable flow in
the datapath and an upcall will be required.  So, in this case EMC and SMC
lookups are disabled.  We may optimize this path in the future by bypassing
the dpcls lookup too.

The performance improvement of this solution on 'simple match' flows should
be comparable with partial HW offloading, because it parses the same packet
fields and uses a similar flow lookup scheme.  However, unlike partial HW
offloading, it works for all port types including virtual ones.

Performance results when compared to EMC:

Test setup:

 virtio-user             OVS                  virtio-user
 Testpmd1  ------------>  pmd1  ------------>  Testpmd2
 (txonly)             x<------  pmd2  <------------  (mac swap)

Single stream of 64-byte packets.  Actions:

  in_port=vhost0,actions=vhost1
  in_port=vhost1,actions=vhost0

Stats collected from pmd1 and pmd2, so there are 2 scenarios:

 Virt-to-Virt   : Testpmd1 ------> pmd1 ------> Testpmd2.
 Virt-to-NoCopy : Testpmd2 ------> pmd2 --->x   Testpmd1.

Here the packet sent from pmd2 to Testpmd1 is always dropped, because the
virtqueue is full since Testpmd1 is in txonly mode and doesn't receive any
packets.  This should be closer to the performance of a VM-to-Phy scenario.

Test performed on a machine with an Intel Xeon CPU E5-2690 v4 @ 2.60GHz.
The table below shows the improvement in throughput when compared to EMC:

 +----------------+------------------------+------------------------+
 |                |    Default (-g -O2)    | "-Ofast -march=native" |
 |    Scenario    +------------+-----------+------------+-----------+
 |                |    GCC     |   Clang   |    GCC     |   Clang   |
 +----------------+------------+-----------+------------+-----------+
 | Virt-to-Virt   |   +18.9%   |  +25.5%   |   +10.8%   |  +16.7%   |
 | Virt-to-NoCopy |   +24.3%   |  +33.7%   |   +14.9%   |  +22.0%   |
 +----------------+------------+-----------+------------+-----------+

For the Phy-to-Phy case the performance improvement should be even higher,
but it is not the main use case for this functionality.  The performance
difference for non-simple flows is within the margin of error.

Acked-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
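
As an illustration of the mark-packing idea described above (a minimal
sketch only, not the actual OVS implementation; the exact bit layout is an
assumption made for this example), the four exact-match inputs could be
folded into a single 64-bit mark roughly like this:

    #include <stdint.h>

    /* Sketch: pack the per-packet exact-match inputs into one 64-bit
     * "simple match" mark that can be used directly as a hash-map key.
     * Assumed layout (63 bits used):
     *   bits  0..31  in_port   (datapath port number)
     *   bits 32..47  dl_type
     *   bits 48..60  vlan_tci  (CFI + VID only, PCP excluded)
     *   bits 61..62  nw_frag
     */
    static inline uint64_t
    simple_match_mark(uint32_t in_port, uint16_t dl_type,
                      uint16_t vlan_tci, uint8_t nw_frag)
    {
        return (uint64_t) in_port
               | ((uint64_t) dl_type << 32)
               | ((uint64_t) (vlan_tci & 0x1fff) << 48)   /* CFI + VID */
               | ((uint64_t) (nw_frag & 0x3) << 61);
    }

    /* A mark built this way for an incoming packet can be looked up in
     * 'simple_match_table'; on a hit, the full miniflow_extract() is
     * skipped. */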
.SS "DPIF-NETDEV COMMANDS"
These commands are used to expose internal information (mostly statistics)
about the "dpif-netdev" userspace datapath. If there is only one datapath
(as is often the case, unless \fBdpctl/\fR commands are used), the \fIdp\fR
argument can be omitted. By default the commands present data for all pmd
threads in the datapath. By specifying the \fB-pmd\fR \fIcore\fR option one
can filter the output for a single pmd in the datapath.
.
.IP "\fBdpif-netdev/pmd-stats-show\fR [\fB-pmd\fR \fIcore\fR] [\fIdp\fR]"
Shows performance statistics for one or all pmd threads of the datapath
\fIdp\fR. The special thread "main" sums up the statistics of every non-pmd
thread.

The sum of "phwol hits", "simple match hits", "emc hits", "smc hits",
"megaflow hits" and "miss" is the number of packet lookups performed by the
datapath. Beware that a recirculated packet experiences one additional lookup
per recirculation, so there may be more lookups than forwarded packets in the
datapath.

"MFEX Opt hits" displays the number of packets that are processed by the
optimized miniflow extract implementations.

Cycles are counted using the TSC or similar facilities (when available on
the platform). The duration of one cycle depends on the processing platform.

"idle cycles" refers to cycles spent in PMD iterations not forwarding any
packets. "processing cycles" refers to cycles spent in PMD iterations
forwarding at least one packet, including the cost for polling, processing and
transmitting said packets.

To reset these counters use \fBdpif-netdev/pmd-stats-clear\fR.
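
For example, assuming \fBovs-appctl\fR can reach the running
\fBovs-vswitchd\fR, the statistics of the pmd on core 4 (an illustrative core
number) could be inspected and then reset with:
.RS
.IP
.EX
ovs-appctl dpif-netdev/pmd-stats-show -pmd 4
ovs-appctl dpif-netdev/pmd-stats-clear
.EE
.RE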
.
.IP "\fBdpif-netdev/pmd-stats-clear\fR [\fIdp\fR]"
Resets to zero the per pmd thread performance numbers shown by the
\fBdpif-netdev/pmd-stats-show\fR and \fBdpif-netdev/pmd-perf-show\fR commands.
It will NOT reset datapath or bridge statistics, only the values shown by
the above commands.
.
.IP "\fBdpif-netdev/pmd-perf-show\fR [\fB-nh\fR] [\fB-it\fR \fIiter_len\fR] \
[\fB-ms\fR \fIms_len\fR] [\fB-pmd\fR \fIcore\fR] [\fIdp\fR]"
Shows detailed performance metrics for one or all pmd threads of the
user space datapath.

The collection of detailed statistics is controlled by the configuration
parameter "other_config:pmd-perf-metrics". By default it is disabled. The
run-time overhead, when enabled, is in the order of 1%.
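
For example, assuming a standard setup, detailed metrics collection could be
enabled and the metrics for the pmd on core 1 (an illustrative core number)
displayed with:
.RS
.IP
.EX
ovs-vsctl set Open_vSwitch . other_config:pmd-perf-metrics=true
ovs-appctl dpif-netdev/pmd-perf-show -nh -pmd 1
.EE
.RE
.IP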
The covered metrics per iteration are:
.RS
.IP
.PD .4v
.IP \(em
used cycles
.IP \(em
forwarded packets
.IP \(em
number of rx batches
.IP \(em
packets/rx batch
.IP \(em
max. vhostuser queue fill level
.IP \(em
number of upcalls
.IP \(em
cycles spent in upcalls
.PD
.RE
.IP
This raw recorded data is used threefold:

.RS
.IP
.PD .4v
.IP 1.
In histograms for each of the following metrics:
.RS
.IP \(em
cycles/iteration (logarithmic)
.IP \(em
packets/iteration (logarithmic)
.IP \(em
cycles/packet
.IP \(em
packets/batch
.IP \(em
max. vhostuser qlen (logarithmic)
.IP \(em
upcalls
.IP \(em
cycles/upcall (logarithmic)
The histogram bins are divided either linearly or logarithmically, as
indicated above.
.RE
.IP 2.
A cyclic history of the above metrics for 1024 iterations
.IP 3.
A cyclic history of the cumulative/average values per millisecond wall
clock for the last 1024 milliseconds:
.RS
.IP \(em
number of iterations
.IP \(em
avg. cycles/iteration
.IP \(em
packets (Kpps)
.IP \(em
avg. packets/batch
.IP \(em
avg. max vhost qlen
.IP \(em
upcalls
.IP \(em
avg. cycles/upcall
.RE
.PD
.RE
.IP
.
The command options are:
.RS
.IP "\fB-nh\fR"
Suppress the histograms.
.IP "\fB-it\fR \fIiter_len\fR"
Display the last \fIiter_len\fR iteration stats.
.IP "\fB-ms\fR \fIms_len\fR"
Display the last \fIms_len\fR millisecond stats.
.RE
.IP
The output always contains the following global PMD statistics:
.RS
.IP
.EX
Time: 15:24:55.270
Measurement duration: 1.008 s

pmd thread numa_id 0 core_id 1:

  Iterations:                572817  (1.76 us/it)
  - Used TSC cycles:     2419034712  ( 99.9 % of total cycles)
  - idle iterations:         486808  ( 15.9 % of used cycles)
  - busy iterations:          86009  ( 84.1 % of used cycles)
  Rx packets:               2399607  (2381 Kpps, 848 cycles/pkt)
  Datapath passes:          3599415  (1.50 passes/pkt)
  - PHWOL hits:                   0  (  0.0 %)
  - MFEX Opt hits:          3570133  ( 99.2 %)
  - Simple Match hits:            0  (  0.0 %)
  - EMC hits:                336472  (  9.3 %)
  - SMC hits:                     0  (  0.0 %)
  - Megaflow hits:          3262943  ( 90.7 %, 1.00 subtbl lookups/hit)
  - Upcalls:                      0  (  0.0 %, 0.0 us/upcall)
  - Lost upcalls:                 0  (  0.0 %)
  Tx packets:               2399607  (2381 Kpps)
  Tx batches:                171400  (14.00 pkts/batch)
.EE
.RE
.IP
Here "Rx packets" actually reflects the number of packets forwarded by the
datapath. "Datapath passes" matches the number of packet lookups as
reported by the \fBdpif-netdev/pmd-stats-show\fR command.

To reset the counters and start a new measurement use
\fBdpif-netdev/pmd-stats-clear\fR.
.
.IP "\fBdpif-netdev/pmd-perf-log-set\fR \fBon\fR|\fBoff\fR \
[\fB-b\fR \fIbefore\fR] [\fB-a\fR \fIafter\fR] [\fB-e\fR|\fB-ne\fR] \
[\fB-us\fR \fIusec\fR] [\fB-q\fR \fIqlen\fR]"
.
The userspace "netdev" datapath is able to supervise the PMD performance
metrics and detect iterations with suspicious statistics according to the
following criteria:
.RS
.IP \(em
The iteration lasts longer than \fIusec\fR microseconds (default 250).
This can be used to capture events where a PMD is blocked or interrupted for
such a period of time that there is a risk for dropped packets on any of its
Rx queues.
.IP \(em
The max vhost qlen exceeds a threshold \fIqlen\fR (default 128). This can be
used to infer virtio queue overruns and dropped packets inside a VM, which
are not visible in OVS otherwise.
.RE
.IP
Such suspicious iterations can be logged together with their iteration
statistics in the \fBovs-vswitchd.log\fR to be able to correlate them to
packet drops or other events outside OVS.

The above command enables (\fBon\fR) or disables (\fBoff\fR) supervision and
logging at run-time and can be used to adjust the above thresholds for
detecting suspicious iterations. By default supervision and logging are
disabled.

The command options are:
.RS
.IP "\fB-b\fR \fIbefore\fR"
The number of iterations before the suspicious iteration to be logged
(default 5).
.IP "\fB-a\fR \fIafter\fR"
The number of iterations after the suspicious iteration to be logged
(default 5).
.IP "\fB-e\fR"
Extend the logging interval if another suspicious iteration is detected
before logging occurs.
.IP "\fB-ne\fR"
Do not extend the logging interval if another suspicious iteration is
detected before logging occurs (default).
.IP "\fB-q\fR \fIqlen\fR"
Suspicious vhost queue fill level threshold. Increase this to 512 if QEMU
supports a virtio queue length of 1024 (default 128).
.IP "\fB-us\fR \fIusec\fR"
Change the duration threshold for a suspicious iteration (default 250 us).
.RE
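.IP
For example, the following (with illustrative threshold values) enables
supervision, logs 10 iterations before and after each suspicious iteration,
extends the logging interval on consecutive events and raises the duration
threshold to 400 us:
.RS
.IP
.EX
ovs-appctl dpif-netdev/pmd-perf-log-set on -b 10 -a 10 -e -us 400
.EE
.RE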

Note: Logging of suspicious iterations itself consumes a considerable amount
of processing cycles of a PMD, which may be visible in the iteration history.
In the worst case this can lead OVS to detect another suspicious iteration
caused by the logging itself.

If more than 100 iterations around a suspicious iteration have been logged
once, OVS falls back to the safe default values (-b 5 -a 5 -ne) to prevent
the logging itself from continuously triggering logging of further suspicious
iterations.
.
.IP "\fBdpif-netdev/pmd-rxq-show\fR [\fB-pmd\fR \fIcore\fR] [\fIdp\fR]"
For one or all pmd threads of the datapath \fIdp\fR show the list of queue-ids
and port names that the thread polls.
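
For example, assuming a running pmd on core 4 (an illustrative core number),
its rx queue assignments can be listed with:
.RS
.IP
.EX
ovs-appctl dpif-netdev/pmd-rxq-show -pmd 4
.EE
.RE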
.
.IP "\fBdpif-netdev/pmd-rxq-rebalance\fR [\fIdp\fR]"
Reassigns rxqs to pmds in the datapath \fIdp\fR based on their current usage.
.
.IP "\fBdpif-netdev/bond-show\fR [\fIdp\fR]"
When "other_config:lb-output-action" is set to "true", the userspace datapath
handles the load balancing of bonds directly instead of depending on flow
recirculation (only in balance-tcp mode).

When this is the case, the above command prints the load-balancing information
of the bonds configured in datapath \fIdp\fR, showing the interface associated
with each bucket (hash).
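
For example, assuming a balance-tcp bond port named \fBbond0\fR (an
illustrative name), load balancing can be moved into the datapath and the
resulting hash-to-member mapping displayed with:
.RS
.IP
.EX
ovs-vsctl set port bond0 other_config:lb-output-action=true
ovs-appctl dpif-netdev/bond-show
.EE
.RE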
.
.IP "\fBdpif-netdev/subtable-lookup-prio-get\fR"
Lists the DPCLS implementations or lookup functions that are available, as
well as their priorities.
.
.IP "\fBdpif-netdev/subtable-lookup-prio-set\fR \fIlookup_function\fR \
\fIprio\fR"
Sets the priority of a lookup function by name, \fIlookup_function\fR, and
priority, \fIprio\fR, which should be a positive integer value. The highest
priority lookup function is used for classification.

The number of affected dpcls ports and subtables is returned.
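
For example, assuming a lookup function named \fBavx512_gather\fR is listed
by \fBdpif-netdev/subtable-lookup-prio-get\fR on the platform, it could be
given the highest priority with:
.RS
.IP
.EX
ovs-appctl dpif-netdev/subtable-lookup-prio-set avx512_gather 5
.EE
.RE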
.
.IP "\fBdpif-netdev/dpif-impl-get\fR"
Lists the DPIF implementations that are available.
.
.IP "\fBdpif-netdev/dpif-impl-set\fR \fIdpif_impl\fR"
Sets the DPIF implementation to \fIdpif_impl\fR. By default "dpif_scalar" is
used.
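
For example, assuming an implementation named \fBdpif_avx512\fR is listed by
\fBdpif-netdev/dpif-impl-get\fR, it could be selected with:
.RS
.IP
.EX
ovs-appctl dpif-netdev/dpif-impl-set dpif_avx512
.EE
.RE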
.
.IP "\fBdpif-netdev/miniflow-parser-get\fR"
Lists the miniflow extract implementations that are available.
.
.IP "\fBdpif-netdev/miniflow-parser-set\fR [\fB-pmd\fR \fIcore\fR] \
\fIminiflow_impl\fR [\fIstudy_cnt\fR]"
Sets the miniflow extract implementation to \fIminiflow_impl\fR for the
specified PMD, or for all PMDs if none is specified. By default "scalar" is
used. \fIstudy_cnt\fR defaults to 128 and indicates the number of packets
that the "study" miniflow implementation must parse before choosing an
optimal implementation.
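
For example, the "study" implementation mentioned above could be selected for
all PMDs, with 300 packets sampled before the optimal implementation is
chosen:
.RS
.IP
.EX
ovs-appctl dpif-netdev/miniflow-parser-set study 300
.EE
.RE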