Rename parse_vlan_push_action() to add_vlan_push_action(), as the old
name is inconsistent with the other add/parse action functions.
Seeing as it unconditionally returns 0, it might as well be changed
to return void too.
Also, a redundant return code check is removed.
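For illustration, the intended before/after shape of the function
(parameter list abridged/illustrative, not copied from the code):

    /* Before: */
    static int
    parse_vlan_push_action(struct flow_actions *actions,
                           const struct ovs_action_push_vlan *vlan_push);

    /* After: renamed to match the other add_*_action() helpers and made
     * void, since it cannot fail: */
    static void
    add_vlan_push_action(struct flow_actions *actions,
                         const struct ovs_action_push_vlan *vlan_push);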
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ilya Maximets <i.maximets@ovn.org>
Support for offloading tnl_push/output actions already exists; add
push_vlan.
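A rough sketch of the rte_flow actions a datapath push_vlan could
translate to ('vlan_push' is the datapath action struct, the tci
helpers come from lib/packets.h; this is illustrative, not the exact
OVS code):

    struct rte_flow_action_of_push_vlan push = {
        .ethertype = vlan_push->vlan_tpid,
    };
    struct rte_flow_action_of_set_vlan_vid vid = {
        .vlan_vid = htons(vlan_tci_to_vid(vlan_push->vlan_tci)),
    };
    struct rte_flow_action_of_set_vlan_pcp pcp = {
        .vlan_pcp = vlan_tci_to_pcp(vlan_push->vlan_tci),
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN,    .conf = &push },
        { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &vid },
        { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP, .conf = &pcp },
        /* ... followed by the existing output/tnl_push actions ... */
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };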
Signed-off-by: Allen Chen <allen.chen@jaguarmicro.com>
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
This commit adds support for DPDK v24.11.1.
It updates the CI script and documentation and includes the following
changes coming from the dpdk-latest branch:
- netdev-offload-dpdk: Fix build with v24.11-rc1.
https://patchwork.ozlabs.org/project/openvswitch/list/?series=428784&state=*
Acked-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Frode Nordahl <fnordahl@ubuntu.com>
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Add support for offloading packet matches on the ICMPv6 header
using the rte_flow API.
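A minimal sketch of such a match at the rte_flow level (values
illustrative, not the OVS helper code):

    struct rte_flow_item_icmp6 icmp6_spec = { .type = 135, .code = 0 };
    struct rte_flow_item_icmp6 icmp6_mask = { .type = 0xff, .code = 0xff };
    struct rte_flow_item items[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
        { .type = RTE_FLOW_ITEM_TYPE_ICMP6,     /* e.g. Neighbor Solicit. */
          .spec = &icmp6_spec, .mask = &icmp6_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };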
Signed-off-by: Allen Chen <allen.chen@jaguarmicro.com>
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Previously, when an attempt to offload a flow failed and the failure
was not an actions error, a warning was logged.
The exception for an actions error was there to allow full offload of
a flow to fail quietly, because it will fall back to partial offload.
There are some issues with this approach to logging.
Some NICs do not report an actions error, because other config in
the NIC may be checked first, e.g. in the case of Mellanox CX-5,
there can be different types of offload configured, so an unspecified
error may be returned.
More generally, enabling hw-offload is best effort per datapath/NIC/flow,
as full and partial offload support in NICs is variable and there is
always a fallback to software.
So there is likely to be repeated logging about flow offloads
failing. With this in mind, change the log level to debug.
The status of the offload can still be seen with below command:
$ ovs-appctl dpctl/dump-flows -m
... offloaded:partial ...
Also, remove some duplicated rate limiting and tidy up the success
and failure logs.
Reviewed-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Action PORT_ID has been deprecated. Use REPRESENTED_PORT instead.
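Sketch of the replacement ('dst_port_id' is illustrative):

    /* Previously: RTE_FLOW_ACTION_TYPE_PORT_ID
     *             + struct rte_flow_action_port_id. */
    struct rte_flow_action_ethdev ethdev = { .port_id = dst_port_id };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &ethdev },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };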
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Signed-off-by: Ivan Malov <ivan.malov@arknetworks.am>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
A vport's offloads are done on the tracked orig-in-port, but the flow
itself is associated with the vport's map.
Removing the physdev will flush all the flows that are in its map, but
not the ones in other netdevs' maps. Since flows take a reference count
on both their vport and their physdev, the physdev still has outstanding
references. Trying to remove it and re-add it then fails with an
"already in use" error.
Fix it by flushing the physdev's offloaded flows in all related netdevs,
e.g. the netdev itself, or for physical devices, all vports.
Fixes: adbd4301a249 ("netdev-offload-dpdk: Use per-netdev offload metadata.")
Reported-by: wuxi_seu@163.com
Acked-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Caught while reviewing code.
Fixes: aca2f8a8a6b6 ("netdev-offload-dpdk: Implement HW miss packet recover for vport.")
Fixes: 241bad15d99a ("dpif-netdev: associate flow with a mark id")
Acked-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
The offload thread calling ufid_to_rte_flow_disassociate() may be the
last one holding a reference on the netdev and physdev.
So displaying information about them might trigger a crash when
removing a physical port.
Fixes: faf71e492263 ("netdev-dpdk: Print port name in offload API messages.")
Acked-by: Mike Pattrick <mkp@redhat.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Following DPDK commit bd2a4d4b2e3a ("ethdev: forbid direction attribute
in transfer flow rules"), the presence of the ingress attribute is
rejected for transfer flows.
Fixes: a77c7796f23a ("dpdk: Update to use v22.11.1.")
Acked-by: Eli Britstein <elibr@nvidia.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Populate the 'is_ipv6' field of 'struct rte_flow_tunnel', which can
be used in the implementation of the tunnel pop action by a DPDK PMD.
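Sketch of the populated descriptor as passed to
rte_flow_tunnel_decap_set() (the VXLAN values and the 'tnl_dst_is_ipv6'
flag are illustrative):

    struct rte_flow_tunnel tunnel = {
        .type = RTE_FLOW_ITEM_TYPE_VXLAN,
        .tp_dst = RTE_BE16(4789),
        .is_ipv6 = tnl_dst_is_ipv6,   /* derived from the vport's tunnel
                                       * configuration; previously left 0. */
    };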
Fixes: be56e063d028 ("netdev-offload-dpdk: Support tunnel pop action.")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Louis Peens <louis.peens@corigine.com>
Acked-by: Eli Britstein <elibr@nvidia.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Same as with arpa/inet.h, netinet/ip6.h on FreeBSD requires
netinet/in.h to be included first. So, add a similar guard.
Also fix one instance where this is not respected at the moment.
We do have FreeBSD CI these days, but it is still nice to have
a clearer error message.
Fixes: b2befd5bb2db ("sparse: Add guards to prevent FreeBSD-incompatible #include order.")
Acked-by: Mike Pattrick <mkp@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
When we send parallel flows, such as VXLAN, to a PF [1] port in OVS with
multiple PMDs, OVS creates an rte_flow with Mark and RSS actions to
send the flows to the software data path. But the RSS action does not
work well and all the flows are forwarded to a single PMD. This is
because the RSS hash types should be set in the RSS action.
[1]: In our testbed, a Mellanox ConnectX-6 is used as a PF port.
[i.maximets]
DPDK PMD drivers are supposed to provide a "best-effort" RSS
configuration if the type is set to zero. However, they are very
inconsistent in practice and barely put any effort into providing a
good configuration. For example, the mlx5 driver seems to use just
RTE_ETH_RSS_IP, which is not enough for most deployments.
Set the types the same way we configure them for normal RSS in
netdev-dpdk to work around the scalability issue.
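Roughly, the Mark + RSS actions now look like this (macro spellings per
current DPDK; 'flow_mark', 'nb_rxq' and 'queues' are illustrative, and
the exact set of types may differ):

    struct rte_flow_action_mark mark = { .id = flow_mark };
    struct rte_flow_action_rss rss = {
        .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
        .types = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP,
        .queue_num = nb_rxq,
        .queue = queues,            /* array listing all Rx queues */
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
        { .type = RTE_FLOW_ACTION_TYPE_RSS,  .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };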
Signed-off-by: Harold Huang <baymaxhuang@gmail.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
When OVS sees a tunnel push with a nested list next, it will not
clone the packet, as a clone is not needed. However, a clone action will
still be created with the tunnel push encapsulated inside. There is no
need to create the clone action in this case, as it requires extra
parsing, which is less efficient.
Signed-off-by: Rosemarie O'Riorden <roriorden@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
For VLANs, the Ethernet type match should be specified in the
inner_type field of the VLAN match, not in the type field of the
Ethernet match.
Fix it.
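Illustrative spec layout after the fix (DPDK item structs; values made
up):

    struct rte_flow_item_eth eth_spec = { 0 };   /* no .type set here */
    struct rte_flow_item_vlan vlan_spec = {
        .tci        = rte_cpu_to_be_16(0x0123),               /* VID 0x123 */
        .inner_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4),  /* moved here */
    };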
Fixes: e8a2b5bf92bb ("netdev-dpdk: implement flow offload with rte flow")
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Salem Sol <salems@nvidia.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
DPDK 20.11 introduced the ability to specify the existence/non-existence
of a VLAN tag in [1].
Use this attribute.
[1]: 09315fc83861 ("ethdev: add VLAN attributes to ethernet and VLAN items")
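Sketch (the new bit lives in the Ethernet item):

    /* Flow must have a VLAN tag: */
    struct rte_flow_item_eth eth_spec = { .has_vlan = 1 };
    struct rte_flow_item_eth eth_mask = { .has_vlan = 1 };

    /* Flow must be untagged: spec.has_vlan = 0 with the same mask. */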
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Salem Sol <salems@nvidia.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Read the user configuration in the netdev-offload module to modify the
number of threads used to manage hardware offload requests.
This allows insertion, deletion and modification requests to be
processed concurrently.
The offload thread structure was modified to contain all needed
elements. One instance of this structure is allocated per requested
thread and used independently.
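The number of offload threads can then be set in the datapath
configuration, e.g. (option name as added by this series):

    $ ovs-vsctl set Open_vSwitch . other_config:n-offload-threads=3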
Signed-off-by: Gaetan Rivet <grive@u256.net>
Reviewed-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
The port mutex protects the netdev mapping, which can be changed by port
addition or port deletion. HW offload operations can be considered read
operations on the port mapping itself. Use a rwlock to differentiate
between read and write operations, allowing concurrent queries and
offload insertions.
Because offload queries, deletions, and reconfigure_datapath() calls
all take the rdlock, the deadlock fixed by [1] is still avoided, as the
rdlock side is recursive, as prescribed by the POSIX standard. Executing
'reconfigure_datapath()' only requires taking a rdlock, but it is
sometimes executed in contexts where the wrlock is taken
('do_add_port()' and 'do_del_port()').
This means that the deadlock described in [2] is still valid and should
be mitigated. The rdlock is taken using 'tryrdlock()' during offload
queries, keeping the current behavior.
[1]: 81e89d5c2645 ("dpif-netdev: Make datapath port mutex recursive.")
[2]: 12d0edd75eba ("dpif-netdev: Avoid deadlock with offloading during PMD
     thread deletion.")
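A condensed sketch of the resulting discipline (function names are
illustrative, the wrappers come from lib/ovs-thread.h; not the exact
datapath code):

    static struct ovs_rwlock port_rwlock = OVS_RWLOCK_INITIALIZER;

    static void
    port_mapping_change(void)   /* do_add_port() / do_del_port() path. */
    {
        ovs_rwlock_wrlock(&port_rwlock);
        /* ... add or remove the port ... */
        ovs_rwlock_unlock(&port_rwlock);
    }

    static void
    offload_status_query(void)  /* Offload query path, non-blocking. */
    {
        if (!ovs_rwlock_tryrdlock(&port_rwlock)) {
            /* ... look up the netdev, read offload status ... */
            ovs_rwlock_unlock(&port_rwlock);
        }
    }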
Signed-off-by: Gaetan Rivet <grive@u256.net>
Reviewed-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
The rte_flow API in DPDK is now thread safe for insertion and deletion.
It is not, however, safe for concurrent queries while an offload is
being inserted or deleted.
Insertion is not an issue as the rte_flow handle will be published to
other threads only once it has been inserted in the hardware, so the
query will only be able to proceed once it is already available.
For the deletion path however, offload status queries can be made while
an offload is being destroyed. This would create race conditions and
use-after-free if not properly protected.
As a pre-step before removing the OVS-level locks on the rte_flow API,
mutually exclude offload query and deletion from concurrent execution.
Signed-off-by: Gaetan Rivet <grive@u256.net>
Reviewed-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Add a lock to access the ufid to rte_flow map, protecting it from
concurrent writes when multiple threads attempt them.
At this point, the reason for taking the lock is no longer to fulfill
the needs of the DPDK offload implementation. Rewrite the comments
to reflect this change. The lock is still needed to protect against
changes to the netdev port mapping.
Signed-off-by: Gaetan Rivet <grive@u256.net>
Reviewed-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
The implementation of hardware offload counters is currently meant to be
managed by a single thread. Use the offload thread pool API to manage
one counter per thread.
Signed-off-by: Gaetan Rivet <grive@u256.net>
Reviewed-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
In the DPDK offload provider, keep track of inserted rte_flows and
report them when queried.
Signed-off-by: Gaetan Rivet <grive@u256.net>
Reviewed-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Add a per-netdev offload data field as part of netdev hw_info structure.
Use this field in netdev-offload-dpdk to map offload metadata (ufid to
rte_flow). Use flow API deinit ops to destroy the per-netdev metadata
when deallocating a netdev. Use RCU primitives to ensure coherency
during port deletion.
Signed-off-by: Gaetan Rivet <grive@u256.net>
Reviewed-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
The hw_miss_packet_recover() API results in performance degradation for
ports that are either not offload capable or do not support this specific
offload API.
For example, in the test configuration shown below, the vhost-user port
does not support offloads and the VF port doesn't support the hw_miss
offload API. But because tunnel offload needs to be configured in other
bridges (br-vxlan and br-phy), OVS has been built with
-DALLOW_EXPERIMENTAL_API.
         br-vhost             br-vxlan            br-phy
     vhost-user<-->VF     VF-Rep<-->VxLAN      uplink-port
For every packet between the VF and the vhost-user ports, the hw_miss
API is called even though it is not supported by the ports involved.
This leads to a significant performance drop (~3x in some cases; both
cycles and pps).
Return EOPNOTSUPP when this API fails for a device that doesn't support
it and avoid calling it on that port for subsequent packets.
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
There are cases where users might want simple forwarding or drop rules
for all packets received from a specific port, e.g.:
"in_port=1,actions=2"
"in_port=2,actions=IN_PORT"
"in_port=3,vlan_tci=0x1234/0x1fff,actions=drop"
"in_port=4,actions=push_vlan:0x8100,set_field:4196->vlan_vid,output:3"
There are also cases where complex OpenFlow rules can be simplified
down to datapath flows with very simple match criteria.
In theory, for very simple forwarding, OVS doesn't need to parse
packets at all in order to follow these rules. "Simple match" lookup
optimization is intended to speed up packet forwarding in these cases.
Design:
Due to various implementation constraints, the userspace datapath always
has the following flow fields in exact match (i.e. it is required to
match at least these fields of a packet even if the OF rule doesn't
need that):
- recirc_id
- in_port
- packet_type
- dl_type
- vlan_tci (CFI + VID) - in most cases
- nw_frag - for ip packets
Not all of these fields are related to the packet itself. We already
know the current 'recirc_id' and the 'in_port' before starting the
packet processing. It also seems safe to assume that we're working
with Ethernet packets. So, for a simple OF rule we need to match
only on 'dl_type', 'vlan_tci' and 'nw_frag'.
'in_port', 'dl_type', 'nw_frag' and 13 bits of 'vlan_tci' can be
combined into a single 64-bit integer (mark) that can be used as a
hash in a hash map. We use only the VID and CFI from the 'vlan_tci';
flows that need to match on PCP will not qualify for the optimization.
The workaround for matching on the non-existence of a VLAN is updated
to match on CFI and VID only, in order to qualify for the optimization.
The CFI is always set by OVS if a VLAN is present in a packet, so there
is no need to match on PCP in this case. 'nw_frag' occupies 2 of the
PCP bits inside the simple match mark.
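An illustrative layout of that mark (field order/offsets here are for
illustration, not necessarily the exact ones used in dpif-netdev):

    static inline uint64_t
    simple_match_mark(odp_port_t in_port, ovs_be16 dl_type,
                      uint8_t nw_frag, ovs_be16 vlan_tci)
    {
        /* 32 bits of in_port | 16 bits of dl_type | 2 bits of nw_frag
         * | 13 bits (CFI + VID) of vlan_tci. */
        return ((uint64_t) odp_to_u32(in_port) << 32)
               | ((uint64_t) ntohs(dl_type) << 16)
               | ((uint64_t) (nw_frag & FLOW_NW_FRAG_MASK) << 13)
               | (ntohs(vlan_tci) & (VLAN_CFI | VLAN_VID_MASK));
    }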
A new per-PMD flow table, 'simple_match_table', is introduced to store
simple match flows only. 'dp_netdev_flow_add' adds a flow to the
usual 'flow_table' and to the 'simple_match_table' if the flow
meets the following constraints:
- 'recirc_id' in the flow match is 0.
- 'packet_type' in the flow match is Ethernet.
- Flow wildcards contain only the minimal set of non-wildcarded fields
  (listed above).
If the number of flows for the current 'in_port' in the regular
'flow_table' equals the number of flows for the current 'in_port' in
the 'simple_match_table', we may use the simple match optimization,
because all the flows we have are simple match flows. This means that
we only need to parse 'dl_type', 'vlan_tci' and 'nw_frag' to perform
packet matching. We then build the unique flow mark from the 'in_port',
'dl_type', 'nw_frag' and 'vlan_tci' and look it up in the
'simple_match_table'.
On successful lookup we don't need to run full 'miniflow_extract()'.
An unsuccessful lookup technically means that we have no suitable flow
in the datapath and an upcall will be required. So, in this case EMC and
SMC lookups are disabled. We may optimize this path in the future by
bypassing the dpcls lookup too.
The performance improvement of this solution on 'simple match' flows
should be comparable with partial HW offloading, because it parses the
same packet fields and uses a similar flow lookup scheme.
However, unlike partial HW offloading, it works for all port types
including virtual ones.
Performance results when compared to EMC:
Test setup:
virtio-user OVS virtio-user
Testpmd1 ------------> pmd1 ------------> Testpmd2
(txonly) x<------ pmd2 <------------ (mac swap)
Single stream of 64-byte packets. Actions:
in_port=vhost0,actions=vhost1
in_port=vhost1,actions=vhost0
Stats collected from pmd1 and pmd2, so there are 2 scenarios:
Virt-to-Virt : Testpmd1 ------> pmd1 ------> Testpmd2.
Virt-to-NoCopy : Testpmd2 ------> pmd2 --->x Testpmd1.
Here the packet sent from pmd2 to Testpmd1 is always dropped, because
the virtqueue is full since Testpmd1 is in txonly mode and doesn't
receive any packets. This should be closer to the performance of a
VM-to-Phy scenario.
Test performed on machine with Intel Xeon CPU E5-2690 v4 @ 2.60GHz.
Table below represents improvement in throughput when compared to EMC.
+----------------+------------------------+------------------------+
| | Default (-g -O2) | "-Ofast -march=native" |
| Scenario +------------+-----------+------------+-----------+
| | GCC | Clang | GCC | Clang |
+----------------+------------+-----------+------------+-----------+
| Virt-to-Virt | +18.9% | +25.5% | +10.8% | +16.7% |
| Virt-to-NoCopy | +24.3% | +33.7% | +14.9% | +22.0% |
+----------------+------------+-----------+------------+-----------+
For the Phy-to-Phy case the performance improvement should be even
higher, but it's not the main use case for this functionality. The
performance difference for non-simple flows is within the margin of error.
Acked-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Add parsing of GRE match fields.
Signed-off-by: Nir Anteby <nanteby@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Add support for the tnl_pop action for GRE vports.
Signed-off-by: Nir Anteby <nanteby@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Refactor the function as a pre-step towards supporting more tunnel
types.
Signed-off-by: Nir Anteby <nanteby@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Support IPv6 fragmentation matching.
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Support IPv4 fragmentation matching.
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Matching on frag types requires a range. Add the 'last' attribute to
patterns.
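Illustrative sketch for IPv4 ("any fragment" expressed as a range on
fragment_offset; not the exact OVS code):

    struct rte_flow_item_ipv4 spec = {
        .hdr.fragment_offset = RTE_BE16(1),
    };
    struct rte_flow_item_ipv4 last = {
        .hdr.fragment_offset = RTE_BE16(RTE_IPV4_HDR_OFFSET_MASK
                                        | RTE_IPV4_HDR_MF_FLAG),
    };
    struct rte_flow_item_ipv4 mask = {
        .hdr.fragment_offset = RTE_BE16(0x3fff),
    };
    struct rte_flow_item item = {
        .type = RTE_FLOW_ITEM_TYPE_IPV4,
        .spec = &spec, .last = &last, .mask = &mask,
    };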
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
The 's_tnl' member in flow_patterns and flow_actions should be
set to DS_EMPTY_INITIALIZER, to be consistent with other dynamic
string initializations.
Also, there's a potential memory leak of flow_patterns->s_tnl.
Fix this by destroying s_tnl in free_flow_patterns().
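Abridged sketch of the fix (struct contents trimmed to the relevant
members):

    struct flow_patterns patterns = {
        .items = NULL,
        .cnt = 0,
        .s_tnl = DS_EMPTY_INITIALIZER,
    };

    /* ... and in free_flow_patterns(): */
    ds_destroy(&patterns.s_tnl);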
Fixes: 507d20e77bfe ("netdev-offload-dpdk: Support vports flows offload.")
Fixes: be56e063d028 ("netdev-offload-dpdk: Support tunnel pop action.")
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Acked-by: Gaetan Rivet <grive@u256.net>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Fixes: b6207b1d2711 ("netdev-offload-dpdk: Support offload of set IPv6 actions.")
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
The port ID should be obtained from the physical device used to
create/destroy flow rules.
Fixes: 507d20e77bfe ("netdev-offload-dpdk: Support vports flows offload.")
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Eli Britstein <elibr@nvidia.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
For VXLAN offload, matches should be done on the outer header for
tunnel properties, in addition to the inner packet matches. Add a
function for parsing VXLAN tunnel matches.
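Sketch of the resulting pattern layout (outer part only, values
illustrative):

    struct rte_flow_item_vxlan vxlan_spec = { .vni = { 0x00, 0x0b, 0xba } };
    struct rte_flow_item_vxlan vxlan_mask = { .vni = { 0xff, 0xff, 0xff } };
    struct rte_flow_item items[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },   /* outer Ethernet */
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },  /* outer IP: tunnel src/dst */
        { .type = RTE_FLOW_ITEM_TYPE_UDP },   /* outer UDP: dst port 4789 */
        { .type = RTE_FLOW_ITEM_TYPE_VXLAN,
          .spec = &vxlan_spec, .mask = &vxlan_mask },
        /* ... inner packet items follow here ... */
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };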
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet <gaetanr@nvidia.com>
Acked-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Tested-by: Marko Kovacevic <marko.kovacevic@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Vports are virtual, OVS-only logical devices, so rte_flows cannot be
applied to them as is. Instead, apply the rules to the physical port
from which the packet has arrived, provided by the orig_in_port field.
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet <gaetanr@nvidia.com>
Acked-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Tested-by: Marko Kovacevic <marko.kovacevic@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Support tunnel pop action.
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet <gaetanr@nvidia.com>
Acked-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Tested-by: Marko Kovacevic <marko.kovacevic@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
In order to allow showing more debug messages, increase the rate limits.
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet <gaetanr@nvidia.com>
Acked-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Tested-by: Marko Kovacevic <marko.kovacevic@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
The 'linux_tc' flow API is suitable only for tunneling vports with
backing Linux interfaces. The DPDK flow API is not suitable for such
ports.
With this change we can drop the vport restriction from dpif-netdev.
This is a prerequisite for enabling vport offloading in DPDK.
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet <gaetanr@nvidia.com>
Acked-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Tested-by: Marko Kovacevic <marko.kovacevic@intel.com>
A miss in virtual port offloads means the flow with tnl_pop was
offloaded, but not the following one. Recover the state and continue
with SW processing.
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet <gaetanr@nvidia.com>
Acked-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Tested-by: Marko Kovacevic <marko.kovacevic@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Remove all the rules for the specified netdev.
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet <gaetanr@nvidia.com>
Acked-by: Emma Finn <emma.finn@intel.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Refactor the disassociation so that it is removed from flow destroy
and uses the already found object instead of re-searching for it.
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet <gaetanr@nvidia.com>
Acked-by: Emma Finn <emma.finn@intel.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Keep the netdev of the offload rule as a field in the offload object,
as a pre-step towards supporting flushing of the offload rules.
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet <gaetanr@nvidia.com>
Acked-by: Emma Finn <emma.finn@intel.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Remove temporary patch 023f257 ("netdev-offload-dpdk: Fix for broken
ethernet matching HWOL for XL710NIC").
The Ethernet pattern is now set correctly within the i40e PMD.
Signed-off-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ian Stokes <ian.stokes@intel.com>
The offload layer clears the L4 protocol mask in the L3 item when the
L4 item is passed for matching, as an optimization. This can be confusing
while parsing the headers in the PMD. Also, the datapath flow specifies
this field to be matched. This optimization is best left to the PMD.
This patch restores the code to pass the L4 protocol type in L3 match.
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Acked-by: Eli Britstein <elibr@mellanox.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
In case of a flow modification, carry the HW statistics of the old HW
flow over to the new one.
Fixes: 3c7330ebf036 ("netdev-offload-dpdk: Support offload of output action.")
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet <gaetanr@nvidia.com>
Acked-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Struct match has the tunnel values/masks in
match->flow.tunnel/match->wc.masks.tunnel.
Load actions such as load:0xa566c10->NXM_NX_TUN_IPV4_DST[],
load:0xbba->NXM_NX_TUN_ID[] are utilizing the tunnel mask fields,
but those should not be used for matching.
Offloading fails if the mask is not cleared. Clear it if no tunnel is used.
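Sketch of the fix ('match' is the struct match being offloaded, the
check helper is from lib/packets.h; the exact placement in the code
may differ):

    /* If the flow does not actually use a tunnel, wipe the tunnel part
     * of the mask before building the offload match. */
    if (!flow_tnl_dst_is_set(&match->flow.tunnel)) {
        memset(&match->wc.masks.tunnel, 0, sizeof match->wc.masks.tunnel);
    }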
Fixes: e8a2b5bf92bb ("netdev-dpdk: implement flow offload with rte flow")
Reviewed-by: Eli Britstein <elibr@mellanox.com>
Reviewed-by: Gaetan Rivet <gaetanr@mellanox.com>
Acked-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Tested-by: Emma Finn <emma.finn@intel.com>
Signed-off-by: Lei Wang <leiw@mellanox.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>