dev->last_rx is used for rebalancing in Linux bonding. However,
on an SMP machine it quickly becomes a very hot cacheline. On
kernels 2.6.29 and later the networking core will update last_rx
only if bonding is in use, so drivers do not need to set it at all.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
vport_ops, tunnel_ops, and ethtool_ops should not change at runtime.
Therefore, mark them as const to keep them out of the hotpath and to
prevent them from getting trampled.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
We currently call skb_reset_mac_header() in a few places when a
packet is received. However, this is not needed because flow_extract()
will set all of the protocol headers during parsing and nothing needs
the packet headers before that time.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
When transmitting on a device, dev_hard_start_xmit() always provides
a private clone. The skb_share_check() in internal_dev_xmit() is
therefore unnecessary, so remove it.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
Linux 2.6.35 added struct rtnl_link_stats64, a set of 64-bit network
device counters, which is exactly what the OVS datapath needs. We
might as well use it instead of our own.
This commit moves the if_link.h compat header from datapath/ into the
top-level include/ directory so that it is visible both to kernel and
userspace code.
Signed-off-by: Ben Pfaff <blp@nicira.com>
Acked-by: Jesse Gross <jesse@nicira.com>
Currently internal devices register a destructor function that does
nothing but call free_netdev. Instead we can set the destructor to
free_netdev directly. Besides being cleaner, this is also a bug fix:
the module could be unloaded before the destructor is called, making
a call into our code illegal.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
Commit 4bee42 "tunnel: Correctly check for internal device." fixed
the call to internal_dev_get_vport() by first checking that the
device is in fact an internal device. However, it also accidentally
removed the check ensuring that the vport itself was not NULL. This
adds that check back by redoing the previous change in a more robust
manner.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
An upcoming commit will add support for supplying cached flows for
packets entering the datapath. This adds the code in the datapath
itself to recognize these cached flows and use them instead of
extracting the flow fields and doing a lookup.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Reviewed-by: Ben Pfaff <blp@nicira.com>
'struct net_device' is refcounted and can stick around for quite a
while if someone is still holding a reference to it. However, we
free the vport that it is attached to in the next RCU grace period
after detach. This change sets the vport pointer to NULL on detach
and adds the appropriate NULL checks.
In some places we would put the return type on the same line as
the rest of the function definition and other places we wouldn't.
Reformat everything to match kernel style.
Internal devices currently keep track of stats themselves. However,
we now have stats tracking in the vport layer, so convert to use
that instead to avoid code duplication and take advantage of
additional features such as 64-bit counters.
Since vport implementations have no header files, they needed to be
declared as extern before being used. They are currently declared
in vport.c but this isn't safe because the compiler will silently
accept it if the type is incorrect. This moves those declarations
into vport.h, which is included by all implementations and will
cause errors about conflicting types if there is a mismatch.
Pull some generic implementations of vport functions out of the
GRE vport so they can be used by others.
Also move the code to set the MTUs of internal devices to the minimum
of attached devices to the generic vport_set_mtu layer.
Places that update per-cpu stats without locking need to have bottom
halves disabled. Otherwise, code running in process context can be
interrupted by a softirq in the middle of an update.
Enable checksum offloading, scatter/gather, and TSO on internal
devices. While these optimizations were not previously enabled on
internal ports, we could already receive these types of packets from
Xen guests. This has the obvious performance benefit that such
packets can be passed directly to hardware.
There is also a more subtle benefit for GRE on Xen. GRE packets
pass through OVS twice - once before encapsulation and once after
encapsulation, moving through an internal device in the process.
If it is a SG packet (as is common on Xen), a copy was necessary
to linearize for the internal device. However, Xen uses the
memory allocator to track packets so when the original packet is
freed after the copy netback notifies the guest that the packet
has been sent, despite the fact that it is actually sitting in the
transmit queue. The guest then sends packets as fast as the CPU
can handle, overflowing the transmit queue. By enabling SG on
the internal device, we avoid the copy and keep the accounting
correct.
In certain circumstances this patch can decrease performance for
TCP. TCP has its own mechanism for tracking in-flight packets
and therefore does not benefit from the corrected socket accounting.
However, certain NICs do not like SG when it is not being used for
TSO (these packets can no longer be handled by TSO after GRE
encapsulation). These NICs presumably enable SG even though they
can't handle it well because TSO requires SG.
Tested controllers (all 1G):
Marvell 88E8053 (large performance hit)
Broadcom BCM5721 (small performance hit)
Intel 82571EB (no change)
We currently acquire dp_mutex when we are notified that the MTU
of a device attached to the datapath has changed so that we can
set the internal devices to the minimum MTU. However, it is not
required to hold dp_mutex because we already hold RTNL lock, and
acquiring it here causes a deadlock, so don't do it.
Specifically, the issue is that DP mutex is acquired twice: once in
dp_device_event() before calling set_internal_devs_mtu() and then
again in internal_dev_change_mtu() when it is actually being changed
(since the MTU can also be set directly). dp_mutex is not a
recursive mutex, so this deadlocks.
Currently the datapath directly accesses devices through their
Linux functions. Obviously this doesn't work for virtual devices
that are not backed by an actual Linux device. This creates a
new virtual port layer which handles all interaction with devices.
The existing support for Linux devices was then implemented on top
of this layer as two device types. It splits out and renames dp_dev
to internal_dev. There were several places where datapath devices
had to be handled in a special manner; this cleans that up by putting
all the special casing in a single location.