I plan to make the vport type part of the standard header stuck on each
Netlink message related to a vport. As such, it is more convenient to use
an integer than a string. In addition, because an integer is
fundamentally different from a string, using one may reduce the
confusion we've had in the past over the differences between userspace
and kernel names for network device and vport types.
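As a rough sketch, the type could then be carried as an enum rather
than a string; the names and values below are only illustrative, not
the final ABI:

    enum odp_vport_type {
            ODP_VPORT_TYPE_UNSPEC,
            ODP_VPORT_TYPE_NETDEV,    /* backed by a real Linux device */
            ODP_VPORT_TYPE_INTERNAL,  /* OVS-created internal device */
            ODP_VPORT_TYPE_GRE,       /* GRE tunnel vport */
    };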
Signed-off-by: Ben Pfaff <blp@nicira.com>
Acked-by: Jesse Gross <jesse@nicira.com>
I introduced this a long time ago as an efficient way for userspace to find
out whether and where an internal device was attached, but I've always
considered it an ugly kluge. Now that ODP_VPORT_QUERY can fetch a vport's
info regardless of datapath, it is no longer necessary. This commit
stops using Ethtool for this purpose and drops the feature.
Signed-off-by: Ben Pfaff <blp@nicira.com>
Acked-by: Jesse Gross <jesse@nicira.com>
unregister_netdevice() contains a call to synchronize_rcu(), so there
is no need to directly call it ourselves immediately beforehand.
We were relying on the call during unregistration anyway to stop
packets from being transmitted on the device, so our version was both
misleading and a needless performance penalty.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
The vports are now attached and ready to go when they are allocated,
so we don't have to worry about future changes. As a result, we can
directly store the pointer in the internal dev's netdevice private
space before it is registered. The registration process handles the
necessary write memory barriers, and anyone who has a reference to the
netdev will have done the read-side barriers, so we don't need to use
RCU at all.
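A minimal sketch of the idea (the struct layout and names here are
illustrative):

    struct internal_dev {
            struct vport *vport;
    };

    static void internal_dev_attach_vport(struct net_device *netdev,
                                          struct vport *vport)
    {
            struct internal_dev *internal_dev = netdev_priv(netdev);

            /* Plain assignment: register_netdevice() has not run yet,
             * and registration supplies the write barrier that
             * publishes the private data to other CPUs. */
            internal_dev->vport = vport;
    }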
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
Checksum offloading has changed quite a bit across different kernel
and Xen versions. Since it is part of the skb data structure it is
unfortunately difficult to separate out into compatibility code.
This consolidates all of the checksum code in one place, which makes
it easier to read and remove as we prepare for upstreaming. On newer
kernels it also puts everything in inline functions, eliminating the
need to run through the compat code or make extra function calls.
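On those newer kernels the compat layer can collapse to trivial
inlines along these lines (a sketch only; the real helpers live in the
compat headers and have different names):

    static inline bool skb_csum_is_partial(const struct sk_buff *skb)
    {
            /* Nothing Xen- or version-specific left to do here. */
            return skb->ip_summed == CHECKSUM_PARTIAL;
    }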
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
These steps always happen together, one immediately after the other,
so we might as well combine them.
This expands the region over which the vport_lock is held. I didn't
carefully verify that this was OK.
This also eliminates the synchronize_rcu() call from destruction of tunnel
vports, since they didn't appear to me to need it.
It should be possible to eliminate the synchronize_rcu() from the netdev,
patch, and internal_dev vports, but this commit does not do that.
Signed-off-by: Ben Pfaff <blp@nicira.com>
Acked-by: Jesse Gross <jesse@nicira.com>
After the previous commit, which changed the datapath to always create and
attach a vport at the same time, and to always detach and delete a vport
at the same time, there is no longer any real distinction between a dp_port
and a vport. This commit, therefore, merges the two together to simplify
code. It might even improve performance, although I have not checked.
I wasn't sure at first whether the merged structure should be "struct
dp_port" or "struct vport". I went with the latter since the "v" prefix
sounds cool.
Signed-off-by: Ben Pfaff <blp@nicira.com>
Acked-by: Jesse Gross <jesse@nicira.com>
Upcoming commits will keep needing to pass more information to the vport
'create' member function. It's annoying to have to modify a dozen pieces
of code every time just to do this, so this commit encapsulates all of the
parameters in a new struct and passes that instead.
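Roughly, the idea is the following; the field names are illustrative
rather than the exact contents of the new struct:

    struct vport_parms {
            const char *name;    /* device name */
            const char *type;    /* which vport implementation to use */
            const void *config;  /* type-specific configuration, if any */
    };

    /* Each implementation's 'create' then takes a single argument: */
    struct vport *example_create(const struct vport_parms *parms);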
Acked-by: Jesse Gross <jesse@nicira.com>
We can already receive packets with a frag list due to reassembly
in CAPWAP tunneling. Since we can handle it, we might as well accept
it from internal devices too, to prevent linearization.
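Concretely this just means advertising the feature flag when the
internal device is set up (sketch):

    /* We already cope with frag lists on receive (CAPWAP reassembly),
     * so accept them here too instead of forcing linearization. */
    netdev->features |= NETIF_F_FRAGLIST;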
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
dev->last_rx is used for rebalancing in Linux bonding. However,
on an SMP machine it quickly becomes a very hot cacheline. On
kernels 2.6.29 and later the networking core will update last_rx
only if bonding is in use, so drivers do not need to set it at all.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
vport_ops, tunnel_ops, and ethtool_ops should not change at runtime.
Therefore, mark them as const so that they can live in read-only
memory and cannot be trampled at runtime.
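For example, for the ethtool case (a sketch):

    static const struct ethtool_ops internal_dev_ethtool_ops = {
            /* Read-only after init; const lets it live in .rodata. */
            .get_link = ethtool_op_get_link,
    };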
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
We currently call skb_reset_mac_header() in a few places when a
packet is received. However, this is not needed because flow_extract()
will set all of the protocol headers during parsing and nothing needs
the packet headers before that time.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
When transmitting on a device, dev_hard_start_xmit() always provides
a private clone. The skb_share_check() in internal_dev_xmit() is
therefore unnecessary, so remove it.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
Linux 2.6.35 added struct rtnl_link_stats64, a set of 64-bit network
device counters, which is exactly what the OVS datapath needs. We
might as well use it instead of our own.
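For reference, the relevant fields of the kernel struct (declared in
linux/if_link.h) cover what the datapath already tracks:

    struct rtnl_link_stats64 {
            __u64 rx_packets;
            __u64 tx_packets;
            __u64 rx_bytes;
            __u64 tx_bytes;
            __u64 rx_errors;
            __u64 tx_errors;
            __u64 rx_dropped;
            __u64 tx_dropped;
            /* ... */
    };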
This commit moves the if_link.h compat header from datapath/ into the
top-level include/ directory so that it is visible both to kernel and
userspace code.
Signed-off-by: Ben Pfaff <blp@nicira.com>
Acked-by: Jesse Gross <jesse@nicira.com>
Currently internal devices register a destructor function that simply
calls free_netdev. Instead we can set the destructor to free_netdev
directly. In addition to being cleaner, this is also a bug fix
because the module could be unloaded before the destructor is called,
making a call into our code illegal.
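The change is essentially one line in the device setup (sketch; the
setup function name is a placeholder):

    static void do_setup(struct net_device *netdev)
    {
            /* Let the networking core free the device itself, so
             * nothing calls back into this module after it may have
             * been unloaded. */
            netdev->destructor = free_netdev;
    }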
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
Commit 4bee42 "tunnel: Correctly check for internal device." fixed
the call to internal_dev_get_vport() by first checking that the
device is in fact an internal device. However, it also accidentally
removed the check ensuring that the vport itself was not NULL. This
adds that check back by redoing the previous change in a more robust
manner.
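Something along these lines (a sketch; the helper names may differ
from the actual code):

    static struct vport *vport_from_netdev(struct net_device *dev)
    {
            if (!is_internal_dev(dev))
                    return NULL;

            /* internal_dev_get_vport() can still return NULL, e.g. for
             * a device that has been detached, so keep that check. */
            return internal_dev_get_vport(dev);
    }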
Signed-off-by: Jesse Gross <jesse@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
An upcoming commit will add support for supplying cached flows for
packets entering the datapath. This adds the code in the datapath
itself to recognize these cached flows and use them instead of
extracting the flow fields and doing a lookup.
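The receive path then looks roughly like this (a sketch; the function
and field names here are illustrative):

    static void process_packet(struct datapath *dp, struct sk_buff *skb)
    {
            struct sw_flow *flow = OVS_CB(skb)->flow;  /* cached flow? */

            if (!flow) {
                    struct odp_flow_key key;

                    /* No cached flow: extract the fields and look the
                     * flow up as before. */
                    flow_extract(skb, OVS_CB(skb)->in_port, &key);
                    flow = flow_lookup(dp, &key);
            }

            execute_actions(dp, skb, flow);
    }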
Signed-off-by: Jesse Gross <jesse@nicira.com>
Reviewed-by: Ben Pfaff <blp@nicira.com>
'struct net_device' is refcounted and can stick around for quite a
while if someone is still holding a reference to it. However, we
free the vport that it is attached to in the next RCU grace period
after detach. This sets the vport pointer to NULL on detach and adds
the appropriate NULL checks.
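For example, in the transmit hook (a sketch; names follow the
surrounding code but the exact shape may differ):

    static int internal_dev_xmit(struct sk_buff *skb,
                                 struct net_device *netdev)
    {
            struct vport *vport = internal_dev_get_vport(netdev);

            /* The netdev may still be referenced after the vport has
             * been detached; the pointer is NULL in that window. */
            if (unlikely(!vport)) {
                    kfree_skb(skb);
                    return 0;
            }

            vport_receive(vport, skb);
            return 0;
    }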
In some places we put the return type on the same line as the rest of
the function definition and in other places we didn't. Reformat
everything to match kernel style.
Internal devices currently keep track of stats themselves. However,
we now have stats tracking in the vport layer, so convert them to use
that instead, avoiding code duplication and taking advantage of
additional features such as 64-bit counters.
Since vport implementations have no header files of their own, they
need to be declared as extern before being used. They are currently
declared in vport.c, but this isn't safe because the compiler will
silently accept the declaration even if the type is incorrect. This
moves those declarations into vport.h, which is included by all
implementations and will therefore cause errors about conflicting
types if there is a mismatch.
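That is, vport.h now carries declarations along these lines (the exact
list of implementations may differ):

    /* Seen by vport.c and by every implementation: */
    extern const struct vport_ops netdev_vport_ops;
    extern const struct vport_ops internal_vport_ops;
    extern const struct vport_ops gre_vport_ops;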
Pull some generic implementations of vport functions out of the
GRE vport so they can be used by others.
Also move the code that sets the MTUs of internal devices to the
minimum of the attached devices into the generic vport_set_mtu layer.
Places that update per-cpu stats without locking need to have bottom
halves disabled. Otherwise we can be running in process context, get
interrupted in the middle of an update by a softirq that touches the
same counters, and end up with corrupted stats.
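The pattern is roughly the following (the stats structure is
illustrative):

    struct vport_percpu_stats {
            u64 rx_packets;
            u64 rx_bytes;
    };

    static void update_rx_stats(struct vport_percpu_stats *percpu_stats,
                                const struct sk_buff *skb)
    {
            struct vport_percpu_stats *stats;

            local_bh_disable();   /* keep softirqs off this CPU... */
            stats = per_cpu_ptr(percpu_stats, smp_processor_id());
            stats->rx_packets++;  /* ...during the read-modify-write */
            stats->rx_bytes += skb->len;
            local_bh_enable();
    }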
Enable checksum offloading, scatter/gather, and TSO on internal
devices. Although these optimizations were not previously enabled on
internal ports, we could already receive these types of packets from
Xen guests. This has the obvious performance benefit that such packets
can be passed directly to hardware.
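In terms of code this mostly means advertising the features on the
internal device (sketch; the exact flag set may differ):

    netdev->features |= NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_HIGHDMA |
                        NETIF_F_HW_CSUM | NETIF_F_TSO;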
There is also a more subtle benefit for GRE on Xen. GRE packets
pass through OVS twice - once before encapsulation and once after
encapsulation, moving through an internal device in the process.
If it is an SG packet (as is common on Xen), a copy was necessary to
linearize it for the internal device. However, Xen uses the memory
allocator to track packets, so when the original packet is freed after
the copy, netback notifies the guest that the packet has been sent,
despite the fact that it is actually still sitting in the transmit
queue. The guest then sends packets as fast as the CPU can handle,
overflowing the transmit queue. By enabling SG on the internal device,
we avoid the copy and keep the accounting correct.
In certain circumstances this patch can decrease performance for
TCP. TCP has its own mechanism for tracking in-flight packets
and therefore does not benefit from the corrected socket accounting.
However, certain NICs do not like SG when it is not being used for
TSO (after GRE encapsulation these packets can no longer be handled by
TSO). These NICs presumably advertise SG, even though they don't
handle it well, because TSO requires SG.
Tested controllers (all 1G):
Marvell 88E8053 (large performance hit)
Broadcom BCM5721 (small performance hit)
Intel 82571EB (no change)
We currently acquire dp_mutex when we are notified that the MTU
of a device attached to the datapath has changed so that we can
set the internal devices to the minimum MTU. However, holding
dp_mutex is not actually required because we already hold the RTNL
lock, and doing so causes a deadlock, so don't do it.
Specifically, the issue is that dp_mutex is acquired twice: once in
dp_device_event() before calling set_internal_devs_mtu() and then
again in internal_dev_change_mtu() when it is actually being changed
(since the MTU can also be set directly). Because dp_mutex is not a
recursive mutex, this deadlocks.
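The offending call chain looks roughly like this (intermediate steps
elided):

    dp_device_event()                  /* takes dp_mutex */
      -> set_internal_devs_mtu()
        -> ... normal MTU-change path ...
          -> internal_dev_change_mtu() /* takes dp_mutex again: deadlock */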
Currently the datapath directly accesses devices through their
Linux functions. Obviously this doesn't work for virtual devices
that are not backed by an actual Linux device. This creates a
new virtual port layer which handles all interaction with devices.
The existing support for Linux devices is then implemented on top of
this layer as two device types, splitting out and renaming dp_dev to
internal_dev. There were several places where datapath devices had to
be handled in a special manner, and this cleans that up by putting all
of the special casing in a single location.
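Each device type plugs into the new layer through a table of
operations, conceptually something like the following (the field names
are illustrative, not the exact contents of vport_ops):

    struct vport_ops {
            const char *type;  /* e.g. "netdev", "internal" */

            struct vport *(*create)(const char *name, const void *config);
            int (*destroy)(struct vport *);

            int (*set_mtu)(struct vport *, int mtu);
            const char *(*get_name)(const struct vport *);

            int (*send)(struct vport *, struct sk_buff *);
    };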