dpif-netdev: Report overhead busy cycles per pmd.
Users complained that per rxq pmd usage was confusing: summing those values per pmd would never reach 100%, even when increasing the traffic load beyond pmd capacity.

This is because the dpif-netdev/pmd-rxq-show command only reports "pure" rxq cycles, while some cycles are spent in the pmd mainloop and also add to the total pmd load.

dpif-netdev/pmd-stats-show does report per pmd load usage, but this load is measured since the last dpif-netdev/pmd-stats-clear call. The per rxq pmd usage, on the other hand, reflects the pmd load on a 10s sliding window, which makes the two non-trivial to correlate.

Gather per pmd busy cycles with the same periodicity and report the difference as overhead in dpif-netdev/pmd-rxq-show, so that all the info is available in a single command.

Example:
$ ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 1 core_id 3:
  isolated : true
  port: dpdk0   queue-id: 0 (enabled)   pmd usage: 90 %
  overhead:  4 %
pmd thread numa_id 1 core_id 5:
  isolated : false
  port: vhost0  queue-id: 0 (enabled)   pmd usage:  0 %
  port: vhost1  queue-id: 0 (enabled)   pmd usage: 93 %
  port: vhost2  queue-id: 0 (enabled)   pmd usage:  0 %
  port: vhost6  queue-id: 0 (enabled)   pmd usage:  0 %
  overhead:  6 %
pmd thread numa_id 1 core_id 31:
  isolated : true
  port: dpdk1   queue-id: 0 (enabled)   pmd usage: 86 %
  overhead:  4 %
pmd thread numa_id 1 core_id 33:
  isolated : false
  port: vhost3  queue-id: 0 (enabled)   pmd usage:  0 %
  port: vhost4  queue-id: 0 (enabled)   pmd usage:  0 %
  port: vhost5  queue-id: 0 (enabled)   pmd usage: 92 %
  port: vhost7  queue-id: 0 (enabled)   pmd usage:  0 %
  overhead:  7 %

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Signed-off-by: Ian Stokes <ian.stokes@intel.com>
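As a back-of-the-envelope illustration of how the two reported figures relate, here is a standalone C sketch. It is not OVS code: counter names and values are made up, and only the arithmetic is mirrored. Per-rxq "pmd usage" is that rxq's processing cycles as a share of the window, and "overhead" is the pmd's busy cycles not attributed to any rxq, both as a percentage of the same window.

#include <inttypes.h>
#include <stdio.h>

#define N_RXQS 2

int
main(void)
{
    /* Example counters over one reporting window (illustrative numbers). */
    uint64_t pmd_busy_cycles = 9400;   /* Cycles the pmd was busy. */
    uint64_t pmd_total_cycles = 10000; /* Cycles elapsed in the window. */
    uint64_t rxq_cycles[N_RXQS] = { 9000, 0 }; /* Per-rxq processing cycles. */
    uint64_t rxq_sum = 0;

    for (int i = 0; i < N_RXQS; i++) {
        /* "pmd usage": share of the window spent processing this rxq. */
        printf("queue-id: %d pmd usage: %"PRIu64" %%\n",
               i, rxq_cycles[i] * 100 / pmd_total_cycles);
        rxq_sum += rxq_cycles[i];
    }

    /* "overhead": busy cycles not attributed to any rxq (mainloop cost). */
    printf("overhead: %"PRIu64" %%\n",
           (pmd_busy_cycles - rxq_sum) * 100 / pmd_total_cycles);
    return 0;
}

With the numbers above this prints a 90 % usage for queue 0 and a 4 % overhead, matching the dpdk0 pmd in the example output.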
committed by Ian Stokes
parent 30bfba0249
commit 3222a89d9a
@@ -99,13 +99,18 @@ struct dp_netdev_pmd_thread {
     long long int next_optimization;
     /* End of the next time interval for which processing cycles
        are stored for each polled rxq. */
-    long long int rxq_next_cycle_store;
+    long long int next_cycle_store;
 
     /* Last interval timestamp. */
     uint64_t intrvl_tsc_prev;
     /* Last interval cycles. */
     atomic_ullong intrvl_cycles;
+
+    /* Write index for 'busy_cycles_intrvl'. */
+    unsigned int intrvl_idx;
+    /* Busy cycles in last PMD_INTERVAL_MAX intervals. */
+    atomic_ullong *busy_cycles_intrvl;
 
     /* Current context of the PMD thread. */
     struct dp_netdev_pmd_thread_ctx ctx;
 
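The following standalone C sketch illustrates the bookkeeping these new fields suggest: busy cycles for each finished interval are written into a ring buffer at 'intrvl_idx' and summed over the window when reporting. Only the field names come from the hunk above; the PMD_INTERVAL_MAX value, the helper names and the plain (non-atomic) types are illustrative assumptions, while the real fields are atomics.

#include <stdint.h>
#include <stdio.h>

#define PMD_INTERVAL_MAX 6          /* Assumed window length, not the OVS value. */

struct pmd_sketch {
    uint64_t intrvl_tsc_prev;       /* Last interval timestamp. */
    unsigned int intrvl_idx;        /* Write index into the ring. */
    uint64_t busy_cycles_intrvl[PMD_INTERVAL_MAX]; /* Busy cycles per interval. */
};

/* End of an interval: record how many cycles the pmd was busy in it. */
static void
interval_store(struct pmd_sketch *pmd, uint64_t curr_tsc, uint64_t busy)
{
    if (pmd->intrvl_tsc_prev) {
        /* Overwrite the oldest slot once the ring is full. */
        pmd->busy_cycles_intrvl[pmd->intrvl_idx++ % PMD_INTERVAL_MAX] = busy;
    }
    pmd->intrvl_tsc_prev = curr_tsc;
}

/* Reporting: total busy cycles over the whole sliding window. */
static uint64_t
window_busy_cycles(const struct pmd_sketch *pmd)
{
    uint64_t total = 0;

    for (int i = 0; i < PMD_INTERVAL_MAX; i++) {
        total += pmd->busy_cycles_intrvl[i];
    }
    return total;
}

int
main(void)
{
    struct pmd_sketch pmd = { 0 };

    /* Simulate one priming call plus PMD_INTERVAL_MAX completed intervals. */
    for (uint64_t tsc = 1000; tsc <= (PMD_INTERVAL_MAX + 1) * 1000; tsc += 1000) {
        interval_store(&pmd, tsc, 940);
    }
    printf("busy cycles over window: %llu\n",
           (unsigned long long) window_busy_cycles(&pmd));
    return 0;
}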