mirror of https://github.com/openvswitch/ovs
lib/cmap: cmap_find_batch().
Batching the cmap find improves the memory behavior with large cmaps
and can make searches twice as fast:

$ tests/ovstest test-cmap benchmark 2000000 8 0.1 16
Benchmarking with n=2000000, 8 threads, 0.10% mutations, batch size 16:
cmap insert:    533 ms
cmap iterate:    57 ms
batch search:   146 ms
cmap destroy:   233 ms

cmap insert:    552 ms
cmap iterate:    56 ms
cmap search:    299 ms
cmap destroy:   229 ms

hmap insert:    222 ms
hmap iterate:   198 ms
hmap search:   2061 ms
hmap destroy:   209 ms

Batch size 1 has a small performance penalty, but all other batch sizes
are faster than the non-batched cmap_find().  Batch size 16 was found
experimentally to be better than 8 or 32, so
classifier_lookup_miniflow_batch() now performs the cmap find operations
in batches of 16.

Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
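For illustration only, the following is a rough caller-side sketch of the
batched search pattern described above; it is not code from this commit.
It assumes that cmap_find_batch() takes a bitmap selecting which of up to
BATCH slots to search, an array of hashes, and an output array of nodes,
and returns a bitmap of the slots where a matching node was found.  The
struct my_entry type, the key/hash choice, and handle_match() are all
hypothetical.

#include "cmap.h"   /* struct cmap, struct cmap_node, cmap_find_batch(),
                     * CMAP_NODE_FOR_EACH (assumed interface). */
#include "hash.h"   /* hash_int(). */
#include "util.h"   /* MIN. */

#define BATCH 16    /* Batch size found to work best in the benchmark above. */

/* Hypothetical entry type stored in the cmap. */
struct my_entry {
    struct cmap_node node;  /* Inside the cmap. */
    int key;
};

/* Hypothetical, application-specific processing of a match. */
static void
handle_match(struct my_entry *entry)
{
    (void) entry;
}

/* Looks up 'n_keys' keys, BATCH at a time, instead of calling
 * cmap_find() once per key. */
static void
lookup_batched(const struct cmap *cmap, const int keys[], size_t n_keys)
{
    for (size_t i = 0; i < n_keys; i += BATCH) {
        size_t n = MIN(BATCH, n_keys - i);
        uint32_t hashes[BATCH];
        const struct cmap_node *nodes[BATCH];

        for (size_t j = 0; j < n; j++) {
            hashes[j] = hash_int(keys[i + j], 0);
        }

        /* One batched search replaces 'n' separate cmap_find() calls.
         * The input bitmap selects which slots to search; the returned
         * bitmap has a bit set for each slot where a node was found. */
        unsigned long map = cmap_find_batch(cmap, (1UL << n) - 1,
                                            hashes, nodes);

        for (size_t j = 0; j < n; j++) {
            if (map & (1UL << j)) {
                struct my_entry *entry;

                /* Hash duplicates are possible: check the actual key. */
                CMAP_NODE_FOR_EACH (entry, node, nodes[j]) {
                    if (entry->key == keys[i + j]) {
                        handle_match(entry);
                        break;
                    }
                }
            }
        }
    }
}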
@@ -297,7 +297,9 @@ struct cls_rule *classifier_lookup(const struct classifier *,
                                     const struct flow *,
                                     struct flow_wildcards *);
 bool classifier_lookup_miniflow_batch(const struct classifier *cls,
                                       const struct miniflow **flows,
-                                      struct cls_rule **rules, size_t len);
+                                      struct cls_rule **rules,
+                                      const size_t cnt);
+enum { CLASSIFIER_MAX_BATCH = 256 };
 bool classifier_rule_overlaps(const struct classifier *,
                               const struct cls_rule *);
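As a usage illustration of the declarations in the hunk above, here is a
minimal sketch of a caller splitting an arbitrarily large lookup into
chunks of at most CLASSIFIER_MAX_BATCH flows.  The lookup_all() wrapper is
hypothetical, and the batch function's boolean return value is assumed
here to mean that every flow in the batch matched.

#include "classifier.h"  /* classifier_lookup_miniflow_batch(),
                          * CLASSIFIER_MAX_BATCH. */
#include "util.h"        /* MIN. */

/* Hypothetical wrapper: looks up 'cnt' miniflows by issuing batch
 * lookups of at most CLASSIFIER_MAX_BATCH flows each, storing the
 * result for flows[i] in rules[i].  Returns true only if every batch
 * reported success (assumed to mean that every flow matched). */
static bool
lookup_all(const struct classifier *cls, const struct miniflow **flows,
           struct cls_rule **rules, size_t cnt)
{
    bool all_found = true;

    for (size_t ofs = 0; ofs < cnt; ofs += CLASSIFIER_MAX_BATCH) {
        size_t n = MIN(CLASSIFIER_MAX_BATCH, cnt - ofs);

        all_found = classifier_lookup_miniflow_batch(cls, &flows[ofs],
                                                     &rules[ofs], n)
                    && all_found;
    }
    return all_found;
}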