/* Copyright (c) 2009, 2010, 2017, 2019 Nicira, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef OVSDB_TRANSACTION_H
#define OVSDB_TRANSACTION_H 1
#include <stdbool.h>
#include <stdint.h>
#include "compiler.h"
struct hmap;
struct json;
struct ovsdb;
struct ovsdb_row;
struct ovsdb_schema;
struct ovsdb_table;
struct uuid;

struct ovsdb_txn *ovsdb_txn_create(struct ovsdb *);
void ovsdb_txn_set_txnid(const struct uuid *, struct ovsdb_txn *);
const struct uuid *ovsdb_txn_get_txnid(const struct ovsdb_txn *);
void ovsdb_txn_abort(struct ovsdb_txn *);
bool ovsdb_txn_precheck_prereq(const struct ovsdb *db);
struct ovsdb_error *ovsdb_txn_replay_commit(struct ovsdb_txn *)
    OVS_WARN_UNUSED_RESULT;
struct ovsdb_txn_progress *ovsdb_txn_propose_commit(struct ovsdb_txn *,
                                                    bool durable)
    OVS_WARN_UNUSED_RESULT;
struct ovsdb_error *ovsdb_txn_propose_commit_block(struct ovsdb_txn *,
                                                   bool durable)
    OVS_WARN_UNUSED_RESULT;
void ovsdb_txn_complete(struct ovsdb_txn *);
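
/* Illustrative commit flow (a sketch only, not normative: assumes "db" and
 * "row" were obtained elsewhere; error handling and ownership details are
 * elided):
 *
 *     struct ovsdb_txn *txn = ovsdb_txn_create(db);
 *     ovsdb_txn_row_insert(txn, row);
 *     struct ovsdb_error *error = ovsdb_txn_propose_commit_block(txn, true);
 *     if (error) {
 *         ... the commit failed; report or free the error ...
 *     }
 *
 * The nonblocking variant, ovsdb_txn_propose_commit(), instead returns a
 * struct ovsdb_txn_progress that the caller polls with
 * ovsdb_txn_progress_is_complete() and eventually releases with
 * ovsdb_txn_progress_destroy(). */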
struct ovsdb_txn_progress *ovsdb_txn_propose_schema_change(
    struct ovsdb *, const struct ovsdb_schema *,
    const struct json *data, struct uuid *txnid);
bool ovsdb_txn_progress_is_complete(const struct ovsdb_txn_progress *);
const struct ovsdb_error *ovsdb_txn_progress_get_error(
    const struct ovsdb_txn_progress *);
void ovsdb_txn_progress_destroy(struct ovsdb_txn_progress *);
void ovsdb_txn_row_modify(struct ovsdb_txn *, const struct ovsdb_row *,
                          struct ovsdb_row **row_new,
                          struct ovsdb_row **row_diff);
void ovsdb_txn_row_insert(struct ovsdb_txn *, struct ovsdb_row *);
void ovsdb_txn_row_delete(struct ovsdb_txn *, const struct ovsdb_row *);
bool ovsdb_txn_may_create_row(const struct ovsdb_table *,
                              const struct uuid *row_uuid);
typedef bool ovsdb_txn_row_cb_func(const struct ovsdb_row *old,
                                   const struct ovsdb_row *new,
                                   const unsigned long int *changed,
                                   void *aux);
void ovsdb_txn_for_each_change(const struct ovsdb_txn *,
                               ovsdb_txn_row_cb_func *, void *aux);
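
/* Sketch of a change callback for ovsdb_txn_for_each_change() (illustrative
 * only; the interpretation of the arguments here is an assumption, not a
 * contract stated in this header): "old" would be null for an inserted row
 * and "new" null for a deleted one, "changed" a bitmap of modified columns,
 * and "aux" the caller's context pointer.  The bool return value typically
 * tells the walk whether to continue:
 *
 *     static bool
 *     log_change_cb(const struct ovsdb_row *old, const struct ovsdb_row *new,
 *                   const unsigned long int *changed, void *aux)
 *     {
 *         ... inspect the rows, record the change in "aux" ...
 *         return true;
 *     }
 */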
struct ovsdb_row *ovsdb_index_search(struct hmap *index,
                                     struct ovsdb_row *, size_t i,
                                     uint32_t hash);
void ovsdb_txn_add_comment(struct ovsdb_txn *, const char *);
const char *ovsdb_txn_get_comment(const struct ovsdb_txn *);
void ovsdb_txn_history_run(struct ovsdb *);
void ovsdb_txn_history_init(struct ovsdb *, bool need_txn_history);
void ovsdb_txn_history_destroy(struct ovsdb *);
#endif /* ovsdb/transaction.h */