/* Copyright (c) 2009, 2010, 2011, 2012, 2013, 2014, 2016, 2017 Nicira, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at:
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include <config.h>

#include <errno.h>
#include <getopt.h>
#include <inttypes.h>
#include <signal.h>
#include <sys/stat.h>
#include <unistd.h>

#include "column.h"
#include "command-line.h"
#include "cooperative-multitasking.h"
#include "daemon.h"
#include "dirs.h"
#include "dns-resolve.h"
#include "openvswitch/dynamic-string.h"
#include "fatal-signal.h"
#include "file.h"
#include "hash.h"
#include "openvswitch/json.h"
#include "jsonrpc.h"
#include "jsonrpc-server.h"
#include "openvswitch/list.h"
#include "memory.h"
#include "monitor.h"
#include "ovs-replay.h"
#include "ovsdb.h"
#include "ovsdb-data.h"
#include "ovsdb-types.h"
#include "ovsdb-error.h"
#include "ovsdb-parser.h"
#include "openvswitch/poll-loop.h"
#include "process.h"
#include "replication.h"
#include "relay.h"
#include "row.h"
#include "simap.h"
#include "openvswitch/shash.h"
#include "stream-ssl.h"
#include "stream.h"
#include "sset.h"
#include "storage.h"
#include "table.h"
#include "timeval.h"
#include "transaction.h"
#include "trigger.h"
#include "util.h"
#include "unixctl.h"
#include "perf-counter.h"
#include "ovsdb-util.h"
#include "openvswitch/vlog.h"

VLOG_DEFINE_THIS_MODULE(ovsdb_server);

/* SSL/TLS configuration. */
static char *private_key_file;
static char *certificate_file;
static char *ca_cert_file;
static char *ssl_protocols;
static char *ssl_ciphers;
static char *ssl_ciphersuites;
static bool bootstrap_ca_cert;

/* Try to reclaim heap memory back to system after DB compaction. */
static bool trim_memory = true;

static unixctl_cb_func ovsdb_server_exit;
static unixctl_cb_func ovsdb_server_compact;
static unixctl_cb_func ovsdb_server_memory_trim_on_compaction;
static unixctl_cb_func ovsdb_server_reconnect;
static unixctl_cb_func ovsdb_server_perf_counters_clear;
static unixctl_cb_func ovsdb_server_perf_counters_show;
static unixctl_cb_func ovsdb_server_disable_monitor_cond;
static unixctl_cb_func ovsdb_server_set_active_ovsdb_server;
static unixctl_cb_func ovsdb_server_get_active_ovsdb_server;
static unixctl_cb_func ovsdb_server_connect_active_ovsdb_server;
static unixctl_cb_func ovsdb_server_disconnect_active_ovsdb_server;
static unixctl_cb_func ovsdb_server_set_active_ovsdb_server_probe_interval;
static unixctl_cb_func ovsdb_server_set_relay_source_interval;
static unixctl_cb_func ovsdb_server_set_sync_exclude_tables;
static unixctl_cb_func ovsdb_server_get_sync_exclude_tables;
static unixctl_cb_func ovsdb_server_get_sync_status;
static unixctl_cb_func ovsdb_server_get_db_storage_status;

/* Holds the name of the configuration file passed via --config-file.
 * Mutually exclusive with command-line and unixctl configuration
 * that can otherwise be done via configuration file. */
static char *config_file_path;

/* UnixCtl command to reload configuration from a configuration file. */
static unixctl_cb_func ovsdb_server_reload;

#define SERVICE_MODELS \
    SERVICE_MODEL(UNDEFINED, undefined) \
    SERVICE_MODEL(STANDALONE, standalone) \
    SERVICE_MODEL(CLUSTERED, clustered) \
    SERVICE_MODEL(ACTIVE_BACKUP, active-backup) \
    SERVICE_MODEL(RELAY, relay)

enum service_model {
#define SERVICE_MODEL(ENUM, NAME) SM_##ENUM,
    SERVICE_MODELS
#undef SERVICE_MODEL
};

static const char *
service_model_to_string(enum service_model model)
{
    switch (model) {
#define SERVICE_MODEL(ENUM, NAME) \
        case SM_##ENUM: return #NAME;
    SERVICE_MODELS
#undef SERVICE_MODEL
    default: OVS_NOT_REACHED();
    }
}

static enum service_model
service_model_from_string(const char *model)
{
#define SERVICE_MODEL(ENUM, NAME) \
    if (!strcmp(model, #NAME)) { \
        return SM_##ENUM; \
    }
    SERVICE_MODELS
#undef SERVICE_MODEL

    VLOG_WARN("Unrecognized database service model: '%s'", model);

    return SM_UNDEFINED;
}

struct db_config {
    enum service_model model;
    char *source; /* sync-from for backup or relay source. */
    struct ovsdb_jsonrpc_options *options; /* For 'source' connection. */

    /* Configuration specific to SM_ACTIVE_BACKUP. */
    struct {
        char *sync_exclude; /* Tables to exclude. */
        bool backup; /* If true, the database is read-only and receives
                      * updates from the 'source'. */
    } ab;
};

struct db {
    struct ovsdb *db;
    char *filename;
    struct db_config *config;
    struct uuid row_uuid;
};

struct server_config {
    struct shash *remotes;
    struct shash *all_dbs; /* All the currently serviced databases.
                            * 'struct db' by a schema name. */
    struct ovsdb_jsonrpc_server *jsonrpc;

    /* Command line + appctl configuration. */
    char **sync_from;
    char **sync_exclude;
    bool *is_backup;
    int *replication_probe_interval;
    int *relay_source_probe_interval;
    FILE *config_tmpfile;
};
|
|
|
|
|
static unixctl_cb_func ovsdb_server_add_remote;
|
|
|
|
|
static unixctl_cb_func ovsdb_server_remove_remote;
|
|
|
|
|
static unixctl_cb_func ovsdb_server_list_remotes;
|
|
|
|
|
|
2013-06-27 10:27:57 -07:00
|
|
|
|
static unixctl_cb_func ovsdb_server_add_database;
|
|
|
|
|
static unixctl_cb_func ovsdb_server_remove_database;
|
|
|
|
|
static unixctl_cb_func ovsdb_server_list_databases;
|
2022-06-24 11:55:58 +02:00
|
|
|
|
static unixctl_cb_func ovsdb_server_tlog_set;
|
|
|
|
|
static unixctl_cb_func ovsdb_server_tlog_list;
|
2013-06-27 10:27:57 -07:00
|
|
|
|
|
2017-12-31 21:15:58 -08:00
|
|
|
|
static void read_db(struct server_config *, struct db *);
|
|
|
|
|
static struct ovsdb_error *open_db(struct server_config *,
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
const char *filename,
|
|
|
|
|
const struct db_config *)
|
2017-12-31 21:15:58 -08:00
|
|
|
|
OVS_WARN_UNUSED_RESULT;
|
2017-12-15 11:14:55 -08:00
|
|
|
|
static void add_server_db(struct server_config *);
|
2017-12-31 21:15:58 -08:00
|
|
|
|
static void remove_db(struct server_config *, struct shash_node *db, char *);
|
|
|
|
|
static void close_db(struct server_config *, struct db *, char *);
|
2013-06-13 04:30:32 -07:00
|
|
|
|
|
ovsdb-server: Allow user-provided config files.
OVSDB server maintains a temporary file with the current database
configuration for the case it is restarted by a monitor process
after a crash. On startup the configuration from command line
arguments is stored there in a JSON format, also whenever user
changes the configuration with different UnixCtl commands, those
changes are getting added to the file. When restarted from the
crash it reads the configuration from the file and continues
with all the necessary remotes and databases.
This change allows it to be an external user-provided file that
OVSDB server will read the configuration from. The file can be
specified with a --config-file command line argument and it is
mutually exclusive with most other command line arguments that
set up remotes or databases, it is also mutually exclusive with
use of appctl commands that modify same configurations, e.g.
add/remove-db or add/remove-remote.
If the user wants to change the configuration of a running server,
they may change the file and call ovsdb-server/reload appctl.
OVSDB server will open a file, read and parse it, compare the
new configuration with the current one and adjust the running
configuration as needed. OVSDB server will try to keep existing
databases and connections intact, if the change can be applied
without disrupting the normal operation.
User-provided files are not trustworthy, so extra checks were
added to ensure a correct file format. If the file cannot be
correctly parsed, e.g. contains invalid JSON, no changes will
be applied and the server will keep using the previous
configuration until the next reload.
If config-file is provided for active-backup databases, permanent
disconnection of one of the backup databases no longer leads to
switching all other databases to 'active'. Only the disconnected
one will transition, since all of them have their own records in
the configuration file.
With this change, users can run all types of databases within
the same ovsdb-server process at the same time.
Simple configuration may look like this:

    {
      "remotes": {
        "punix:db.sock": {},
        "pssl:6641": {
          "inactivity-probe": 16000,
          "read-only": false,
          "role": "ovn-controller"
        }
      },
      "databases": {
        "conf.db": {},
        "sb.db": {
          "service-model": "active-backup",
          "backup": true,
          "source": {
            "tcp:127.0.0.1:6644": null
          }
        },
        "OVN_Northbound": {
          "service-model": "relay",
          "source": {
            "ssl:[fe:::1]:6642,ssl:[fe:::2]:6642": {
              "max-backoff": 8000,
              "inactivity-probe": 10000
            }
          }
        }
      }
    }
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:15 +01:00
static struct ovsdb_error *update_schema(struct ovsdb *,
                                         const struct ovsdb_schema *,
                                         const struct uuid *txnid,
                                         bool conversion_with_no_data,
                                         void *aux)
    OVS_WARN_UNUSED_RESULT;

ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database, and attach this configuration to
each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup.  Relays and A-B databases have a source, and
each source has its own set of JSON-RPC session options.  A-B
databases also have an indicator of being active or backup and
an optional list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used to restore the configuration
after an OVSDB crash.  For that, the save/load functions are
also updated.
This change is written in a generic way, assuming all the
databases can have different configurations, including the
service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to skip
databases that are not active-backup and should also report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it is
assumed to be standalone or clustered, and is determined from
the storage type while opening the database.  If the service
model is defined but doesn't match the actual storage type in
the database file, ovsdb-server will fail to open the database.
This should never happen with an internally generated config
file, but may happen in the future with user-provided
configuration files.  In that case the service model is used for
verification purposes only, if the administrator wants to assert
a particular model.
Since the database 'source' connections can't use the 'role' or
'read-only' options, a new flag is added to the corresponding
JSON parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00

static void parse_options(int argc, char *argvp[],
                          struct shash *db_conf, struct shash *remotes,
                          char **unixctl_pathp, char **run_command,
                          char **sync_from, char **sync_exclude,
                          bool *is_backup);
OVS_NO_RETURN static void usage(void);

static struct ovsdb_jsonrpc_options *add_remote(
    struct shash *remotes, const char *target,
    const struct ovsdb_jsonrpc_options *);
static void free_remotes(struct shash *remotes);

static char *reconfigure_remotes(struct ovsdb_jsonrpc_server *,
                                 const struct shash *all_dbs,
                                 struct shash *remotes);
static char *reconfigure_ssl(const struct shash *all_dbs);
static void report_error_if_changed(char *error, char **last_errorp);

static void update_remote_status(const struct ovsdb_jsonrpc_server *jsonrpc,
                                 const struct shash *remotes,
                                 struct shash *all_dbs);
static void update_server_status(struct shash *all_dbs);

static void save_config__(FILE *config_file, const struct shash *remotes,
                          const struct shash *db_conf,
                          const char *sync_from, const char *sync_exclude,
                          bool is_backup);
static void save_config(struct server_config *);
static bool load_config(FILE *config_file, struct shash *remotes,
                        struct shash *db_conf, char **sync_from,
                        char **sync_exclude, bool *is_backup);

static void
log_and_free_error(struct ovsdb_error *error)
{
    if (error) {
        char *s = ovsdb_error_to_string_free(error);
        VLOG_INFO("%s", s);
        free(s);
    }
}

static void
ovsdb_server_replication_remove_db(struct db *db)
{
    replication_remove_db(db->db);
    db->config->ab.backup = false;
}

static void
ovsdb_server_replication_run(struct server_config *config)
{
    struct shash_node *node;
    bool all_alive = true;

    replication_run();

    SHASH_FOR_EACH (node, config->all_dbs) {
        struct db *db = node->data;

        if (db->config->model == SM_ACTIVE_BACKUP && db->config->ab.backup
            && !replication_is_alive(db->db)) {
            ovsdb_server_replication_remove_db(db);
            all_alive = false;
        }
    }

    /* If one connection is broken, switch all databases to active,
     * if they are configured via the command line / appctl and so have
     * shared configuration. */
    if (!config_file_path && !all_alive && *config->is_backup) {
        *config->is_backup = false;

        SHASH_FOR_EACH (node, config->all_dbs) {
            struct db *db = node->data;

            if (db->config->model == SM_ACTIVE_BACKUP
                && db->config->ab.backup) {
                ovsdb_server_replication_remove_db(db);
            }
        }
    }
}

static void
main_loop(struct server_config *config,
          struct ovsdb_jsonrpc_server *jsonrpc, struct shash *all_dbs,
          struct unixctl_server *unixctl, struct shash *remotes,
          struct process *run_process, bool *exiting)
{
    char *remotes_error, *ssl_error;
    struct shash_node *node;
    long long int status_timer = LLONG_MIN;

    *exiting = false;
    ssl_error = NULL;
    remotes_error = NULL;
    while (!*exiting) {
        memory_run();
        if (memory_should_report()) {
            struct simap usage;

            simap_init(&usage);
            ovsdb_jsonrpc_server_get_memory_usage(jsonrpc, &usage);
            ovsdb_monitor_get_memory_usage(&usage);
            SHASH_FOR_EACH(node, all_dbs) {
                struct db *db = node->data;
                ovsdb_get_memory_usage(db->db, &usage);
            }
            memory_report(&usage);
            simap_destroy(&usage);
        }

        /* Run unixctl_server_run() before reconfigure_remotes() because
         * ovsdb-server/add-remote and ovsdb-server/remove-remote can change
         * the set of remotes that reconfigure_remotes() uses. */
        unixctl_server_run(unixctl);

        ovsdb_jsonrpc_server_set_read_only(jsonrpc, false);

        report_error_if_changed(
            reconfigure_remotes(jsonrpc, all_dbs, remotes),
            &remotes_error);
        report_error_if_changed(reconfigure_ssl(all_dbs), &ssl_error);
        ovsdb_jsonrpc_server_run(jsonrpc);

        ovsdb_server_replication_run(config);
        ovsdb_relay_run();

        SHASH_FOR_EACH_SAFE (node, all_dbs) {
            struct db *db = node->data;

            ovsdb_storage_run(db->db->storage);
            read_db(config, db);
            /* Run triggers after storage_run and read_db to make sure new raft
             * updates are utilized in current iteration. */
            if (ovsdb_trigger_run(db->db, time_msec())) {
                /* The message below is currently the only reason to disconnect
                 * all clients. */
                ovsdb_jsonrpc_server_reconnect(
                    jsonrpc, false,
                    xasprintf("committed %s database schema conversion",
                              db->db->name));
            }
            if (ovsdb_storage_is_dead(db->db->storage)) {
                VLOG_INFO("%s: removing database because storage disconnected "
                          "permanently", node->name);
                remove_db(config, node,
                          xasprintf("removing database %s because storage "
                                    "disconnected permanently", node->name));
ovsdb: Prepare snapshot JSON in a separate thread.
Conversion of the database data into a JSON object, serialization,
and destruction of that object are the heaviest operations during
database compaction.  If these operations are moved to a separate
thread, the main thread can continue processing database requests
in the meantime.
With this change, the compaction is split into 3 phases:
1. Initialization:
   - Create a copy of the database.
   - Remember the current database index.
   - Start a separate thread to convert the copy of the database
     into a serialized JSON object.
2. Wait:
   - Continue normal operation until the compaction thread is done.
   - Meanwhile, the compaction thread:
     * Converts the database copy to JSON.
     * Serializes the resulting JSON.
     * Destroys the original JSON object.
3. Finish:
   - Destroy the database copy.
   - Take the snapshot created by the thread.
   - Write it to disk.
The key to making this scheme fast is the ability to create a
shallow copy of the database.  This doesn't take much time,
allowing the thread to do most of the work.
The database copy is created and destroyed only by the main
thread, so there is no need for synchronization.
This solution reduces the time the main thread is blocked by
compaction by 80-90%.  For example, in ovn-heater tests with a
120-node density-heavy scenario, where compaction normally takes
5-6 seconds at the end of a test, measured compaction times were
all below 1 second with the change applied.  Also, note that
these measured times are the sum of phases 1 and 3, so actual
poll intervals are about half a second in this case.
Only implemented for raft storage for now.  The implementation
for standalone databases can be added later by using a file
offset as a database index and copying newly added changes from
the old file to the new one during ovsdb_log_replace().
Reported-at: https://bugzilla.redhat.com/2069108
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2022-07-01 01:34:07 +02:00
            } else if (!ovsdb_snapshot_in_progress(db->db)
                       && (ovsdb_storage_should_snapshot(db->db->storage) ||
                           ovsdb_snapshot_ready(db->db))) {
ovsdb-server: Reclaim heap memory after compaction.
Compaction happens at most once in 10 minutes. That is a big time
interval for a heavy loaded ovsdb-server in cluster mode.
In 10 minutes raft logs could grow up to tens of thousands of entries
with tens of gigabytes in total size.
While compaction cleans up raft log entries, the memory in many cases
is not returned to the system, but kept in the heap of running
ovsdb-server process, and it could stay in this condition for a really
long time. In the end one performance spike could lead to a fast
growth of the raft log and this memory will never (for a really long
time) be released to the system even if the database if empty.
Simple example how to reproduce with OVN sandbox:
1. make sandbox SANDBOXFLAGS='--nbdb-model=clustered --sbdb-model=clustered'
2. Run following script that creates 1 port group, adds 4000 acls and
removes all of that in the end:
# cat ../memory-test.sh
pg_name=my_port_group
export OVN_NB_DAEMON=$(ovn-nbctl --pidfile --detach --log-file -vsocket_util:off)
ovn-nbctl pg-add $pg_name
for i in $(seq 1 4000); do
echo "Iteration: $i"
ovn-nbctl --log acl-add $pg_name from-lport $i udp drop
done
ovn-nbctl acl-del $pg_name
ovn-nbctl pg-del $pg_name
ovs-appctl -t $(pwd)/sandbox/nb1 memory/show
ovn-appctl -t ovn-nbctl exit
---
3. Stopping one of Northbound DB servers:
ovs-appctl -t $(pwd)/sandbox/nb1 exit
Make sure that ovsdb-server didn't compact the database before
it was stopped. Now we have a db file on disk that contains
4000 fairly big transactions inside.
4. Try to start the same ovsdb-server with this file.
# cd sandbox && ovsdb-server <...> nb1.db
At this point ovsdb-server reads all the transactions from the db
file and performs them as fast as it can, one by one.
When it finishes, the raft log contains 4000 entries and
ovsdb-server consumes (on my system) ~13GB of memory while the
database is empty.  And libc will likely never return this memory
back to the system or, at least, will hold it for a really long time.
This patch adds a new command 'ovsdb-server/memory-trim-on-compaction'.
It's disabled by default, but once enabled, ovsdb-server will call
'malloc_trim(0)' after every successful compaction to try to return
unused heap memory back to the system.  This is glibc-specific, so we
need to detect the function's availability at build time.
It is disabled by default since it adds from 1% to 30% (depending on
the current state) to the snapshot creation time, and subsequent memory
allocations will likely require requests to the kernel, which might be
slower.  It could be enabled by default later if considered broadly
beneficial.
Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1888829
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2020-10-24 02:25:48 +02:00
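The trim step described above can be sketched as follows.  This is an illustrative stand-in, not the actual ovsdb-server code: it keys off the predefined `__GLIBC__` macro instead of the build-system `HAVE_MALLOC_TRIM` check, and `trim_heap_if_enabled` is a hypothetical name.

```c
#include <stdbool.h>
#include <stdlib.h>
#if defined(__GLIBC__)
#include <malloc.h>
#endif

/* After a successful compaction, optionally ask the allocator to give
 * unused heap pages back to the kernel.  Returns true if a trim was
 * actually attempted. */
static bool
trim_heap_if_enabled(bool trim_memory)
{
#if defined(__GLIBC__)
    if (trim_memory) {
        /* malloc_trim(0) releases free memory from the top of the heap
         * (and, in modern glibc, unused pages inside arenas). */
        malloc_trim(0);
        return true;
    }
#else
    (void) trim_memory;
#endif
    return false;
}
```

On glibc this attempts the trim; on other libcs it compiles to a no-op, mirroring the build-time detection the commit describes.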
|
|
|
|
log_and_free_error(ovsdb_snapshot(db->db, trim_memory));
|
2017-12-28 13:21:11 -08:00
|
|
|
|
}
|
2014-07-11 13:24:06 +02:00
|
|
|
|
}
|
|
|
|
|
if (run_process) {
|
|
|
|
|
process_run();
|
|
|
|
|
if (process_exited(run_process)) {
|
|
|
|
|
*exiting = true;
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
2021-05-27 15:29:03 +02:00
|
|
|
|
/* Update Manager status(es) every 2.5 seconds.  Don't update if we're
|
|
|
|
|
* recording or performing replay. */
|
|
|
|
|
if (status_timer == LLONG_MIN ||
|
|
|
|
|
(!ovs_replay_is_active() && time_msec() >= status_timer)) {
|
2016-06-24 17:13:06 -07:00
|
|
|
|
status_timer = time_msec() + 2500;
|
2014-07-11 13:24:06 +02:00
|
|
|
|
update_remote_status(jsonrpc, remotes, all_dbs);
|
|
|
|
|
}
|
|
|
|
|
|
2017-12-15 11:14:55 -08:00
|
|
|
|
update_server_status(all_dbs);
|
|
|
|
|
|
2014-07-11 13:24:06 +02:00
|
|
|
|
memory_wait();
|
2016-08-23 13:57:37 -07:00
|
|
|
|
|
2024-01-09 23:49:05 +01:00
|
|
|
|
replication_wait();
|
2021-06-01 23:27:36 +02:00
|
|
|
|
ovsdb_relay_wait();
|
|
|
|
|
|
2014-07-11 13:24:06 +02:00
|
|
|
|
ovsdb_jsonrpc_server_wait(jsonrpc);
|
|
|
|
|
unixctl_server_wait(unixctl);
|
|
|
|
|
SHASH_FOR_EACH(node, all_dbs) {
|
|
|
|
|
struct db *db = node->data;
|
|
|
|
|
ovsdb_trigger_wait(db->db, time_msec());
|
2017-12-31 21:15:58 -08:00
|
|
|
|
ovsdb_storage_wait(db->db->storage);
|
|
|
|
|
ovsdb_storage_read_wait(db->db->storage);
|
ovsdb: Prepare snapshot JSON in a separate thread.
Conversion of the database data into a JSON object, serialization,
and destruction of that object are the heaviest operations
during database compaction.  If these operations are moved
to a separate thread, the main thread can continue processing
database requests in the meantime.
With this change, the compaction is split in 3 phases:
1. Initialization:
- Create a copy of the database.
- Remember current database index.
- Start a separate thread to convert a copy of the database
into serialized JSON object.
2. Wait:
- Continue normal operation until compaction thread is done.
- Meanwhile, compaction thread:
* Convert database copy to JSON.
* Serialize resulted JSON.
* Destroy original JSON object.
3. Finish:
- Destroy the database copy.
- Take the snapshot created by the thread.
- Write on disk.
The key to making this scheme fast is the ability to create
a shallow copy of the database.  This doesn't take much
time, allowing the thread to do most of the work.
The database copy is created and destroyed only by the main thread,
so there is no need for synchronization.
This solution reduces the time the main thread is blocked
by compaction by 80-90%.  For example, in ovn-heater tests
with a 120-node density-heavy scenario, where compaction normally
takes 5-6 seconds at the end of a test, measured compaction
times were all below 1 second with the change applied.  Also,
note that these measured times are the sum of phases 1 and 3,
so actual poll intervals are about half a second in this case.
Only implemented for raft storage for now. The implementation
for standalone databases can be added later by using a file
offset as a database index and copying newly added changes
from the old file to a new one during ovsdb_log_replace().
Reported-at: https://bugzilla.redhat.com/2069108
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2022-07-01 01:34:07 +02:00
|
|
|
|
ovsdb_snapshot_wait(db->db);
|
2014-07-11 13:24:06 +02:00
|
|
|
|
}
|
|
|
|
|
if (run_process) {
|
|
|
|
|
process_wait(run_process);
|
|
|
|
|
}
|
|
|
|
|
if (*exiting) {
|
|
|
|
|
poll_immediate_wake();
|
|
|
|
|
}
|
2021-05-27 15:29:03 +02:00
|
|
|
|
if (!ovs_replay_is_active()) {
|
|
|
|
|
poll_timer_wait_until(status_timer);
|
|
|
|
|
}
|
2014-07-11 13:24:06 +02:00
|
|
|
|
poll_block();
|
|
|
|
|
if (should_service_stop()) {
|
|
|
|
|
*exiting = true;
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
2016-05-13 10:33:07 -07:00
|
|
|
|
free(remotes_error);
|
2014-07-11 13:24:06 +02:00
|
|
|
|
}
|
|
|
|
|
|
2024-01-09 23:49:04 +01:00
|
|
|
|
/* Parses the relay in the format 'relay:DB_NAME:<list of remotes>'.
|
|
|
|
|
* On success, returns 'true', 'name' is set to DB_NAME, 'remotes' to
|
|
|
|
|
* '<list of remotes>'.  The caller is responsible for freeing 'name' and
|
|
|
|
|
* 'remotes'. On failure, returns 'false'. */
|
|
|
|
|
static bool
|
|
|
|
|
parse_relay_args(const char *arg, char **name, char **remote)
|
|
|
|
|
{
|
|
|
|
|
const char *relay_prefix = "relay:";
|
|
|
|
|
const int relay_prefix_len = strlen(relay_prefix);
|
|
|
|
|
bool is_relay;
|
|
|
|
|
|
|
|
|
|
is_relay = !strncmp(arg, relay_prefix, relay_prefix_len);
|
|
|
|
|
if (!is_relay) {
|
|
|
|
|
return false;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
*remote = strchr(arg + relay_prefix_len, ':');
|
|
|
|
|
|
|
|
|
|
if (!*remote || (*remote)[0] == '\0') {
|
|
|
|
|
*remote = NULL;
|
|
|
|
|
return false;
|
|
|
|
|
}
|
|
|
|
|
arg += relay_prefix_len;
|
|
|
|
|
*name = xmemdup0(arg, *remote - arg);
|
|
|
|
|
*remote = xstrdup(*remote + 1);
|
|
|
|
|
return true;
|
|
|
|
|
}
|
|
|
|
|
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database, and attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup.  Relays and A-B databases have a source; each
source has its own set of JSON-RPC session options.  A-B databases
also have an indicator of whether they are active or backup and an
optional list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used to restore the config
after an OVSDB crash.  For that, the save/load functions are
also updated.
This change is written in a generic way, assuming all the databases
can have different configurations, including the service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use the 'role' or
'read-only' options, a new flag is added to the corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
static void
|
|
|
|
|
db_config_destroy(struct db_config *conf)
|
|
|
|
|
{
|
|
|
|
|
if (!conf) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
free(conf->source);
|
|
|
|
|
ovsdb_jsonrpc_options_free(conf->options);
|
|
|
|
|
free(conf->ab.sync_exclude);
|
|
|
|
|
free(conf);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static struct db_config *
|
|
|
|
|
db_config_clone(const struct db_config *c)
|
|
|
|
|
{
|
|
|
|
|
struct db_config *conf = xmemdup(c, sizeof *c);
|
|
|
|
|
|
|
|
|
|
conf->source = nullable_xstrdup(c->source);
|
|
|
|
|
if (c->options) {
|
|
|
|
|
conf->options = ovsdb_jsonrpc_options_clone(c->options);
|
|
|
|
|
}
|
|
|
|
|
conf->ab.sync_exclude = nullable_xstrdup(c->ab.sync_exclude);
|
|
|
|
|
|
|
|
|
|
return conf;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static struct ovsdb_jsonrpc_options *
|
|
|
|
|
get_jsonrpc_options(const char *target, enum service_model model)
|
|
|
|
|
{
|
|
|
|
|
struct ovsdb_jsonrpc_options *options;
|
|
|
|
|
|
|
|
|
|
options = ovsdb_jsonrpc_default_options(target);
|
|
|
|
|
if (model == SM_ACTIVE_BACKUP) {
|
2024-01-09 23:49:11 +01:00
|
|
|
|
options->rpc.probe_interval = REPLICATION_DEFAULT_PROBE_INTERVAL;
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
} else if (model == SM_RELAY) {
|
2024-01-09 23:49:11 +01:00
|
|
|
|
options->rpc.probe_interval = RELAY_SOURCE_DEFAULT_PROBE_INTERVAL;
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
return options;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void
|
|
|
|
|
add_database_config(struct shash *db_conf, const char *opt,
|
|
|
|
|
const char *sync_from, const char *sync_exclude,
|
|
|
|
|
bool active)
|
|
|
|
|
{
|
|
|
|
|
struct db_config *conf = xzalloc(sizeof *conf);
|
|
|
|
|
char *filename = NULL;
|
|
|
|
|
|
|
|
|
|
if (parse_relay_args(opt, &filename, &conf->source)) {
|
|
|
|
|
conf->model = SM_RELAY;
|
|
|
|
|
conf->options = get_jsonrpc_options(conf->source, conf->model);
|
|
|
|
|
} else if (sync_from) {
|
|
|
|
|
conf->model = SM_ACTIVE_BACKUP;
|
|
|
|
|
conf->source = xstrdup(sync_from);
|
|
|
|
|
conf->options = get_jsonrpc_options(conf->source, conf->model);
|
|
|
|
|
conf->ab.sync_exclude = nullable_xstrdup(sync_exclude);
|
|
|
|
|
conf->ab.backup = !active;
|
|
|
|
|
filename = xstrdup(opt);
|
|
|
|
|
} else {
|
|
|
|
|
conf->model = SM_UNDEFINED; /* We'll update once the file is open. */
|
|
|
|
|
filename = xstrdup(opt);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
conf = shash_replace_nocopy(db_conf, filename, conf);
|
|
|
|
|
if (conf) {
|
2025-02-10 00:56:20 -05:00
|
|
|
|
VLOG_WARN("Duplicate database configuration: %s", opt);
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
db_config_destroy(conf);
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void
|
|
|
|
|
free_database_configs(struct shash *db_conf)
|
|
|
|
|
{
|
|
|
|
|
struct shash_node *node;
|
|
|
|
|
|
|
|
|
|
SHASH_FOR_EACH (node, db_conf) {
|
|
|
|
|
db_config_destroy(node->data);
|
|
|
|
|
}
|
|
|
|
|
shash_clear(db_conf);
|
|
|
|
|
}
|
|
|
|
|
|
ovsdb-server: Allow user-provided config files.
OVSDB server maintains a temporary file with the current database
configuration in case it is restarted by a monitor process
after a crash.  On startup, the configuration from command line
arguments is stored there in JSON format; also, whenever the user
changes the configuration with different UnixCtl commands, those
changes are added to the file.  When restarted after a
crash, it reads the configuration from the file and continues
with all the necessary remotes and databases.
This change allows it to be an external user-provided file that
OVSDB server will read the configuration from. The file can be
specified with a --config-file command line argument and it is
mutually exclusive with most other command line arguments that
set up remotes or databases, it is also mutually exclusive with
use of appctl commands that modify same configurations, e.g.
add/remove-db or add/remove-remote.
If the user wants to change the configuration of a running server,
they may change the file and call the ovsdb-server/reload appctl.
OVSDB server will open the file, read and parse it, compare the
new configuration with the current one, and adjust the running
configuration as needed.  OVSDB server will try to keep existing
databases and connections intact if the change can be applied
without disrupting normal operation.
User-provided files are not trustworthy, so extra checks were
added to ensure a correct file format. If the file cannot be
correctly parsed, e.g. contains invalid JSON, no changes will
be applied and the server will keep using the previous
configuration until the next reload.
If config-file is provided for active-backup databases, permanent
disconnection of one of the backup databases no longer leads to
switching all other databases to 'active'. Only the disconnected
one will transition, since all of them have their own records in
the configuration file.
With this change, users can run all types of databases within
the same ovsdb-server process at the same time.
A simple configuration may look like this:
{
"remotes": {
"punix:db.sock": {},
"pssl:6641": {
"inactivity-probe": 16000,
"read-only": false,
"role": "ovn-controller"
}
},
"databases": {
"conf.db": {},
"sb.db": {
"service-model": "active-backup",
"backup": true,
"source": {
"tcp:127.0.0.1:6644": null
}
},
"OVN_Northbound": {
"service-model": "relay",
"source": {
"ssl:[fe:::1]:6642,ssl:[fe:::2]:6642": {
"max-backoff": 8000,
"inactivity-probe": 10000
}
}
}
}
}
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:15 +01:00
|
|
|
|
static bool
|
|
|
|
|
service_model_can_convert(enum service_model a, enum service_model b)
|
|
|
|
|
{
|
|
|
|
|
ovs_assert(a != SM_UNDEFINED);
|
|
|
|
|
|
|
|
|
|
if (a == b) {
|
|
|
|
|
return true;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if (b == SM_UNDEFINED) {
|
|
|
|
|
return a == SM_STANDALONE || a == SM_CLUSTERED;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
/* Conversion can happen only between standalone and active-backup. */
|
|
|
|
|
return (a == SM_STANDALONE && b == SM_ACTIVE_BACKUP)
|
|
|
|
|
|| (a == SM_ACTIVE_BACKUP && b == SM_STANDALONE);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void
|
|
|
|
|
database_update_config(struct server_config *server_config,
|
|
|
|
|
struct db *db, const struct db_config *new_conf)
|
|
|
|
|
{
|
|
|
|
|
struct db_config *conf = db->config;
|
|
|
|
|
enum service_model model = conf->model;
|
|
|
|
|
|
|
|
|
|
/* Stop replicating when transitioning to active or standalone. */
|
|
|
|
|
if (conf->model == SM_ACTIVE_BACKUP && conf->ab.backup
|
|
|
|
|
&& (new_conf->model == SM_STANDALONE || !new_conf->ab.backup)) {
|
|
|
|
|
ovsdb_server_replication_remove_db(db);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
db_config_destroy(conf);
|
|
|
|
|
conf = db->config = db_config_clone(new_conf);
|
|
|
|
|
|
|
|
|
|
if (conf->model == SM_UNDEFINED) {
|
|
|
|
|
/* We're operating on the same file, the model is the same. */
|
|
|
|
|
conf->model = model;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if (conf->model == SM_RELAY) {
|
|
|
|
|
ovsdb_relay_add_db(db->db, conf->source, update_schema, server_config,
|
|
|
|
|
&conf->options->rpc);
|
|
|
|
|
}
|
|
|
|
|
if (conf->model == SM_ACTIVE_BACKUP && conf->ab.backup) {
|
|
|
|
|
const struct uuid *server_uuid;
|
|
|
|
|
|
|
|
|
|
server_uuid = ovsdb_jsonrpc_server_get_uuid(server_config->jsonrpc);
|
|
|
|
|
replication_set_db(db->db, conf->source, conf->ab.sync_exclude,
|
|
|
|
|
server_uuid, &conf->options->rpc);
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static bool
|
|
|
|
|
reconfigure_databases(struct server_config *server_config,
|
|
|
|
|
struct shash *db_conf)
|
|
|
|
|
{
|
|
|
|
|
struct db_config *cur_conf, *new_conf;
|
|
|
|
|
struct shash_node *node, *conf_node;
|
|
|
|
|
bool res = true;
|
|
|
|
|
struct db *db;
|
|
|
|
|
|
|
|
|
|
/* Remove databases that are no longer in the configuration or have
|
|
|
|
|
* incompatible configuration. Update compatible ones. */
|
|
|
|
|
SHASH_FOR_EACH_SAFE (node, server_config->all_dbs) {
|
|
|
|
|
db = node->data;
|
|
|
|
|
|
|
|
|
|
if (node->name[0] == '_') {
|
|
|
|
|
/* Skip internal databases. */
|
|
|
|
|
continue;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
cur_conf = db->config;
|
|
|
|
|
conf_node = shash_find(db_conf, db->filename);
|
|
|
|
|
new_conf = conf_node ? conf_node->data : NULL;
|
|
|
|
|
|
|
|
|
|
if (!new_conf) {
|
|
|
|
|
remove_db(server_config, node,
|
|
|
|
|
xasprintf("database %s removed from configuration",
|
|
|
|
|
node->name));
|
|
|
|
|
continue;
|
|
|
|
|
}
|
|
|
|
|
if (!service_model_can_convert(cur_conf->model, new_conf->model)) {
|
|
|
|
|
remove_db(server_config, node,
|
|
|
|
|
xasprintf("service model changed for database %s",
|
|
|
|
|
node->name));
|
|
|
|
|
continue;
|
|
|
|
|
}
|
|
|
|
|
database_update_config(server_config, db, new_conf);
|
|
|
|
|
|
|
|
|
|
db_config_destroy(new_conf);
|
|
|
|
|
shash_delete(db_conf, conf_node);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
/* Create new databases. */
|
|
|
|
|
SHASH_FOR_EACH (node, db_conf) {
|
|
|
|
|
struct ovsdb_error *error = open_db(server_config,
|
|
|
|
|
node->name, node->data);
|
|
|
|
|
if (error) {
|
|
|
|
|
char *s = ovsdb_error_to_string_free(error);
|
|
|
|
|
|
|
|
|
|
VLOG_WARN("failed to open database '%s': %s", node->name, s);
|
|
|
|
|
free(s);
|
|
|
|
|
res = false;
|
|
|
|
|
}
|
|
|
|
|
db_config_destroy(node->data);
|
|
|
|
|
}
|
|
|
|
|
shash_clear(db_conf);
|
|
|
|
|
|
|
|
|
|
return res;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static bool
|
|
|
|
|
reconfigure_ovsdb_server(struct server_config *server_config)
|
|
|
|
|
{
|
|
|
|
|
char *sync_from = NULL, *sync_exclude = NULL;
|
|
|
|
|
bool is_backup = false;
|
|
|
|
|
struct shash remotes;
|
|
|
|
|
struct shash db_conf;
|
|
|
|
|
bool res = true;
|
|
|
|
|
|
|
|
|
|
FILE *file = NULL;
|
|
|
|
|
|
|
|
|
|
if (config_file_path) {
|
|
|
|
|
file = fopen(config_file_path, "r+b");
|
|
|
|
|
if (!file) {
|
|
|
|
|
VLOG_ERR("failed to open configuration file '%s': %s",
|
|
|
|
|
config_file_path, ovs_strerror(errno));
|
|
|
|
|
return false;
|
|
|
|
|
} else {
|
|
|
|
|
VLOG_INFO("loading configuration from '%s'", config_file_path);
|
|
|
|
|
}
|
|
|
|
|
} else {
|
|
|
|
|
file = server_config->config_tmpfile;
|
|
|
|
|
}
|
|
|
|
|
ovs_assert(file);
|
|
|
|
|
|
|
|
|
|
shash_init(&remotes);
|
|
|
|
|
shash_init(&db_conf);
|
|
|
|
|
|
|
|
|
|
if (!load_config(file, &remotes, &db_conf,
|
|
|
|
|
&sync_from, &sync_exclude, &is_backup)) {
|
|
|
|
|
if (config_file_path) {
|
|
|
|
|
VLOG_WARN("failed to load configuration from %s",
|
|
|
|
|
config_file_path);
|
|
|
|
|
} else {
|
|
|
|
|
VLOG_FATAL("failed to load configuration from a temporary file");
|
|
|
|
|
}
|
|
|
|
|
res = false;
|
|
|
|
|
goto exit_close;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
/* Parsing was successful. Update the server configuration. */
|
|
|
|
|
shash_swap(server_config->remotes, &remotes);
|
|
|
|
|
free(*server_config->sync_from);
|
|
|
|
|
*server_config->sync_from = sync_from;
|
|
|
|
|
free(*server_config->sync_exclude);
|
|
|
|
|
*server_config->sync_exclude = sync_exclude;
|
|
|
|
|
*server_config->is_backup = is_backup;
|
|
|
|
|
|
|
|
|
|
if (!reconfigure_databases(server_config, &db_conf)) {
|
|
|
|
|
VLOG_WARN("failed to configure databases");
|
|
|
|
|
res = false;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
char *error = reconfigure_remotes(server_config->jsonrpc,
|
|
|
|
|
server_config->all_dbs,
|
|
|
|
|
server_config->remotes);
|
|
|
|
|
if (error) {
|
|
|
|
|
VLOG_WARN("failed to configure remotes: %s", error);
|
|
|
|
|
res = false;
|
|
|
|
|
} else {
|
|
|
|
|
error = reconfigure_ssl(server_config->all_dbs);
|
|
|
|
|
if (error) {
|
treewide: Refer to SSL configuration as SSL/TLS.
The SSL protocol family is not actually being used or supported in OVS.
What we use is actually TLS.
Terms "SSL" and "TLS" are often used interchangeably in modern
software and refer to the same thing, which is normally just TLS.
Let's replace "SSL" with "SSL/TLS" in documentation and user-visible
messages, where it makes sense. This may make it more clear what
is meant for a less experienced user that may look for TLS support
in OVS and not find much.
We're not changing any actual code, because, for example, most of
OpenSSL APIs are using just SSL, for historical reasons. And our
database is using "SSL" table. We may consider migrating to "TLS"
naming for user-visible configuration like command line arguments
and database names, but that will require extra work on making sure
upgrades can still work. In general, a slightly more clear
documentation should be enough for now, especially since term SSL
is still widely used in the industry.
"SSL/TLS" is chosen over "TLS/SSL" simply because our user-visible
configuration knobs are using "SSL" naming, e.g. '--ssl-ciphers'
or 'ovs-vsctl set-ssl'.  So, it might be less confusing this way.
We may switch that, if we decide on re-working the user-visible
commands towards "TLS" naming, or providing both alternatives.
Some other projects did similar changes. For example, the python ssl
library is now using "TLS/SSL" in the documentation whenever possible.
Same goes for OpenSSL itself.
Acked-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-12-09 17:38:45 +01:00
|
|
|
|
VLOG_WARN("failed to configure SSL/TLS: %s", error);
|
ovsdb-server: Allow user-provided config files.
2024-01-09 23:49:15 +01:00
|
|
|
|
res = false;
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
free(error);
|
|
|
|
|
|
|
|
|
|
exit_close:
|
|
|
|
|
if (config_file_path) {
|
|
|
|
|
fclose(file);
|
|
|
|
|
}
|
|
|
|
|
free_remotes(&remotes);
|
|
|
|
|
free_database_configs(&db_conf);
|
|
|
|
|
shash_destroy(&remotes);
|
|
|
|
|
shash_destroy(&db_conf);
|
|
|
|
|
return res;
|
|
|
|
|
}
|
|
|
|
|
|
2009-11-04 15:11:44 -08:00
|
|
|
|
int
|
|
|
|
|
main(int argc, char *argv[])
|
|
|
|
|
{
|
2009-11-17 16:02:38 -08:00
|
|
|
|
char *unixctl_path = NULL;
|
2010-02-12 11:17:17 -08:00
|
|
|
|
char *run_command = NULL;
|
2009-11-04 15:11:44 -08:00
|
|
|
|
struct unixctl_server *unixctl;
|
|
|
|
|
struct ovsdb_jsonrpc_server *jsonrpc;
|
2010-02-12 11:17:17 -08:00
|
|
|
|
struct process *run_process;
|
2009-11-17 16:02:38 -08:00
|
|
|
|
bool exiting;
|
2009-11-04 15:11:44 -08:00
|
|
|
|
int retval;
|
ovsdb-server: Allow user-provided config files.
2024-01-09 23:49:15 +01:00
|
|
|
|
FILE *config_tmpfile = NULL;
|
2013-06-13 04:30:32 -07:00
|
|
|
|
struct shash all_dbs;
|
2022-03-23 12:56:17 +01:00
|
|
|
|
struct shash_node *node;
|
2020-01-07 10:24:48 +05:30
|
|
|
|
int replication_probe_interval = REPLICATION_DEFAULT_PROBE_INTERVAL;
|
2023-07-17 11:06:53 +02:00
|
|
|
|
int relay_source_probe_interval = RELAY_SOURCE_DEFAULT_PROBE_INTERVAL;
|
2024-01-09 23:49:03 +01:00
|
|
|
|
struct sset db_filenames = SSET_INITIALIZER(&db_filenames);
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
struct shash db_conf = SHASH_INITIALIZER(&db_conf);
|
2024-01-09 23:49:03 +01:00
|
|
|
|
struct shash remotes = SHASH_INITIALIZER(&remotes);
|
|
|
|
|
char *sync_from = NULL, *sync_exclude = NULL;
|
|
|
|
|
bool is_backup;
|
2012-09-07 10:07:03 -07:00
|
|
|
|
|
ovsdb-server: Allow user-provided config files.
OVSDB server maintains a temporary file with the current database
configuration for the case it is restarted by a monitor process
after a crash. On startup the configuration from command line
arguments is stored there in a JSON format, also whenever user
changes the configuration with different UnixCtl commands, those
changes are getting added to the file. When restarted from the
crash it reads the configuration from the file and continues
with all the necessary remotes and databases.
This change allows it to be an external user-provided file that
OVSDB server will read the configuration from. The file can be
specified with a --config-file command line argument and it is
mutually exclusive with most other command line arguments that
set up remotes or databases, it is also mutually exclusive with
use of appctl commands that modify same configurations, e.g.
add/remove-db or add/remove-remote.
If the user wants to change the configuration of a running server,
they may change the file and call ovsdb-server/reload appctl.
OVSDB server will open a file, read and parse it, compare the
new configuration with the current one and adjust the running
configuration as needed. OVSDB server will try to keep existing
databases and connections intact, if the change can be applied
without disrupting the normal operation.
User-provided files are not trustworthy, so extra checks were
added to ensure a correct file format. If the file cannot be
correctly parsed, e.g. contains invalid JSON, no changes will
be applied and the server will keep using the previous
configuration until the next reload.
If config-file is provided for active-backup databases, permanent
disconnection of one of the backup databases no longer leads to
switching all other databases to 'active'. Only the disconnected
one will transition, since all of them have their own records in
the configuration file.
With this change, users can run all types of databases within
the same ovsdb-server process at the same time.
Simple configuration may look like this:
{
"remotes": {
"punix:db.sock": {},
"pssl:6641": {
"inactivity-probe": 16000,
"read-only": false,
"role": "ovn-controller"
}
},
"databases": {
"conf.db": {},
"sb.db": {
"service-model": "active-backup",
"backup": true,
"source": {
"tcp:127.0.0.1:6644": null
}
},
"OVN_Northbound": {
"service-model": "relay",
"source": {
"ssl:[fe:::1]:6642,ssl:[fe:::2]:6642": {
"max-backoff": 8000,
"inactivity-probe": 10000
}
}
}
}
}
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:15 +01:00
|
|
|
|
struct server_config server_config = {
|
|
|
|
|
.remotes = &remotes,
|
|
|
|
|
.all_dbs = &all_dbs,
|
|
|
|
|
.sync_from = &sync_from,
|
|
|
|
|
.sync_exclude = &sync_exclude,
|
|
|
|
|
.is_backup = &is_backup,
|
|
|
|
|
.replication_probe_interval = &replication_probe_interval,
|
|
|
|
|
.relay_source_probe_interval = &relay_source_probe_interval,
|
|
|
|
|
};

    ovs_cmdl_proctitle_init(argc, argv);
    set_program_name(argv[0]);
    service_start(&argc, &argv);
    fatal_ignore_sigpipe();
    process_init();
    dns_resolve_init(true);

    bool active = false;
    parse_options(argc, argv, &db_conf, &remotes, &unixctl_path,
                  &run_command, &sync_from, &sync_exclude, &active);
    is_backup = sync_from && !active;

    daemon_become_new_user(false, false);

    if (!config_file_path) {
        /* Create and initialize 'config_tmpfile' as a temporary file to hold
         * ovsdb-server's most basic configuration, and then save our initial
         * configuration to it.  When --monitor is used, this preserves the
         * effects of ovs-appctl commands such as ovsdb-server/add-remote
         * (which saves the new configuration) across crashes. */
        config_tmpfile = tmpfile();
        if (!config_tmpfile) {
            ovs_fatal(errno, "failed to create temporary file");
        }
        server_config.config_tmpfile = config_tmpfile;
        save_config__(config_tmpfile, &remotes, &db_conf, sync_from,
                      sync_exclude, is_backup);
    }

    free_remotes(&remotes);
    free_database_configs(&db_conf);

    daemonize_start(false, false);

    perf_counters_init();

    /* Start ovsdb jsonrpc server.  Both read and write transactions are
     * allowed by default, individual remotes and databases will be configured
     * as read-only, if necessary. */
    jsonrpc = ovsdb_jsonrpc_server_create(false);
    server_config.jsonrpc = jsonrpc;

    shash_init(&all_dbs);
    add_server_db(&server_config);

    if (!reconfigure_ovsdb_server(&server_config)) {
        ovs_fatal(0, "server configuration failed");
    }

    retval = unixctl_server_create(unixctl_path, &unixctl);
    if (retval) {
        exit(EXIT_FAILURE);
    }

    if (run_command) {
        char *run_argv[4];

        run_argv[0] = "/bin/sh";
        run_argv[1] = "-c";
        run_argv[2] = run_command;
        run_argv[3] = NULL;

        retval = process_start(run_argv, &run_process);
        if (retval) {
            ovs_fatal(retval, "%s: process failed to start", run_command);
        }
    } else {
        run_process = NULL;
    }

    daemonize_complete();

    if (!run_command) {
        /* ovsdb-server is usually a long-running process, in which case it
         * makes plenty of sense to log the version, but --run makes
         * ovsdb-server more like a command-line tool, so skip it. */
        VLOG_INFO("%s", ovs_get_program_version());
    }

    unixctl_command_register("exit", "", 0, 0, ovsdb_server_exit, &exiting);
    unixctl_command_register("ovsdb-server/compact", "", 0, 1,
                             ovsdb_server_compact, &all_dbs);
    unixctl_command_register("ovsdb-server/memory-trim-on-compaction",
                             "on|off", 1, 1,
                             ovsdb_server_memory_trim_on_compaction, NULL);
|
2011-12-02 15:29:19 -08:00
|
|
|
|
unixctl_command_register("ovsdb-server/reconnect", "", 0, 0,
|
2011-09-26 14:59:35 -07:00
|
|
|
|
ovsdb_server_reconnect, jsonrpc);
|
2024-01-09 23:49:08 +01:00
|
|
|
|
unixctl_command_register("ovsdb-server/reload", "", 0, 0,
|
|
|
|
|
ovsdb_server_reload, &server_config);
|
2009-11-17 16:02:38 -08:00
|
|
|
|
|
2013-04-10 09:34:49 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/add-remote", "REMOTE", 1, 1,
|
2013-06-27 10:27:57 -07:00
|
|
|
|
ovsdb_server_add_remote, &server_config);
|
2013-04-10 09:34:49 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/remove-remote", "REMOTE", 1, 1,
|
2013-06-27 10:27:57 -07:00
|
|
|
|
ovsdb_server_remove_remote, &server_config);
|
2013-04-10 09:34:49 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/list-remotes", "", 0, 0,
|
|
|
|
|
ovsdb_server_list_remotes, &remotes);
|
|
|
|
|
|
2013-06-27 10:27:57 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/add-db", "DB", 1, 1,
|
|
|
|
|
ovsdb_server_add_database, &server_config);
|
|
|
|
|
unixctl_command_register("ovsdb-server/remove-db", "DB", 1, 1,
|
|
|
|
|
ovsdb_server_remove_database, &server_config);
|
|
|
|
|
unixctl_command_register("ovsdb-server/list-dbs", "", 0, 0,
|
|
|
|
|
ovsdb_server_list_databases, &all_dbs);
|
2022-06-24 11:55:58 +02:00
|
|
|
|
unixctl_command_register("ovsdb-server/tlog-set", "DB:TABLE on|off",
|
|
|
|
|
2, 2, ovsdb_server_tlog_set, &all_dbs);
|
|
|
|
|
unixctl_command_register("ovsdb-server/tlog-list", "",
|
|
|
|
|
0, 0, ovsdb_server_tlog_list, &all_dbs);
|
2015-03-21 00:00:49 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/perf-counters-show", "", 0, 0,
|
|
|
|
|
ovsdb_server_perf_counters_show, NULL);
|
|
|
|
|
unixctl_command_register("ovsdb-server/perf-counters-clear", "", 0, 0,
|
|
|
|
|
ovsdb_server_perf_counters_clear, NULL);
|
2016-08-23 04:05:11 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/set-active-ovsdb-server", "", 1, 1,
|
|
|
|
|
ovsdb_server_set_active_ovsdb_server,
|
|
|
|
|
&server_config);
|
2016-07-28 11:35:01 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/get-active-ovsdb-server", "", 0, 0,
|
2016-08-23 04:05:11 -07:00
|
|
|
|
ovsdb_server_get_active_ovsdb_server,
|
|
|
|
|
&server_config);
|
2016-07-29 14:39:29 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/connect-active-ovsdb-server", "",
|
|
|
|
|
0, 0, ovsdb_server_connect_active_ovsdb_server,
|
2016-08-23 04:05:11 -07:00
|
|
|
|
&server_config);
|
2016-07-29 14:39:29 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/disconnect-active-ovsdb-server", "",
|
|
|
|
|
0, 0, ovsdb_server_disconnect_active_ovsdb_server,
|
2016-08-23 04:05:11 -07:00
|
|
|
|
&server_config);
|
2020-01-07 10:24:48 +05:30
|
|
|
|
unixctl_command_register(
|
|
|
|
|
"ovsdb-server/set-active-ovsdb-server-probe-interval", "", 1, 1,
|
|
|
|
|
ovsdb_server_set_active_ovsdb_server_probe_interval, &server_config);
|
2023-07-17 11:06:53 +02:00
|
|
|
|
unixctl_command_register(
|
|
|
|
|
"ovsdb-server/set-relay-source-probe-interval", "", 1, 1,
|
|
|
|
|
ovsdb_server_set_relay_source_interval, &server_config);
|
2016-08-23 04:05:11 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/set-sync-exclude-tables", "",
|
|
|
|
|
0, 1, ovsdb_server_set_sync_exclude_tables,
|
|
|
|
|
&server_config);
|
|
|
|
|
unixctl_command_register("ovsdb-server/get-sync-exclude-tables", "",
|
|
|
|
|
0, 0, ovsdb_server_get_sync_exclude_tables,
|
2024-01-09 23:49:05 +01:00
|
|
|
|
&server_config);
|
2016-08-23 04:05:11 -07:00
|
|
|
|
unixctl_command_register("ovsdb-server/sync-status", "",
|
|
|
|
|
0, 0, ovsdb_server_get_sync_status,
|
|
|
|
|
&server_config);
|
2020-08-03 17:05:28 +02:00
|
|
|
|
unixctl_command_register("ovsdb-server/get-db-storage-status", "DB", 1, 1,
|
|
|
|
|
ovsdb_server_get_db_storage_status,
|
|
|
|
|
&server_config);
|
2016-07-19 14:54:51 -07:00
|
|
|
|
|
2015-10-20 12:50:23 -07:00
|
|
|
|
/* Simulate the behavior of OVS release prior to version 2.5 that
|
2016-07-18 11:45:55 +03:00
|
|
|
|
* does not support the monitor_cond method. */
|
|
|
|
|
unixctl_command_register("ovsdb-server/disable-monitor-cond", "", 0, 0,
|
|
|
|
|
ovsdb_server_disable_monitor_cond, jsonrpc);
|
2015-10-20 12:50:23 -07:00
|
|
|
|
|
2017-12-31 21:15:58 -08:00
|
|
|
|
main_loop(&server_config, jsonrpc, &all_dbs, unixctl, &remotes,
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
run_process, &exiting);
|
2012-05-08 15:44:21 -07:00
|
|
|
|
|
2022-03-23 12:56:17 +01:00
|
|
|
|
SHASH_FOR_EACH_SAFE (node, &all_dbs) {
|
2013-06-13 04:30:32 -07:00
|
|
|
|
struct db *db = node->data;
|
2018-06-15 15:11:09 -07:00
|
|
|
|
close_db(&server_config, db, NULL);
|
2014-07-02 15:00:16 -07:00
|
|
|
|
shash_delete(&all_dbs, node);
|
2012-09-07 10:07:03 -07:00
|
|
|
|
}
|
2017-12-31 21:15:58 -08:00
|
|
|
|
ovsdb_jsonrpc_server_destroy(jsonrpc);
|
2015-10-21 23:58:10 -07:00
|
|
|
|
shash_destroy(&all_dbs);
|
2024-01-09 23:49:03 +01:00
|
|
|
|
free_remotes(&remotes);
|
|
|
|
|
shash_destroy(&remotes);
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
free_database_configs(&db_conf);
|
|
|
|
|
shash_destroy(&db_conf);
|
2016-08-23 04:05:11 -07:00
|
|
|
|
free(sync_from);
|
|
|
|
|
free(sync_exclude);
|
2010-02-02 14:41:00 -08:00
|
|
|
|
unixctl_server_destroy(unixctl);
|
2016-08-16 14:56:19 -07:00
|
|
|
|
replication_destroy();
|
2024-01-09 23:49:08 +01:00
|
|
|
|
free(config_file_path);
|
2009-11-04 15:11:44 -08:00
|
|
|
|
|
2010-02-12 11:17:17 -08:00
|
|
|
|
if (run_process && process_exited(run_process)) {
|
|
|
|
|
int status = process_status(run_process);
|
|
|
|
|
if (status) {
|
|
|
|
|
ovs_fatal(0, "%s: child exited, %s",
|
|
|
|
|
run_command, process_status_msg(status));
|
|
|
|
|
}
|
|
|
|
|
}
|
2022-03-10 23:33:17 +01:00
|
|
|
|
dns_resolve_destroy();
|
2015-03-21 00:00:49 -07:00
|
|
|
|
perf_counters_destroy();
|
2024-01-16 22:52:05 +00:00
|
|
|
|
cooperative_multitasking_destroy();
|
2014-01-17 10:43:03 -08:00
|
|
|
|
service_stop();
|
2009-11-04 15:11:44 -08:00
|
|
|
|
return 0;
|
|
|
|
|
}
|
|
|
|
|
|
2014-03-25 15:51:23 -07:00
|
|
|
|
/* Returns true if 'filename' is known to be already open as a database,
|
|
|
|
|
* false if not.
|
|
|
|
|
*
|
|
|
|
|
* "False negatives" are possible. */
|
|
|
|
|
static bool
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
is_already_open(struct server_config *server_config OVS_UNUSED,
|
2014-03-25 15:51:23 -07:00
|
|
|
|
const char *filename OVS_UNUSED)
|
|
|
|
|
{
|
|
|
|
|
#ifndef _WIN32
|
|
|
|
|
struct stat s;
|
|
|
|
|
|
|
|
|
|
if (!stat(filename, &s)) {
|
|
|
|
|
struct shash_node *node;
|
|
|
|
|
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
SHASH_FOR_EACH (node, server_config->all_dbs) {
|
2014-03-25 15:51:23 -07:00
|
|
|
|
struct db *db = node->data;
|
|
|
|
|
struct stat s2;
|
|
|
|
|
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
if (db->config->model != SM_RELAY
|
|
|
|
|
&& !stat(db->filename, &s2)
|
2014-03-25 15:51:23 -07:00
|
|
|
|
&& s.st_dev == s2.st_dev
|
|
|
|
|
&& s.st_ino == s2.st_ino) {
|
|
|
|
|
return true;
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
#endif /* !_WIN32 */
|
|
|
|
|
|
|
|
|
|
return false;
|
|
|
|
|
}

static void
close_db(struct server_config *server_config, struct db *db, char *comment)
{
    if (db) {
        ovsdb_jsonrpc_server_remove_db(server_config->jsonrpc,
                                       db->db, comment);
        if (db->config->model == SM_RELAY) {
            ovsdb_relay_del_db(db->db);
        }
        if (db->config->model == SM_ACTIVE_BACKUP
            && db->config->ab.backup) {
            ovsdb_server_replication_remove_db(db);
        }
        db_config_destroy(db->config);
        ovsdb_destroy(db->db);
        free(db->filename);
        free(db);
    } else {
        free(comment);
    }
}

static struct ovsdb_error * OVS_WARN_UNUSED_RESULT
update_schema(struct ovsdb *db,
              const struct ovsdb_schema *schema,
              const struct uuid *txnid,
              bool conversion_with_no_data, void *aux)
{
    struct server_config *config = aux;

    if (!db->schema || strcmp(schema->version, db->schema->version)) {
        ovsdb_jsonrpc_server_reconnect(
            config->jsonrpc, false,
            (db->schema
             ? xasprintf("database %s schema changed", db->name)
             : xasprintf("database %s connected to storage", db->name)));
    }

    if (db->schema && conversion_with_no_data) {
        struct ovsdb *new_db = NULL;
        struct ovsdb_error *error;

        /* If conversion was triggered by the current process, we might
         * already have converted version of a database. */
        new_db = ovsdb_trigger_find_and_steal_converted_db(db, txnid);
        if (!new_db) {
            /* No luck.  Converting. */
            error = ovsdb_convert(db, schema, &new_db);
            if (error) {
                /* Should never happen, because conversion should have been
                 * checked before writing the schema to the storage. */
                return error;
            }
        }
        ovsdb_replace(db, new_db);
    } else {
        ovsdb_replace(db, ovsdb_create(ovsdb_schema_clone(schema), NULL));
    }

    /* Force update to schema in _Server database. */
    struct db *dbp = shash_find_data(config->all_dbs, db->name);
    if (dbp) {
        dbp->row_uuid = UUID_ZERO;
    }
    return NULL;
}

static struct ovsdb_error * OVS_WARN_UNUSED_RESULT
parse_txn(struct server_config *config, struct db *db,
          const struct ovsdb_schema *schema, const struct json *txn_json,
          const struct uuid *txnid)
{
    struct ovsdb_error *error = NULL;
    struct ovsdb_txn *txn = NULL;

    if (schema) {
        /* We're replacing the schema (and the data).  If transaction includes
         * replacement data, destroy the database (first grabbing its storage),
         * then replace it with the new schema.  If not, it's a conversion
         * without data specified.  In this case, convert the current database
         * to a new schema instead.
         *
         * Only clustered database schema changes and snapshot installs
         * go through this path.
         */
        ovs_assert(ovsdb_storage_is_clustered(db->db->storage));

        error = ovsdb_schema_check_for_ephemeral_columns(schema);
        if (error) {
            return error;
        }

        error = update_schema(db->db, schema, txnid, txn_json == NULL, config);
        if (error) {
            return error;
        }
    }

    if (txn_json) {
        if (!db->db->schema) {
            return ovsdb_error(NULL, "%s: data without schema", db->filename);
        }

        error = ovsdb_file_txn_from_json(db->db, txn_json, false, &txn);
        if (error) {
            ovsdb_storage_unread(db->db->storage);
            return error;
        }
    } else if (schema) {
        /* We just performed conversion without data.  Transaction history
         * was destroyed.  Commit a dummy transaction to set the txnid. */
        txn = ovsdb_txn_create(db->db);
    }

    if (txn) {
        ovsdb_txn_set_txnid(txnid, txn);
        error = ovsdb_txn_replay_commit(txn);
        if (!error && !uuid_is_zero(txnid)) {
            db->db->prereq = *txnid;
        }
        ovsdb_txn_history_run(db->db);
    }
    return error;
}

static void
read_db(struct server_config *config, struct db *db)
{
    struct ovsdb_error *error;
    for (;;) {
        struct ovsdb_schema *schema;
        struct json *txn_json;
        struct uuid txnid;
        error = ovsdb_storage_read(db->db->storage, &schema, &txn_json,
                                   &txnid);
        if (error) {
            break;
        } else if (!schema && !txn_json) {
            /* End of file. */
            return;
        } else {
            error = parse_txn(config, db, schema, txn_json, &txnid);
            json_destroy(txn_json);
            ovsdb_schema_destroy(schema);
            if (error) {
                break;
            }
        }
    }

    /* Log error but otherwise ignore it.  Probably the database just
     * got truncated due to power failure etc. and we should use its
     * current contents. */
    char *msg = ovsdb_error_to_string_free(error);
    VLOG_ERR("%s", msg);
    free(msg);
}
|
|
|
|
|
|
2017-12-15 11:14:55 -08:00
|
|
|
|
static void
add_db(struct server_config *config, struct db *db)
{
    db->row_uuid = UUID_ZERO;
    shash_add_assert(config->all_dbs, db->db->name, db);
}

static struct ovsdb_error * OVS_WARN_UNUSED_RESULT
open_db(struct server_config *server_config,
        const char *filename, const struct db_config *conf)
{
    struct ovsdb_storage *storage;
    struct ovsdb_error *error;

    if (conf->model != SM_RELAY) {
        /* If we know that the file is already open, return a good error
         * message.  Otherwise, if the file is open, we'll fail later on with
         * a harder to interpret file locking error. */
        if (is_already_open(server_config, filename)) {
            return ovsdb_error(NULL, "%s: already open", filename);
        }

        error = ovsdb_storage_open(filename, true, &storage);
        if (error) {
            return error;
        }
    } else {
        storage = ovsdb_storage_create_unbacked(filename);
    }

    enum service_model model = conf->model;
    if (model == SM_UNDEFINED || model == SM_STANDALONE
        || model == SM_CLUSTERED) {
        /* Check the actual service model from the storage. */
        model = ovsdb_storage_is_clustered(storage)
                ? SM_CLUSTERED : SM_STANDALONE;
    }
    if (conf->model != SM_UNDEFINED && conf->model != model) {
        ovsdb_storage_close(storage);
        return ovsdb_error(NULL, "%s: database is %s and not %s",
                           filename, service_model_to_string(model),
                           service_model_to_string(conf->model));
    }

    struct ovsdb_schema *schema;
    if (model == SM_RELAY || model == SM_CLUSTERED) {
        schema = NULL;
    } else {
        struct json *txn_json;

        error = ovsdb_storage_read(storage, &schema, &txn_json, NULL);
        if (error) {
            ovsdb_storage_close(storage);
            return error;
        }
        ovs_assert(schema && !txn_json);
    }

    struct db *db = xzalloc(sizeof *db);
    db->filename = xstrdup(filename);
    db->config = db_config_clone(conf);
    db->config->model = model;
    db->db = ovsdb_create(schema, storage);
    ovsdb_jsonrpc_server_add_db(server_config->jsonrpc, db->db);

    /* Enable txn history for clustered and relay modes.  It is not enabled
     * for other modes for now, since txn id is available for clustered and
     * relay modes only. */
    ovsdb_txn_history_init(db->db, model == SM_RELAY || model == SM_CLUSTERED);

    read_db(server_config, db);

    error = (db->db->name[0] == '_'
             ? ovsdb_error(NULL, "%s: names beginning with \"_\" are reserved",
                           db->db->name)
             : shash_find(server_config->all_dbs, db->db->name)
             ? ovsdb_error(NULL, "%s: duplicate database name", db->db->name)
             : NULL);
    if (error) {
        char *error_s = ovsdb_error_to_string(error);
        close_db(server_config, db,
                 xasprintf("cannot complete opening %s database (%s)",
                           db->db->name, error_s));
        free(error_s);
        return error;
    }

    add_db(server_config, db);
|
|
|
|
if (model == SM_RELAY) {
|
|
|
|
|
ovsdb_relay_add_db(db->db, conf->source, update_schema, server_config,
|
2024-01-09 23:49:14 +01:00
|
|
|
|
&conf->options->rpc);
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
}
|
|
|
|
|
if (model == SM_ACTIVE_BACKUP && conf->ab.backup) {
|
|
|
|
|
const struct uuid *server_uuid;
|
|
|
|
|
|
|
|
|
|
server_uuid = ovsdb_jsonrpc_server_get_uuid(server_config->jsonrpc);
|
|
|
|
|
replication_set_db(db->db, conf->source, conf->ab.sync_exclude,
|
2024-01-09 23:49:12 +01:00
|
|
|
|
server_uuid, &conf->options->rpc);
|
2021-06-01 23:27:36 +02:00
|
|
|
|
}
|
2017-12-31 21:15:58 -08:00
|
|
|
|
return NULL;
|
2013-06-13 04:30:32 -07:00
|
|
|
|
}

/* Add the internal _Server database to the server configuration. */
static void
add_server_db(struct server_config *config)
{
    struct json *schema_json = json_from_string(
#include "ovsdb/_server.ovsschema.inc"
        );
    ovs_assert(schema_json->type == JSON_OBJECT);

    struct ovsdb_schema *schema;
    struct ovsdb_error *error OVS_UNUSED = ovsdb_schema_from_json(schema_json,
                                                                  &schema);
    ovs_assert(!error);
    json_destroy(schema_json);

    struct db *db = xzalloc(sizeof *db);
    /* We don't need txn_history for server_db. */

    db->filename = xstrdup("<internal>");
    db->config = xzalloc(sizeof *db->config);
    db->config->model = SM_UNDEFINED;
    db->db = ovsdb_create(schema, ovsdb_storage_create_unbacked(NULL));
    db->db->read_only = true;

    bool ok OVS_UNUSED = ovsdb_jsonrpc_server_add_db(config->jsonrpc, db->db);
    ovs_assert(ok);

    add_db(config, db);
}

static char * OVS_WARN_UNUSED_RESULT
parse_db_column__(const struct shash *all_dbs,
                  const char *name_, char *name,
                  const struct db **dbp,
                  const struct ovsdb_table **tablep,
                  const struct ovsdb_column **columnp)
{
    const char *db_name, *table_name, *column_name;
    const char *tokens[3];
    char *save_ptr = NULL;

    *dbp = NULL;
    *tablep = NULL;
    *columnp = NULL;

    strtok_r(name, ":", &save_ptr); /* "db:" */
    tokens[0] = strtok_r(NULL, ",", &save_ptr);
    tokens[1] = strtok_r(NULL, ",", &save_ptr);
    tokens[2] = strtok_r(NULL, ",", &save_ptr);
    if (!tokens[0] || !tokens[1] || !tokens[2]) {
        return xasprintf("\"%s\": invalid syntax", name_);
    }

    db_name = tokens[0];
    table_name = tokens[1];
    column_name = tokens[2];

    *dbp = shash_find_data(all_dbs, tokens[0]);
    if (!*dbp) {
        return xasprintf("\"%s\": no database named %s", name_, db_name);
    }

    *tablep = ovsdb_get_table((*dbp)->db, table_name);
    if (!*tablep) {
        return xasprintf("\"%s\": no table named %s", name_, table_name);
    }

    *columnp = ovsdb_table_schema_get_column((*tablep)->schema, column_name);
    if (!*columnp) {
        return xasprintf("\"%s\": table \"%s\" has no column \"%s\"",
                         name_, table_name, column_name);
    }

    return NULL;
}

/* Returns NULL if successful, otherwise a malloc()'d string describing the
 * error. */
static char * OVS_WARN_UNUSED_RESULT
parse_db_column(const struct shash *all_dbs,
                const char *name_,
                const struct db **dbp,
                const struct ovsdb_table **tablep,
                const struct ovsdb_column **columnp)
{
    char *name = xstrdup(name_);
    char *retval = parse_db_column__(all_dbs, name_, name,
                                     dbp, tablep, columnp);
    free(name);
    return retval;
}
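
/* Illustrative usage sketch (hypothetical caller; the database, table, and
 * column names below are assumptions for illustration, not taken from this
 * file):
 *
 *     const struct db *db;
 *     const struct ovsdb_table *table;
 *     const struct ovsdb_column *column;
 *     char *error = parse_db_column(all_dbs, "db:Open_vSwitch,Manager,target",
 *                                   &db, &table, &column);
 *     if (error) {
 *         VLOG_WARN("%s", error);
 *         free(error);
 *     }
 */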

/* Returns NULL if successful, otherwise a malloc()'d string describing the
 * error. */
static char * OVS_WARN_UNUSED_RESULT
parse_db_string_column(const struct shash *all_dbs,
                       const char *name,
                       const struct db **dbp,
                       const struct ovsdb_table **tablep,
                       const struct ovsdb_column **columnp)
{
    char *retval;

    retval = parse_db_column(all_dbs, name, dbp, tablep, columnp);
    if (retval) {
        return retval;
    }

    if ((*columnp)->type.key.type != OVSDB_TYPE_STRING
        || (*columnp)->type.value.type != OVSDB_TYPE_VOID) {
        return xasprintf("\"%s\": table \"%s\" column \"%s\" is "
                         "not string or set of strings",
                         name, (*tablep)->schema->name, (*columnp)->name);
    }

    return NULL;
}

static const char *
query_db_string(const struct shash *all_dbs, const char *name,
                struct ds *errors)
{
    if (!name || strncmp(name, "db:", 3)) {
        return name;
    } else {
        const struct ovsdb_column *column;
        const struct ovsdb_table *table;
        const struct ovsdb_row *row;
        const struct db *db;
        char *retval;

        retval = parse_db_string_column(all_dbs, name,
                                        &db, &table, &column);
        if (retval) {
            if (db && !db->db->schema) {
                /* 'db' is a clustered database but it hasn't connected to the
                 * cluster yet, so we can't get anything out of it, not even a
                 * schema.  Not really an error. */
            } else {
                ds_put_format(errors, "%s\n", retval);
            }
            free(retval);
            return NULL;
        }

        HMAP_FOR_EACH (row, hmap_node, &table->rows) {
            const struct ovsdb_datum *datum;
            size_t i;

            datum = &row->fields[column->index];
            for (i = 0; i < datum->n; i++) {
                const char *key = json_string(datum->keys[i].s);
                if (key[0]) {
                    return key;
                }
            }
        }
        return NULL;
    }
}
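
/* Illustrative example (hypothetical values): given a specification such as
 * "db:Open_vSwitch,SSL,private_key", query_db_string() returns the first
 * nonempty string stored in that column, or NULL if the column is empty or
 * cannot be resolved yet; a plain string that does not start with "db:" is
 * returned unchanged. */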

static struct ovsdb_jsonrpc_options *
add_remote(struct shash *remotes, const char *target,
           const struct ovsdb_jsonrpc_options *options_)
{
    struct ovsdb_jsonrpc_options *options;

    options = shash_find_data(remotes, target);
    if (!options) {
        options = options_
                  ? ovsdb_jsonrpc_options_clone(options_)
                  : ovsdb_jsonrpc_default_options(target);
        shash_add(remotes, target, options);
    }

    return options;
}
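
/* Illustrative usage sketch (hypothetical caller; the target string is an
 * assumption for illustration):
 *
 *     struct shash remotes = SHASH_INITIALIZER(&remotes);
 *     struct ovsdb_jsonrpc_options *opts =
 *         add_remote(&remotes, "ptcp:6640:127.0.0.1", NULL);
 *
 * A second add_remote() call with the same target returns the same 'opts'
 * rather than creating a duplicate entry. */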

static void
free_remotes(struct shash *remotes)
{
    if (remotes) {
        struct shash_node *node;

        SHASH_FOR_EACH (node, remotes) {
            struct ovsdb_jsonrpc_options *options = node->data;

            ovsdb_jsonrpc_options_free(options);
        }
        shash_clear(remotes);
    }
}

/* Adds a remote and options to 'remotes', based on the Manager table row in
 * 'row'. */
static void
add_manager_options(struct shash *remotes, const struct ovsdb_row *row)
{
    static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
    struct ovsdb_jsonrpc_options *options;
    long long int max_backoff, probe_interval;
    bool read_only;
    const char *target, *dscp_string, *role;

    if (!ovsdb_util_read_string_column(row, "target", &target) || !target) {
        VLOG_INFO_RL(&rl, "Table `%s' has missing or invalid `target' column",
                     row->table->schema->name);
        return;
    }

    options = add_remote(remotes, target, NULL);
    if (ovsdb_util_read_integer_column(row, "max_backoff", &max_backoff)) {
        options->rpc.max_backoff = max_backoff;
    }
    if (ovsdb_util_read_integer_column(row, "inactivity_probe",
                                       &probe_interval)) {
        options->rpc.probe_interval = probe_interval;
    }
    if (ovsdb_util_read_bool_column(row, "read_only", &read_only)) {
        options->read_only = read_only;
    }

    free(options->role);
    options->role = NULL;
    if (ovsdb_util_read_string_column(row, "role", &role) && role) {
        options->role = xstrdup(role);
    }

    options->rpc.dscp = DSCP_DEFAULT;
    dscp_string = ovsdb_util_read_map_string_column(row, "other_config",
                                                    "dscp");
    if (dscp_string) {
        int dscp = atoi(dscp_string);
        if (dscp >= 0 && dscp <= 63) {
            options->rpc.dscp = dscp;
        }
    }
}

static void
query_db_remotes(const char *name, const struct shash *all_dbs,
                 struct shash *remotes, struct ds *errors)
{
    const struct ovsdb_column *column;
    const struct ovsdb_table *table;
    const struct ovsdb_row *row;
    const struct db *db;
    char *retval;

    retval = parse_db_column(all_dbs, name, &db, &table, &column);
    if (retval) {
        if (db && !db->db->schema) {
            /* 'db' is a clustered database but it hasn't connected to the
             * cluster yet, so we can't get anything out of it, not even a
             * schema.  Not really an error. */
        } else {
            ds_put_format(errors, "%s\n", retval);
        }
        free(retval);
        return;
    }

    if (column->type.key.type == OVSDB_TYPE_STRING
        && column->type.value.type == OVSDB_TYPE_VOID) {
        HMAP_FOR_EACH (row, hmap_node, &table->rows) {
            const struct ovsdb_datum *datum;
            size_t i;

            datum = &row->fields[column->index];
            for (i = 0; i < datum->n; i++) {
                add_remote(remotes, json_string(datum->keys[i].s), NULL);
            }
        }
    } else if (column->type.key.type == OVSDB_TYPE_UUID
               && column->type.key.uuid.refTable
               && column->type.value.type == OVSDB_TYPE_VOID) {
        const struct ovsdb_table *ref_table = column->type.key.uuid.refTable;
        HMAP_FOR_EACH (row, hmap_node, &table->rows) {
            const struct ovsdb_datum *datum;
            size_t i;

            datum = &row->fields[column->index];
            for (i = 0; i < datum->n; i++) {
                const struct ovsdb_row *ref_row;

                ref_row = ovsdb_table_get_row(ref_table, &datum->keys[i].uuid);
                if (ref_row) {
                    add_manager_options(remotes, ref_row);
                }
            }
        }
    }
}

static void
update_remote_row(const struct ovsdb_row *row, struct ovsdb_txn *txn,
                  const struct ovsdb_jsonrpc_server *jsonrpc)
{
    struct ovsdb_jsonrpc_remote_status status;
    struct ovsdb_row *rw_row;
    const char *target;
    char *keys[9], *values[9];
    size_t n = 0;

    /* Get the "target" (protocol/host/port) spec. */
    if (!ovsdb_util_read_string_column(row, "target", &target)) {
        /* Bad remote spec or incorrect schema. */
        return;
    }
    ovsdb_txn_row_modify(txn, row, &rw_row, NULL);
    ovsdb_jsonrpc_server_get_remote_status(jsonrpc, target, &status);

    /* Update status information columns. */
    ovsdb_util_write_bool_column(rw_row, "is_connected", status.is_connected);

    if (status.state) {
        keys[n] = xstrdup("state");
        values[n++] = xstrdup(status.state);
    }
    if (status.sec_since_connect != UINT_MAX) {
        keys[n] = xstrdup("sec_since_connect");
        values[n++] = xasprintf("%u", status.sec_since_connect);
    }
    if (status.sec_since_disconnect != UINT_MAX) {
        keys[n] = xstrdup("sec_since_disconnect");
        values[n++] = xasprintf("%u", status.sec_since_disconnect);
    }
    if (status.last_error) {
        keys[n] = xstrdup("last_error");
        values[n++] =
            xstrdup(ovs_retval_to_string(status.last_error));
    }
    if (status.locks_held && status.locks_held[0]) {
        keys[n] = xstrdup("locks_held");
        values[n++] = xstrdup(status.locks_held);
    }
    if (status.locks_waiting && status.locks_waiting[0]) {
        keys[n] = xstrdup("locks_waiting");
        values[n++] = xstrdup(status.locks_waiting);
    }
    if (status.locks_lost && status.locks_lost[0]) {
        keys[n] = xstrdup("locks_lost");
        values[n++] = xstrdup(status.locks_lost);
    }
    if (status.n_connections > 1) {
        keys[n] = xstrdup("n_connections");
        values[n++] = xasprintf("%d", status.n_connections);
    }
    if (status.bound_port != htons(0)) {
        keys[n] = xstrdup("bound_port");
        values[n++] = xasprintf("%"PRIu16, ntohs(status.bound_port));
    }
    ovsdb_util_write_string_string_column(rw_row, "status", keys, values, n);

    ovsdb_jsonrpc_server_free_remote_status(&status);
}

static void
update_remote_rows(const struct shash *all_dbs, const struct db *db_,
                   const char *remote_name,
                   const struct ovsdb_jsonrpc_server *jsonrpc,
                   struct ovsdb_txn *txn)
{
    const struct ovsdb_table *table, *ref_table;
    const struct ovsdb_column *column;
    const struct ovsdb_row *row;
    const struct db *db;
    char *retval;

    if (strncmp("db:", remote_name, 3)) {
        return;
    }

    retval = parse_db_column(all_dbs, remote_name, &db, &table, &column);
    if (retval) {
        free(retval);
        return;
    }

    if (db != db_
        || column->type.key.type != OVSDB_TYPE_UUID
        || !column->type.key.uuid.refTable
        || column->type.value.type != OVSDB_TYPE_VOID) {
        return;
    }

    ref_table = column->type.key.uuid.refTable;

    HMAP_FOR_EACH (row, hmap_node, &table->rows) {
        const struct ovsdb_datum *datum;
        size_t i;

        datum = &row->fields[column->index];
        for (i = 0; i < datum->n; i++) {
            const struct ovsdb_row *ref_row;

            ref_row = ovsdb_table_get_row(ref_table, &datum->keys[i].uuid);
            if (ref_row) {
                update_remote_row(ref_row, txn, jsonrpc);
            }
        }
    }
}

static void
commit_txn(struct ovsdb_txn *txn, const char *name)
{
    struct ovsdb_error *error = ovsdb_txn_propose_commit_block(txn, false);
    if (error) {
        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
        char *msg = ovsdb_error_to_string_free(error);
        VLOG_ERR_RL(&rl, "Failed to update %s: %s", name, msg);
        free(msg);
    }
}

static void
update_remote_status(const struct ovsdb_jsonrpc_server *jsonrpc,
                     const struct shash *remotes,
                     struct shash *all_dbs)
{
    struct shash_node *db_node;

    SHASH_FOR_EACH (db_node, all_dbs) {
        struct db *db = db_node->data;

        if (!db->db || ovsdb_storage_is_clustered(db->db->storage)) {
            continue;
        }

        struct ovsdb_txn *txn = ovsdb_txn_create(db->db);
        const struct shash_node *remote_node;

        SHASH_FOR_EACH (remote_node, remotes) {
            const char *remote = remote_node->name;

            update_remote_rows(all_dbs, db, remote, jsonrpc, txn);
        }
        commit_txn(txn, "remote status");
    }
}

/* Updates 'row', a row in the _Server database's Database table, to match
 * 'db'. */
static void
update_database_status(struct ovsdb_row *row, struct db *db)
{
    ovsdb_util_write_string_column(row, "name", db->db->name);
    ovsdb_util_write_string_column(row, "model",
        db->db->is_relay ? "relay" : ovsdb_storage_get_model(db->db->storage));
    ovsdb_util_write_bool_column(row, "connected",
        db->db->is_relay ? ovsdb_relay_is_connected(db->db)
                         : ovsdb_storage_is_connected(db->db->storage));
    ovsdb_util_write_bool_column(row, "leader",
        db->db->is_relay ? false : ovsdb_storage_is_leader(db->db->storage));
    ovsdb_util_write_uuid_column(row, "cid",
                                 ovsdb_storage_get_cid(db->db->storage));
    ovsdb_util_write_uuid_column(row, "sid",
                                 ovsdb_storage_get_sid(db->db->storage));

    uint64_t index = ovsdb_storage_get_applied_index(db->db->storage);
    if (index) {
        ovsdb_util_write_integer_column(row, "index", index);
    } else {
        ovsdb_util_clear_column(row, "index");
    }

    const struct uuid *row_uuid = ovsdb_row_get_uuid(row);
    if (!uuid_equals(row_uuid, &db->row_uuid)) {
        db->row_uuid = *row_uuid;

        /* The schema can only change if the row UUID changes, so only update
         * it in that case.  Presumably, this is worth optimizing because
         * schemas are often kilobytes in size and nontrivial to serialize. */
        char *schema = NULL;
        if (db->db->schema) {
            struct json *json_schema = ovsdb_schema_to_json(db->db->schema);
            schema = json_to_string(json_schema, JSSF_SORT);
            json_destroy(json_schema);
        }
        ovsdb_util_write_string_column(row, "schema", schema);
        free(schema);
    }
}
|
|
|
|
|
|
|
|
|
|
/* Updates the Database table in the _Server database. */
static void
update_server_status(struct shash *all_dbs)
{
    struct db *server_db = shash_find_data(all_dbs, "_Server");
    struct ovsdb_table *database_table = shash_find_data(
        &server_db->db->tables, "Database");
    struct ovsdb_txn *txn = ovsdb_txn_create(server_db->db);

    /* Update rows for databases that still exist.
     * Delete rows for databases that no longer exist. */
    const struct ovsdb_row *row;
    HMAP_FOR_EACH_SAFE (row, hmap_node, &database_table->rows) {
        const char *name;
        ovsdb_util_read_string_column(row, "name", &name);
        struct db *db = shash_find_data(all_dbs, name);
        if (!db || !db->db) {
            ovsdb_txn_row_delete(txn, row);
        } else {
            struct ovsdb_row *rw_row;

            ovsdb_txn_row_modify(txn, row, &rw_row, NULL);
            update_database_status(rw_row, db);
        }
    }

    /* Add rows for new databases.
     *
     * This is O(n**2) but usually there are only 2 or 3 databases. */
    struct shash_node *node;
    SHASH_FOR_EACH (node, all_dbs) {
        struct db *db = node->data;

        if (!db->db) {
            continue;
        }

        HMAP_FOR_EACH (row, hmap_node, &database_table->rows) {
            const char *name;
            ovsdb_util_read_string_column(row, "name", &name);
            if (!strcmp(name, node->name)) {
                goto next;
            }
        }

        /* Add row. */
        struct ovsdb_row *new_row = ovsdb_row_create(database_table);
        uuid_generate(ovsdb_row_get_uuid_rw(new_row));
        update_database_status(new_row, db);
        ovsdb_txn_row_insert(txn, new_row);

    next:;
    }

    commit_txn(txn, "_Server");
}

/* Reconfigures ovsdb-server's remotes based on information in the database. */
static char *
reconfigure_remotes(struct ovsdb_jsonrpc_server *jsonrpc,
                    const struct shash *all_dbs, struct shash *remotes)
{
    struct ds errors = DS_EMPTY_INITIALIZER;
    struct shash resolved_remotes;
    struct shash_node *node;

    /* Configure remotes. */
    shash_init(&resolved_remotes);
    SHASH_FOR_EACH (node, remotes) {
        const struct ovsdb_jsonrpc_options *options = node->data;
        const char *name = node->name;

        if (!strncmp(name, "db:", 3)) {
            query_db_remotes(name, all_dbs, &resolved_remotes, &errors);
        } else {
            add_remote(&resolved_remotes, name, options);
        }
    }
    ovsdb_jsonrpc_server_set_remotes(jsonrpc, &resolved_remotes);
    free_remotes(&resolved_remotes);
    shash_destroy(&resolved_remotes);

    return errors.string;
}

static char *
reconfigure_ssl(const struct shash *all_dbs)
{
    struct ds errors = DS_EMPTY_INITIALIZER;
    const char *resolved_private_key;
    const char *resolved_certificate;
    const char *resolved_ca_cert;
    const char *resolved_ssl_protocols;
    const char *resolved_ssl_ciphers;
    const char *resolved_ssl_ciphersuites;

    resolved_private_key = query_db_string(all_dbs, private_key_file, &errors);
    resolved_certificate = query_db_string(all_dbs, certificate_file, &errors);
    resolved_ca_cert = query_db_string(all_dbs, ca_cert_file, &errors);
    resolved_ssl_protocols = query_db_string(all_dbs, ssl_protocols, &errors);
    resolved_ssl_ciphers = query_db_string(all_dbs, ssl_ciphers, &errors);
    resolved_ssl_ciphersuites = query_db_string(all_dbs, ssl_ciphersuites,
                                                &errors);

    stream_ssl_set_key_and_cert(resolved_private_key, resolved_certificate);
    stream_ssl_set_ca_cert_file(resolved_ca_cert, bootstrap_ca_cert);
    stream_ssl_set_protocols(resolved_ssl_protocols);
    stream_ssl_set_ciphers(resolved_ssl_ciphers);
    stream_ssl_set_ciphersuites(resolved_ssl_ciphersuites);

    return errors.string;
}

static void
report_error_if_changed(char *error, char **last_errorp)
{
    if (error) {
        if (!*last_errorp || strcmp(error, *last_errorp)) {
            VLOG_WARN("%s", error);
            free(*last_errorp);
            *last_errorp = error;
            return;
        }
        free(error);
    } else {
        free(*last_errorp);
        *last_errorp = NULL;
    }
}

static bool
check_config_file_on_unixctl(struct unixctl_conn *conn)
{
    struct ds ds = DS_EMPTY_INITIALIZER;

    if (!config_file_path) {
        return false;
    }

    ds_put_format(&ds, "Update the %s and use ovsdb-server/reload instead",
                  config_file_path);
    unixctl_command_reply_error(conn, ds_cstr(&ds));
    ds_destroy(&ds);

    return true;
}

static void
ovsdb_server_set_active_ovsdb_server(struct unixctl_conn *conn,
                                     int argc OVS_UNUSED, const char *argv[],
                                     void *config_)
{
    struct server_config *config = config_;
    struct shash_node *node;

    if (check_config_file_on_unixctl(conn)) {
        return;
    }

    free(*config->sync_from);
    *config->sync_from = xstrdup(argv[1]);

    SHASH_FOR_EACH (node, config->all_dbs) {
        struct db *db = node->data;

        if (db->config->model == SM_ACTIVE_BACKUP) {
            free(db->config->source);
            db->config->source = xstrdup(argv[1]);
        }
    }

    save_config(config);

    unixctl_command_reply(conn, NULL);
}

static void
ovsdb_server_get_active_ovsdb_server(struct unixctl_conn *conn,
                                     int argc OVS_UNUSED,
                                     const char *argv[] OVS_UNUSED,
                                     void *config_)
{
    struct server_config *config = config_;

    unixctl_command_reply(conn, *config->sync_from);
}

static void
ovsdb_server_connect_active_ovsdb_server(struct unixctl_conn *conn,
                                         int argc OVS_UNUSED,
                                         const char *argv[] OVS_UNUSED,
                                         void *config_)
{
    struct server_config *config = config_;
    struct shash_node *node;
    char *msg = NULL;

    if (check_config_file_on_unixctl(conn)) {
        return;
    }

    if (!*config->sync_from) {
        msg = "Unable to connect: active server is not specified.\n";
    } else {
        const struct uuid *server_uuid;
        server_uuid = ovsdb_jsonrpc_server_get_uuid(config->jsonrpc);

        SHASH_FOR_EACH (node, config->all_dbs) {
            struct db *db = node->data;
            struct db_config *conf = db->config;

            /* This command also converts standalone databases to AB. */
            if (conf->model == SM_STANDALONE) {
                conf->model = SM_ACTIVE_BACKUP;
                conf->source = xstrdup(*config->sync_from);
                conf->options = ovsdb_jsonrpc_default_options(conf->source);
                conf->options->rpc.probe_interval =
                    *config->replication_probe_interval;
                conf->ab.sync_exclude =
                    nullable_xstrdup(*config->sync_exclude);
                conf->ab.backup = false;
            }

            if (conf->model == SM_ACTIVE_BACKUP && !conf->ab.backup) {
                replication_set_db(db->db, conf->source, conf->ab.sync_exclude,
                                   server_uuid, &conf->options->rpc);
                conf->ab.backup = true;
            }
        }
        *config->is_backup = true;
        save_config(config);
    }
    unixctl_command_reply(conn, msg);
}

static void
ovsdb_server_disconnect_active_ovsdb_server(struct unixctl_conn *conn,
                                            int argc OVS_UNUSED,
                                            const char *argv[] OVS_UNUSED,
                                            void *config_)
{
    struct server_config *config = config_;
    struct shash_node *node;

    if (check_config_file_on_unixctl(conn)) {
        return;
    }

    SHASH_FOR_EACH (node, config->all_dbs) {
        struct db *db = node->data;
        struct db_config *conf = db->config;

        if (conf->model == SM_ACTIVE_BACKUP && conf->ab.backup) {
            ovsdb_server_replication_remove_db(db);
        }
    }
    *config->is_backup = false;
    save_config(config);
    unixctl_command_reply(conn, NULL);
}

static void
ovsdb_server_set_active_ovsdb_server_probe_interval(struct unixctl_conn *conn,
                                                    int argc OVS_UNUSED,
                                                    const char *argv[],
                                                    void *config_)
{
    struct server_config *config = config_;
    struct shash_node *node;
    int probe_interval;

    if (check_config_file_on_unixctl(conn)) {
        return;
    }

    if (!str_to_int(argv[1], 10, &probe_interval)) {
        unixctl_command_reply_error(
            conn, "Invalid probe interval, integer value expected");
        return;
    }

    const struct uuid *server_uuid;
    server_uuid = ovsdb_jsonrpc_server_get_uuid(config->jsonrpc);

    *config->replication_probe_interval = probe_interval;

    SHASH_FOR_EACH (node, config->all_dbs) {
        struct db *db = node->data;
        struct db_config *conf = db->config;

        if (conf->model == SM_ACTIVE_BACKUP) {
            conf->options->rpc.probe_interval = probe_interval;
            if (conf->ab.backup) {
                replication_set_db(db->db, conf->source, conf->ab.sync_exclude,
                                   server_uuid, &conf->options->rpc);
            }
        }
    }

    save_config(config);
    unixctl_command_reply(conn, NULL);
}
static void
ovsdb_server_set_relay_source_interval(struct unixctl_conn *conn,
                                       int argc OVS_UNUSED,
                                       const char *argv[],
                                       void *config_)
{
    struct server_config *config = config_;
    struct shash_node *node;
    int probe_interval;

    if (check_config_file_on_unixctl(conn)) {
        return;
    }

    if (!str_to_int(argv[1], 10, &probe_interval)) {
        unixctl_command_reply_error(
            conn, "Invalid probe interval, integer value expected");
        return;
    }

    *config->relay_source_probe_interval = probe_interval;

    SHASH_FOR_EACH (node, config->all_dbs) {
        struct db *db = node->data;
        struct db_config *conf = db->config;

        if (conf->model == SM_RELAY) {
            conf->options->rpc.probe_interval = probe_interval;
        }
    }

    ovsdb_relay_set_probe_interval(probe_interval);
    save_config(config);

    unixctl_command_reply(conn, NULL);
}
2016-07-19 14:54:51 -07:00
|
|
|
|
static void
|
2016-08-23 04:05:11 -07:00
|
|
|
|
ovsdb_server_set_sync_exclude_tables(struct unixctl_conn *conn,
|
|
|
|
|
int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[],
|
|
|
|
|
void *config_)
|
2016-07-19 14:54:51 -07:00
|
|
|
|
{
|
2016-08-23 04:05:11 -07:00
|
|
|
|
struct server_config *config = config_;
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
struct shash_node *node;
|
2016-08-16 14:56:19 -07:00
|
|
|
|
|
2024-01-09 23:49:08 +01:00
|
|
|
|
if (check_config_file_on_unixctl(conn)) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
2024-01-09 23:49:05 +01:00
|
|
|
|
char *err = parse_excluded_tables(argv[1]);
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
if (err) {
|
|
|
|
|
goto exit;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
const struct uuid *server_uuid;
|
|
|
|
|
server_uuid = ovsdb_jsonrpc_server_get_uuid(config->jsonrpc);
|
|
|
|
|
|
|
|
|
|
free(*config->sync_exclude);
|
|
|
|
|
*config->sync_exclude = xstrdup(argv[1]);
|
|
|
|
|
|
|
|
|
|
SHASH_FOR_EACH (node, config->all_dbs) {
|
|
|
|
|
struct db *db = node->data;
|
|
|
|
|
struct db_config *conf = db->config;
|
|
|
|
|
|
|
|
|
|
if (conf->model == SM_ACTIVE_BACKUP) {
|
|
|
|
|
free(conf->ab.sync_exclude);
|
|
|
|
|
conf->ab.sync_exclude = xstrdup(argv[1]);
|
|
|
|
|
if (conf->ab.backup) {
|
|
|
|
|
replication_set_db(db->db, conf->source, conf->ab.sync_exclude,
|
2024-01-09 23:49:12 +01:00
|
|
|
|
server_uuid, &conf->options->rpc);
|
2024-01-09 23:49:07 +01:00
|
|
|
|
}
|
2016-08-16 14:56:19 -07:00
|
|
|
|
}
|
|
|
|
|
}
|
2024-01-09 23:49:07 +01:00
|
|
|
|
|
|
|
|
|
save_config(config);
|
|
|
|
|
|
|
|
|
|
exit:
|
2016-08-16 14:56:19 -07:00
|
|
|
|
unixctl_command_reply(conn, err);
|
|
|
|
|
free(err);
|
2016-07-19 14:54:51 -07:00
|
|
|
|
}
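The per-database handling described in the commit message above — appctl handlers walking all databases but acting only on those whose service model is active-backup — can be sketched as follows. This is a minimal sketch with hypothetical, simplified types; the real ovsdb-server structures ('struct db', 'struct db_config') carry much more state than shown here.

```c
/* A minimal sketch, with hypothetical simplified types, of how an appctl
 * handler such as ovsdb-server/sync-status walks all databases and acts
 * only on active-backup ones. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

enum service_model { SM_STANDALONE, SM_CLUSTERED, SM_RELAY, SM_ACTIVE_BACKUP };

struct db_sketch {
    const char *name;
    enum service_model model;
    bool backup;                /* Meaningful only for SM_ACTIVE_BACKUP. */
};

/* Appends one "name: state" line per active-backup database to 'reply'
 * and returns how many databases were reported.  Databases with other
 * service models are skipped; active ones (not added to the replication
 * module) are reported as "active". */
static int
sync_status_sketch(const struct db_sketch *dbs, size_t n_dbs,
                   char *reply, size_t reply_size)
{
    int n_reported = 0;

    reply[0] = '\0';
    for (size_t i = 0; i < n_dbs; i++) {
        if (dbs[i].model != SM_ACTIVE_BACKUP) {
            continue;           /* Not replicated; nothing to report. */
        }
        size_t used = strlen(reply);
        snprintf(reply + used, reply_size - used, "%s: %s\n",
                 dbs[i].name, dbs[i].backup ? "backup" : "active");
        n_reported++;
    }
    return n_reported;
}
```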
|
|
|
|
|
|
|
|
|
|
static void
|
2016-08-23 04:05:11 -07:00
|
|
|
|
ovsdb_server_get_sync_exclude_tables(struct unixctl_conn *conn,
|
|
|
|
|
int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[] OVS_UNUSED,
|
2024-01-09 23:49:05 +01:00
|
|
|
|
void *config_)
|
2016-07-19 14:54:51 -07:00
|
|
|
|
{
|
2024-01-09 23:49:05 +01:00
|
|
|
|
struct server_config *config = config_;
|
|
|
|
|
|
|
|
|
|
unixctl_command_reply(conn, *config->sync_exclude);
|
2016-07-19 14:54:51 -07:00
|
|
|
|
}
|
|
|
|
|
|
2009-11-17 16:02:38 -08:00
|
|
|
|
static void
|
2011-12-02 15:29:19 -08:00
|
|
|
|
ovsdb_server_exit(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[] OVS_UNUSED,
|
2009-11-17 16:02:38 -08:00
|
|
|
|
void *exiting_)
|
|
|
|
|
{
|
|
|
|
|
bool *exiting = exiting_;
|
|
|
|
|
*exiting = true;
|
2012-02-14 20:53:59 -08:00
|
|
|
|
unixctl_command_reply(conn, NULL);
|
2015-03-21 00:00:49 -07:00
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_perf_counters_show(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[] OVS_UNUSED,
|
|
|
|
|
void *arg_ OVS_UNUSED)
|
|
|
|
|
{
|
|
|
|
|
char *s = perf_counters_to_string();
|
|
|
|
|
|
|
|
|
|
unixctl_command_reply(conn, s);
|
|
|
|
|
free(s);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_perf_counters_clear(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[] OVS_UNUSED,
|
|
|
|
|
void *arg_ OVS_UNUSED)
|
|
|
|
|
{
|
|
|
|
|
perf_counters_clear();
|
|
|
|
|
unixctl_command_reply(conn, NULL);
|
2009-11-17 16:02:38 -08:00
|
|
|
|
}
|
|
|
|
|
|
2016-07-18 11:45:55 +03:00
|
|
|
|
/* "ovsdb-server/disable-monitor-cond": makes ovsdb-server drop all of its
|
2015-10-20 12:50:23 -07:00
|
|
|
|
* JSON-RPC connections and reconnect. New sessions will not recognize
|
2016-07-18 11:45:55 +03:00
|
|
|
|
* the 'monitor_cond' method. */
|
2015-10-20 12:50:23 -07:00
|
|
|
|
static void
|
2016-07-18 11:45:55 +03:00
|
|
|
|
ovsdb_server_disable_monitor_cond(struct unixctl_conn *conn,
|
|
|
|
|
int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[] OVS_UNUSED,
|
|
|
|
|
void *jsonrpc_)
|
2015-10-20 12:50:23 -07:00
|
|
|
|
{
|
|
|
|
|
struct ovsdb_jsonrpc_server *jsonrpc = jsonrpc_;
|
|
|
|
|
|
2016-07-18 11:45:55 +03:00
|
|
|
|
ovsdb_jsonrpc_disable_monitor_cond();
|
2017-12-31 21:15:58 -08:00
|
|
|
|
ovsdb_jsonrpc_server_reconnect(
|
2019-01-16 14:50:52 -08:00
|
|
|
|
jsonrpc, true, xstrdup("user ran ovsdb-server/disable-monitor-cond"));
|
2015-10-20 12:50:23 -07:00
|
|
|
|
unixctl_command_reply(conn, NULL);
|
|
|
|
|
}
|
|
|
|
|
|
2010-03-18 11:24:55 -07:00
|
|
|
|
static void
|
2012-09-07 10:07:03 -07:00
|
|
|
|
ovsdb_server_compact(struct unixctl_conn *conn, int argc,
|
|
|
|
|
const char *argv[], void *dbs_)
|
2010-03-18 11:24:55 -07:00
|
|
|
|
{
|
2017-12-22 11:41:11 -08:00
|
|
|
|
const char *db_name = argc < 2 ? NULL : argv[1];
|
2013-06-13 04:30:32 -07:00
|
|
|
|
struct shash *all_dbs = dbs_;
|
2012-09-07 10:07:03 -07:00
|
|
|
|
struct ds reply;
|
2013-06-13 04:30:32 -07:00
|
|
|
|
struct shash_node *node;
|
2012-09-07 10:07:03 -07:00
|
|
|
|
int n = 0;
|
2010-03-18 11:24:55 -07:00
|
|
|
|
|
2017-12-22 11:41:11 -08:00
|
|
|
|
if (db_name && db_name[0] == '_') {
|
|
|
|
|
unixctl_command_reply_error(conn, "cannot compact built-in databases");
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
2012-09-07 10:07:03 -07:00
|
|
|
|
ds_init(&reply);
|
2013-06-13 04:30:32 -07:00
|
|
|
|
SHASH_FOR_EACH (node, all_dbs) {
|
2017-12-31 21:15:58 -08:00
|
|
|
|
struct db *db = node->data;
|
2017-12-22 11:41:11 -08:00
|
|
|
|
if (db_name
|
|
|
|
|
? !strcmp(node->name, db_name)
|
|
|
|
|
: node->name[0] != '_') {
|
2017-12-31 21:15:58 -08:00
|
|
|
|
if (db->db) {
|
ovsdb: Prepare snapshot JSON in a separate thread.
Conversion of the database data into JSON object, serialization
and destruction of that object are the most heavy operations
during the database compaction. If these operations are moved
to a separate thread, the main thread can continue processing
database requests in the meantime.
With this change, the compaction is split in 3 phases:
1. Initialization:
- Create a copy of the database.
- Remember current database index.
- Start a separate thread to convert a copy of the database
into serialized JSON object.
2. Wait:
- Continue normal operation until compaction thread is done.
- Meanwhile, compaction thread:
* Convert the database copy to JSON.
* Serialize the resulting JSON.
* Destroy the original JSON object.
3. Finish:
- Destroy the database copy.
- Take the snapshot created by the thread.
- Write on disk.
The key to making this scheme fast is the ability to create
a shallow copy of the database. This doesn't take much time,
allowing the thread to do most of the work.
The database copy is created and destroyed only by the main thread,
so there is no need for synchronization.
This approach reduces the time the main thread is blocked
by compaction by 80-90%. For example, in ovn-heater tests
with a 120-node density-heavy scenario, where compaction normally
takes 5-6 seconds at the end of a test, measured compaction
times were all below 1 second with the change applied. Also,
note that these measured times are the sum of phases 1 and 3,
so actual poll intervals are about half a second in this case.
Only implemented for raft storage for now. The implementation
for standalone databases can be added later by using a file
offset as a database index and copying newly added changes
from the old file to a new one during ovsdb_log_replace().
Reported-at: https://bugzilla.redhat.com/2069108
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2022-07-01 01:34:07 +02:00
|
|
|
|
struct ovsdb_error *error = NULL;
|
|
|
|
|
|
2017-12-31 21:15:58 -08:00
|
|
|
|
VLOG_INFO("compacting %s database by user request",
|
|
|
|
|
node->name);
|
|
|
|
|
|
2022-07-01 01:34:07 +02:00
|
|
|
|
error = ovsdb_snapshot(db->db, trim_memory);
|
|
|
|
|
if (!error && ovsdb_snapshot_in_progress(db->db)) {
|
|
|
|
|
while (ovsdb_snapshot_in_progress(db->db)) {
|
|
|
|
|
ovsdb_snapshot_wait(db->db);
|
|
|
|
|
poll_block();
|
|
|
|
|
}
|
|
|
|
|
error = ovsdb_snapshot(db->db, trim_memory);
|
|
|
|
|
}
|
|
|
|
|
|
2017-12-31 21:15:58 -08:00
|
|
|
|
if (error) {
|
|
|
|
|
char *s = ovsdb_error_to_string(error);
|
|
|
|
|
ds_put_format(&reply, "%s\n", s);
|
|
|
|
|
free(s);
|
|
|
|
|
ovsdb_error_destroy(error);
|
|
|
|
|
}
|
2012-09-07 10:07:03 -07:00
|
|
|
|
|
2017-12-31 21:15:58 -08:00
|
|
|
|
n++;
|
2012-09-07 10:07:03 -07:00
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if (!n) {
|
|
|
|
|
unixctl_command_reply_error(conn, "no database by that name");
|
|
|
|
|
} else if (reply.length) {
|
|
|
|
|
unixctl_command_reply_error(conn, ds_cstr(&reply));
|
2010-03-18 11:24:55 -07:00
|
|
|
|
} else {
|
2012-09-07 10:07:03 -07:00
|
|
|
|
unixctl_command_reply(conn, NULL);
|
2010-03-18 11:24:55 -07:00
|
|
|
|
}
|
2012-09-07 10:07:03 -07:00
|
|
|
|
ds_destroy(&reply);
|
2010-03-18 11:24:55 -07:00
|
|
|
|
}
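The compact handler above starts a snapshot, polls while the background thread serializes it, and then retries to finish — the three phases described in the "Prepare snapshot JSON in a separate thread" commit message. A minimal sketch of that phase machine, with hypothetical names standing in for the real ovsdb_snapshot*() API and a counter simulating the worker thread:

```c
/* A minimal sketch (hypothetical names) of the three-phase compaction:
 * initialize, wait while a worker serializes the snapshot, then finish. */
#include <stdbool.h>

enum snapshot_phase { SNAP_IDLE, SNAP_IN_PROGRESS, SNAP_DONE };

struct snapshot_task {
    enum snapshot_phase phase;
    int steps_left;             /* Simulated background serialization work. */
};

/* Phase 1: start the snapshot.  The real code creates a shallow copy of
 * the database here and hands it to the compaction thread. */
static void
snapshot_start(struct snapshot_task *t, int steps)
{
    t->phase = SNAP_IN_PROGRESS;
    t->steps_left = steps;
}

/* Phase 2: polled from the main loop.  Returns true while the main thread
 * should keep serving requests; flips to done when the worker finishes. */
static bool
snapshot_in_progress(struct snapshot_task *t)
{
    if (t->phase == SNAP_IN_PROGRESS && --t->steps_left <= 0) {
        t->phase = SNAP_DONE;   /* Worker finished serializing the JSON. */
    }
    return t->phase == SNAP_IN_PROGRESS;
}

/* Phase 3: take the serialized snapshot and write it to disk. */
static bool
snapshot_finish(struct snapshot_task *t)
{
    if (t->phase != SNAP_DONE) {
        return false;           /* Nothing ready to write yet. */
    }
    t->phase = SNAP_IDLE;
    return true;
}
```

The main thread stays responsive because only phases 1 and 3 run on it; phase 2 is just a cheap poll between normal request processing.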
|
|
|
|
|
|
ovsdb-server: Reclaim heap memory after compaction.
Compaction happens at most once in 10 minutes. That is a long time
interval for a heavily loaded ovsdb-server in cluster mode.
In 10 minutes raft logs could grow up to tens of thousands of entries
with tens of gigabytes in total size.
While compaction cleans up raft log entries, the memory in many cases
is not returned to the system, but kept in the heap of the running
ovsdb-server process, and it could stay in this condition for a really
long time. In the end, one performance spike could lead to fast
growth of the raft log, and this memory will never (or not for a really
long time) be released to the system even if the database is empty.
A simple example of how to reproduce with the OVN sandbox:
1. make sandbox SANDBOXFLAGS='--nbdb-model=clustered --sbdb-model=clustered'
2. Run following script that creates 1 port group, adds 4000 acls and
removes all of that in the end:
# cat ../memory-test.sh
pg_name=my_port_group
export OVN_NB_DAEMON=$(ovn-nbctl --pidfile --detach --log-file -vsocket_util:off)
ovn-nbctl pg-add $pg_name
for i in $(seq 1 4000); do
echo "Iteration: $i"
ovn-nbctl --log acl-add $pg_name from-lport $i udp drop
done
ovn-nbctl acl-del $pg_name
ovn-nbctl pg-del $pg_name
ovs-appctl -t $(pwd)/sandbox/nb1 memory/show
ovn-appctl -t ovn-nbctl exit
---
3. Stopping one of Northbound DB servers:
ovs-appctl -t $(pwd)/sandbox/nb1 exit
Make sure that ovsdb-server didn't compact the database before
it was stopped. Now we have a db file on disk that contains
4000 fairly big transactions inside.
4. Trying to start same ovsdb-server with this file.
# cd sandbox && ovsdb-server <...> nb1.db
At this point ovsdb-server reads all the transactions from db
file and performs all of them as fast as it can one by one.
When it finishes this, raft log contains 4000 entries and
ovsdb-server consumes (on my system) ~13GB of memory while
database is empty. And libc will likely never return this memory
back to system, or, at least, will hold it for a really long time.
This patch adds a new command 'ovsdb-server/memory-trim-on-compaction'.
It's disabled by default, but once enabled, ovsdb-server will call
'malloc_trim(0)' after every successful compaction to try to return
unused heap memory back to the system. This is glibc-specific, so we
need to detect the function's availability at build time.
It is disabled by default since it adds from 1% to 30% (depending on
the current state) to the snapshot creation time, and subsequent memory
allocations will likely require requests to the kernel, which might be
slower. It could be enabled by default later if considered broadly
beneficial.
Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1888829
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2020-10-24 02:25:48 +02:00
|
|
|
|
/* "ovsdb-server/memory-trim-on-compaction": controls whether ovsdb-server
|
|
|
|
|
* tries to reclaim heap memory back to system using malloc_trim() after
|
|
|
|
|
* compaction. */
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_memory_trim_on_compaction(struct unixctl_conn *conn,
|
|
|
|
|
int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[],
|
|
|
|
|
void *arg OVS_UNUSED)
|
|
|
|
|
{
|
2022-12-14 10:29:16 -06:00
|
|
|
|
bool old_trim_memory = trim_memory;
|
|
|
|
|
static bool have_logged = false;
|
2020-10-24 02:25:48 +02:00
|
|
|
|
const char *command = argv[1];
|
|
|
|
|
|
|
|
|
|
#if !HAVE_DECL_MALLOC_TRIM
|
|
|
|
|
unixctl_command_reply_error(conn, "memory trimming is not supported");
|
|
|
|
|
return;
|
|
|
|
|
#endif
|
|
|
|
|
|
|
|
|
|
if (!strcmp(command, "on")) {
|
|
|
|
|
trim_memory = true;
|
|
|
|
|
} else if (!strcmp(command, "off")) {
|
|
|
|
|
trim_memory = false;
|
|
|
|
|
} else {
|
|
|
|
|
unixctl_command_reply_error(conn, "invalid argument");
|
|
|
|
|
return;
|
|
|
|
|
}
|
2022-12-14 10:29:16 -06:00
|
|
|
|
if (!have_logged || (trim_memory != old_trim_memory)) {
|
|
|
|
|
have_logged = true;
|
|
|
|
|
VLOG_INFO("memory trimming after compaction %s.",
|
|
|
|
|
trim_memory ? "enabled" : "disabled");
|
|
|
|
|
}
|
2020-10-24 02:25:48 +02:00
|
|
|
|
unixctl_command_reply(conn, NULL);
|
|
|
|
|
}
|
|
|
|
|
|
2010-06-24 12:56:30 -07:00
|
|
|
|
/* "ovsdb-server/reconnect": makes ovsdb-server drop all of its JSON-RPC
|
|
|
|
|
* connections and reconnect. */
|
|
|
|
|
static void
|
2011-12-02 15:29:19 -08:00
|
|
|
|
ovsdb_server_reconnect(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[] OVS_UNUSED, void *jsonrpc_)
|
2010-06-24 12:56:30 -07:00
|
|
|
|
{
|
|
|
|
|
struct ovsdb_jsonrpc_server *jsonrpc = jsonrpc_;
|
2017-12-31 21:15:58 -08:00
|
|
|
|
ovsdb_jsonrpc_server_reconnect(
|
|
|
|
|
jsonrpc, true, xstrdup("user ran ovsdb-server/reconnect"));
|
2012-02-14 20:53:59 -08:00
|
|
|
|
unixctl_command_reply(conn, NULL);
|
2010-06-24 12:56:30 -07:00
|
|
|
|
}
|
|
|
|
|
|
2024-01-09 23:49:08 +01:00
|
|
|
|
/* "ovsdb-server/reload": makes ovsdb-server open a configuration file on
|
|
|
|
|
* 'config_file_path', read it and sync the runtime configuration with it. */
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_reload(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
ovsdb-server: Allow user-provided config files.
OVSDB server maintains a temporary file with the current database
configuration in case it is restarted by a monitor process
after a crash. On startup, the configuration from command-line
arguments is stored there in JSON format; whenever the user
changes the configuration with various unixctl commands, those
changes are added to the file as well. When restarted after a
crash, it reads the configuration from the file and continues
with all the necessary remotes and databases.
This change allows the configuration to come from an external
user-provided file instead. The file can be specified with the
--config-file command-line argument. It is mutually exclusive with
most other command-line arguments that set up remotes or databases,
and also with the appctl commands that modify the same
configuration, e.g. add/remove-db or add/remove-remote.
If the user wants to change the configuration of a running server,
they may change the file and call the ovsdb-server/reload appctl.
OVSDB server will open the file, read and parse it, compare the
new configuration with the current one, and adjust the running
configuration as needed. It will try to keep existing
databases and connections intact if the change can be applied
without disrupting normal operation.
User-provided files are not trustworthy, so extra checks were
added to ensure a correct file format. If the file cannot be
correctly parsed, e.g. contains invalid JSON, no changes will
be applied and the server will keep using the previous
configuration until the next reload.
If config-file is provided for active-backup databases, permanent
disconnection of one of the backup databases no longer leads to
switching all other databases to 'active'. Only the disconnected
one will transition, since all of them have their own records in
the configuration file.
With this change, users can run all types of databases within
the same ovsdb-server process at the same time.
Simple configuration may look like this:
{
"remotes": {
"punix:db.sock": {},
"pssl:6641": {
"inactivity-probe": 16000,
"read-only": false,
"role": "ovn-controller"
}
},
"databases": {
"conf.db": {},
"sb.db": {
"service-model": "active-backup",
"backup": true,
"source": {
"tcp:127.0.0.1:6644": null
}
},
"OVN_Northbound": {
"service-model": "relay",
"source": {
"ssl:[fe:::1]:6642,ssl:[fe:::2]:6642": {
"max-backoff": 8000,
"inactivity-probe": 10000
}
}
}
}
}
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:15 +01:00
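The reconfigure step described above keeps existing databases and connections intact by diffing the old configuration against the new one rather than tearing everything down. A minimal sketch of that diff over remote names (hypothetical helper names; the real code compares full per-remote option sets, not just names):

```c
/* A minimal sketch (hypothetical helpers) of computing which remotes to
 * add when syncing the runtime configuration with a reloaded config file.
 * Remotes present in both lists are left untouched. */
#include <stdbool.h>
#include <string.h>

static bool
remote_in_list(const char *name, const char *const *list, int n)
{
    for (int i = 0; i < n; i++) {
        if (!strcmp(name, list[i])) {
            return true;
        }
    }
    return false;
}

/* Fills 'to_add' with entries of 'new_list' absent from 'old_list' and
 * returns how many there are.  Removals are the symmetric call with the
 * argument lists swapped. */
static int
diff_remotes(const char *const *old_list, int n_old,
             const char *const *new_list, int n_new,
             const char **to_add)
{
    int n = 0;
    for (int i = 0; i < n_new; i++) {
        if (!remote_in_list(new_list[i], old_list, n_old)) {
            to_add[n++] = new_list[i];
        }
    }
    return n;
}
```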
|
|
|
|
const char *argv[] OVS_UNUSED, void *config_)
|
2024-01-09 23:49:08 +01:00
|
|
|
|
{
|
2024-01-09 23:49:15 +01:00
|
|
|
|
struct server_config *config = config_;
|
|
|
|
|
|
2024-01-09 23:49:08 +01:00
|
|
|
|
if (!config_file_path) {
|
|
|
|
|
unixctl_command_reply_error(conn,
|
|
|
|
|
"Configuration file was not specified on command line");
|
ovsdb-server: Allow user-provided config files.
OVSDB server maintains a temporary file with the current database
configuration for the case it is restarted by a monitor process
after a crash. On startup the configuration from command line
arguments is stored there in a JSON format, also whenever user
changes the configuration with different UnixCtl commands, those
changes are getting added to the file. When restarted from the
crash it reads the configuration from the file and continues
with all the necessary remotes and databases.
This change allows it to be an external user-provided file that
OVSDB server will read the configuration from. The file can be
specified with a --config-file command line argument and it is
mutually exclusive with most other command line arguments that
set up remotes or databases, it is also mutually exclusive with
use of appctl commands that modify same configurations, e.g.
add/remove-db or add/remove-remote.
If the user wants to change the configuration of a running server,
they may change the file and call ovsdb-server/reload appctl.
OVSDB server will open a file, read and parse it, compare the
new configuration with the current one and adjust the running
configuration as needed. OVSDB server will try to keep existing
databases and connections intact, if the change can be applied
without disrupting the normal operation.
User-provided files are not trustworthy, so extra checks were
added to ensure a correct file format. If the file cannot be
correctly parsed, e.g. contains invalid JSON, no changes will
be applied and the server will keep using the previous
configuration until the next reload.
If config-file is provided for active-backup databases, permanent
disconnection of one of the backup databases no longer leads to
switching all other databases to 'active'. Only the disconnected
one will transition, since all of them have their own records in
the configuration file.
With this change, users can run all types of databases within
the same ovsdb-server process at the same time.
Simple configuration may look like this:
{
"remotes": {
"punix:db.sock": {},
"pssl:6641": {
"inactivity-probe": 16000,
"read-only": false,
"role": "ovn-controller"
}
},
"databases": {
"conf.db": {},
"sb.db": {
"service-model": "active-backup",
"backup": true,
"source": {
"tcp:127.0.0.1:6644": null
}
},
"OVN_Northbound": {
"service-model": "relay",
"source": {
"ssl:[fe:::1]:6642,ssl:[fe:::2]:6642": {
"max-backoff": 8000,
"inactivity-probe": 10000
}
}
}
}
}
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:15 +01:00
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if (!reconfigure_ovsdb_server(config)) {
|
2024-01-09 23:49:08 +01:00
|
|
|
|
unixctl_command_reply_error(conn,
|
ovsdb-server: Allow user-provided config files.
OVSDB server maintains a temporary file with the current database
configuration for the case it is restarted by a monitor process
after a crash.  On startup, the configuration from command-line
arguments is stored there in JSON format, and whenever the user
changes the configuration with unixctl commands, those changes
are added to the file as well.  When restarted after a crash,
it reads the configuration from the file and continues
with all the necessary remotes and databases.
This change allows an external user-provided file that
OVSDB server will read the configuration from.  The file can be
specified with a --config-file command line argument, and it is
mutually exclusive with most other command line arguments that
set up remotes or databases.  It is also mutually exclusive with
use of appctl commands that modify the same configuration, e.g.
add/remove-db or add/remove-remote.
If the user wants to change the configuration of a running server,
they may change the file and call the ovsdb-server/reload appctl.
OVSDB server will open the file, read and parse it, compare the
new configuration with the current one and adjust the running
configuration as needed.  OVSDB server will try to keep existing
databases and connections intact if the change can be applied
without disrupting normal operation.
User-provided files are not trustworthy, so extra checks were
added to ensure a correct file format.  If the file cannot be
correctly parsed, e.g. if it contains invalid JSON, no changes will
be applied and the server will keep using the previous
configuration until the next reload.
If a config file is provided for active-backup databases, permanent
disconnection of one of the backup databases no longer leads to
switching all other databases to 'active'.  Only the disconnected
one will transition, since all of them have their own records in
the configuration file.
With this change, users can run all types of databases within
the same ovsdb-server process at the same time.
A simple configuration may look like this:
{
"remotes": {
"punix:db.sock": {},
"pssl:6641": {
"inactivity-probe": 16000,
"read-only": false,
"role": "ovn-controller"
}
},
"databases": {
"conf.db": {},
"sb.db": {
"service-model": "active-backup",
"backup": true,
"source": {
"tcp:127.0.0.1:6644": null
}
},
"OVN_Northbound": {
"service-model": "relay",
"source": {
"ssl:[fe::1]:6642,ssl:[fe::2]:6642": {
"max-backoff": 8000,
"inactivity-probe": 10000
}
}
}
}
}
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:15 +01:00
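The reload contract described above — parse the new file first, and only replace the running configuration when the result is fully valid — can be sketched as below. This is a minimal illustration, not the actual ovsdb-server code: `parse_config()` and the `struct config` stand-in for the real JSON parsing and configuration tree.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the parsed configuration tree. */
struct config {
    char *text;
};

/* Stand-in for JSON parsing and validation: returns NULL on any
 * malformed input (here, "malformed" is simply "empty"). */
static struct config *
parse_config(const char *contents)
{
    if (!contents || !contents[0]) {
        return NULL;
    }
    struct config *conf = malloc(sizeof *conf);
    conf->text = malloc(strlen(contents) + 1);
    strcpy(conf->text, contents);
    return conf;
}

static void
config_free(struct config *conf)
{
    if (conf) {
        free(conf->text);
        free(conf);
    }
}

/* Returns the configuration now in effect: the new one if it parsed
 * correctly, otherwise the unchanged current one.  The running
 * configuration is never touched by an invalid file. */
static struct config *
reload_config(struct config *current, const char *file_contents)
{
    struct config *new = parse_config(file_contents);
    if (!new) {
        return current;     /* Keep running with the old config. */
    }
    config_free(current);
    return new;
}
```

A failed reload leaves the server on the previous configuration until the next `ovsdb-server/reload` attempt, matching the behavior described in the commit message.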
|
|
|
|
"Configuration failed. See the log file for details.");
|
|
|
|
|
} else {
|
|
|
|
|
unixctl_command_reply(conn, NULL);
|
2024-01-09 23:49:08 +01:00
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
2013-04-10 09:34:49 -07:00
|
|
|
|
/* "ovsdb-server/add-remote REMOTE": adds REMOTE to the set of remotes that
|
|
|
|
|
* ovsdb-server services. */
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_add_remote(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
2013-06-27 10:27:57 -07:00
|
|
|
|
const char *argv[], void *config_)
|
2013-04-10 09:34:49 -07:00
|
|
|
|
{
|
2013-06-27 10:27:57 -07:00
|
|
|
|
struct server_config *config = config_;
|
2013-04-10 09:34:49 -07:00
|
|
|
|
const char *remote = argv[1];
|
|
|
|
|
|
|
|
|
|
const struct ovsdb_column *column;
|
|
|
|
|
const struct ovsdb_table *table;
|
|
|
|
|
const struct db *db;
|
|
|
|
|
char *retval;
|
|
|
|
|
|
2024-01-09 23:49:08 +01:00
|
|
|
|
if (check_config_file_on_unixctl(conn)) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
2013-04-10 09:34:49 -07:00
|
|
|
|
retval = (strncmp("db:", remote, 3)
|
|
|
|
|
? NULL
|
2013-06-27 10:27:57 -07:00
|
|
|
|
: parse_db_column(config->all_dbs, remote,
|
2013-04-10 09:34:49 -07:00
|
|
|
|
&db, &table, &column));
|
|
|
|
|
if (!retval) {
|
2024-01-09 23:49:03 +01:00
|
|
|
|
if (add_remote(config->remotes, remote, NULL)) {
|
2013-06-27 10:27:57 -07:00
|
|
|
|
save_config(config);
|
2013-06-13 12:25:39 -07:00
|
|
|
|
}
|
2013-04-10 09:34:49 -07:00
|
|
|
|
unixctl_command_reply(conn, NULL);
|
|
|
|
|
} else {
|
|
|
|
|
unixctl_command_reply_error(conn, retval);
|
|
|
|
|
free(retval);
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
/* "ovsdb-server/remove-remote REMOTE": removes REMOTE from the set of remotes
|
|
|
|
|
* that ovsdb-server services. */
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_remove_remote(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
2013-06-27 10:27:57 -07:00
|
|
|
|
const char *argv[], void *config_)
|
2013-04-10 09:34:49 -07:00
|
|
|
|
{
|
2013-06-27 10:27:57 -07:00
|
|
|
|
struct server_config *config = config_;
|
2024-01-09 23:49:03 +01:00
|
|
|
|
struct ovsdb_jsonrpc_options *options;
|
2013-04-10 09:34:49 -07:00
|
|
|
|
|
2024-01-09 23:49:08 +01:00
|
|
|
|
if (check_config_file_on_unixctl(conn)) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
2024-01-09 23:49:03 +01:00
|
|
|
|
options = shash_find_and_delete(config->remotes, argv[1]);
|
|
|
|
|
if (options) {
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database, and attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup.  Relays and A-B databases have a source, and each
source has its own set of JSON-RPC session options.  A-B databases
also have an indicator of whether they are active or backup and an
optional list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used to restore the config
after an OVSDB crash.  For that, the save/load functions are
also updated.
This change is written in a generic way, assuming all the databases
can have different configurations, including the service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and is determined from
the storage type while opening the database.  If the service
model is defined but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database.  This should never happen with an internally generated
config file, but may happen in the future with user-provided
configuration files.  In this case the service model is used
for verification purposes only, if the administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag is added to the corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
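The per-database configuration described in the commit message can be sketched roughly as below. Field and enum names here are illustrative only; they approximate, but are not, the actual Open vSwitch definitions.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of a per-database configuration.  Each database
 * carries its own service model instead of one server-wide setting. */
enum service_model {
    SM_UNDEFINED,       /* Inferred from storage type on open. */
    SM_STANDALONE,
    SM_CLUSTERED,
    SM_RELAY,
    SM_ACTIVE_BACKUP,
};

struct db_config {
    enum service_model model;
    const char *source;          /* Source for relay / active-backup. */
    struct {
        bool backup;             /* Currently a backup, not active. */
        const char *sync_exclude; /* Tables excluded from replication. */
    } ab;
};

/* A database participates in the replication module only when it is an
 * active-backup database currently acting as a backup; active A-B
 * databases and all other service models are skipped, which is what
 * the reworked sync-status handler checks. */
static bool
db_is_replicating(const struct db_config *conf)
{
    return conf->model == SM_ACTIVE_BACKUP && conf->ab.backup;
}
```

This mirrors the sync-status logic visible further down, where databases with `config->model != SM_ACTIVE_BACKUP` are skipped and `config->ab.backup` selects between the "backup" and "active" states.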
|
|
|
|
ovsdb_jsonrpc_options_free(options);
|
2013-06-27 10:27:57 -07:00
|
|
|
|
save_config(config);
|
2013-04-10 09:34:49 -07:00
|
|
|
|
unixctl_command_reply(conn, NULL);
|
|
|
|
|
} else {
|
|
|
|
|
unixctl_command_reply_error(conn, "no such remote");
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
/* "ovsdb-server/list-remotes": outputs a list of configured remotes. */
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_list_remotes(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[] OVS_UNUSED, void *remotes_)
|
|
|
|
|
{
|
2024-01-09 23:49:03 +01:00
|
|
|
|
const struct shash *remotes = remotes_;
|
|
|
|
|
const struct shash_node **list;
|
2013-04-10 09:34:49 -07:00
|
|
|
|
struct ds s;
|
|
|
|
|
|
|
|
|
|
ds_init(&s);
|
|
|
|
|
|
2024-01-09 23:49:03 +01:00
|
|
|
|
list = shash_sort(remotes);
|
|
|
|
|
for (size_t i = 0; i < shash_count(remotes); i++) {
|
|
|
|
|
ds_put_format(&s, "%s\n", list[i]->name);
|
2013-04-10 09:34:49 -07:00
|
|
|
|
}
|
|
|
|
|
free(list);
|
|
|
|
|
|
|
|
|
|
unixctl_command_reply(conn, ds_cstr(&s));
|
|
|
|
|
ds_destroy(&s);
|
|
|
|
|
}
|
|
|
|
|
|
2013-06-27 10:27:57 -07:00
|
|
|
|
|
|
|
|
|
/* "ovsdb-server/add-db DB": adds the DB to ovsdb-server. */
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_add_database(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[], void *config_)
|
|
|
|
|
{
|
|
|
|
|
struct server_config *config = config_;
|
|
|
|
|
const char *filename = argv[1];
|
2024-01-09 23:49:07 +01:00
|
|
|
|
const struct shash_node *node;
|
|
|
|
|
struct shash db_conf;
|
|
|
|
|
|
2024-01-09 23:49:08 +01:00
|
|
|
|
if (check_config_file_on_unixctl(conn)) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
2024-01-09 23:49:07 +01:00
|
|
|
|
shash_init(&db_conf);
|
|
|
|
|
add_database_config(&db_conf, filename, *config->sync_from,
|
|
|
|
|
*config->sync_exclude, !config->is_backup);
|
|
|
|
|
ovs_assert(shash_count(&db_conf) == 1);
|
|
|
|
|
node = shash_first(&db_conf);
|
2013-06-27 10:27:57 -07:00
|
|
|
|
|
2024-01-09 23:49:07 +01:00
|
|
|
|
char *error = ovsdb_error_to_string_free(open_db(config,
|
|
|
|
|
node->name, node->data));
|
2013-06-27 10:27:57 -07:00
|
|
|
|
if (!error) {
|
|
|
|
|
save_config(config);
|
|
|
|
|
unixctl_command_reply(conn, NULL);
|
|
|
|
|
} else {
|
|
|
|
|
unixctl_command_reply_error(conn, error);
|
|
|
|
|
free(error);
|
|
|
|
|
}
|
2024-01-09 23:49:07 +01:00
|
|
|
|
db_config_destroy(node->data);
|
|
|
|
|
shash_destroy(&db_conf);
|
2013-06-27 10:27:57 -07:00
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void
|
2017-12-31 21:15:58 -08:00
|
|
|
|
remove_db(struct server_config *config, struct shash_node *node, char *comment)
|
2013-06-27 10:27:57 -07:00
|
|
|
|
{
|
2017-12-06 11:37:03 -08:00
|
|
|
|
struct db *db = node->data;
|
2013-06-27 10:27:57 -07:00
|
|
|
|
|
2017-12-31 21:15:58 -08:00
|
|
|
|
close_db(config, db, comment);
|
2013-06-27 10:27:57 -07:00
|
|
|
|
shash_delete(config->all_dbs, node);
|
|
|
|
|
|
|
|
|
|
save_config(config);
|
2017-12-22 11:41:11 -08:00
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_remove_database(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[], void *config_)
|
|
|
|
|
{
|
|
|
|
|
struct server_config *config = config_;
|
|
|
|
|
struct shash_node *node;
|
|
|
|
|
|
2024-01-09 23:49:08 +01:00
|
|
|
|
if (check_config_file_on_unixctl(conn)) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
2017-12-22 11:41:11 -08:00
|
|
|
|
node = shash_find(config->all_dbs, argv[1]);
|
|
|
|
|
if (!node) {
|
|
|
|
|
unixctl_command_reply_error(conn, "Failed to find the database.");
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
if (node->name[0] == '_') {
|
|
|
|
|
unixctl_command_reply_error(conn, "Cannot remove reserved database.");
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
2017-12-31 21:15:58 -08:00
|
|
|
|
remove_db(config, node, xasprintf("removing %s database by user request",
|
|
|
|
|
node->name));
|
2013-06-27 10:27:57 -07:00
|
|
|
|
unixctl_command_reply(conn, NULL);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_list_databases(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[] OVS_UNUSED, void *all_dbs_)
|
|
|
|
|
{
|
|
|
|
|
struct shash *all_dbs = all_dbs_;
|
|
|
|
|
const struct shash_node **nodes;
|
|
|
|
|
struct ds s;
|
|
|
|
|
size_t i;
|
|
|
|
|
|
|
|
|
|
ds_init(&s);
|
|
|
|
|
|
|
|
|
|
nodes = shash_sort(all_dbs);
|
|
|
|
|
for (i = 0; i < shash_count(all_dbs); i++) {
|
2017-12-31 21:15:58 -08:00
|
|
|
|
const struct shash_node *node = nodes[i];
|
|
|
|
|
struct db *db = node->data;
|
|
|
|
|
if (db->db) {
|
|
|
|
|
ds_put_format(&s, "%s\n", node->name);
|
|
|
|
|
}
|
2013-06-27 10:27:57 -07:00
|
|
|
|
}
|
|
|
|
|
free(nodes);
|
|
|
|
|
|
|
|
|
|
unixctl_command_reply(conn, ds_cstr(&s));
|
|
|
|
|
ds_destroy(&s);
|
|
|
|
|
}
|
|
|
|
|
|
2022-06-24 11:55:58 +02:00
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_tlog_set(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[], void *all_dbs_)
|
|
|
|
|
{
|
|
|
|
|
struct shash *all_dbs = all_dbs_;
|
|
|
|
|
const char *name_ = argv[1];
|
|
|
|
|
const char *command = argv[2];
|
|
|
|
|
bool log;
|
|
|
|
|
|
|
|
|
|
if (!strcasecmp(command, "on")) {
|
|
|
|
|
log = true;
|
|
|
|
|
} else if (!strcasecmp(command, "off")) {
|
|
|
|
|
log = false;
|
|
|
|
|
} else {
|
|
|
|
|
unixctl_command_reply_error(conn, "invalid command argument");
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
char *name = xstrdup(name_);
|
|
|
|
|
char *save_ptr = NULL;
|
|
|
|
|
|
|
|
|
|
const char *db_name = strtok_r(name, ":", &save_ptr); /* "DB" */
|
|
|
|
|
const char *tbl_name = strtok_r(NULL, ":", &save_ptr); /* "TABLE" */
|
|
|
|
|
if (!db_name || !tbl_name || strtok_r(NULL, ":", &save_ptr)) {
|
|
|
|
|
unixctl_command_reply_error(conn, "invalid DB:TABLE argument");
|
|
|
|
|
goto out;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
struct db *db = shash_find_data(all_dbs, db_name);
|
|
|
|
|
if (!db) {
|
|
|
|
|
unixctl_command_reply_error(conn, "no such database");
|
|
|
|
|
goto out;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
struct ovsdb_table *table = ovsdb_get_table(db->db, tbl_name);
|
|
|
|
|
if (!table) {
|
|
|
|
|
unixctl_command_reply_error(conn, "no such table");
|
|
|
|
|
goto out;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
ovsdb_table_logging_enable(table, log);
|
|
|
|
|
unixctl_command_reply(conn, NULL);
|
|
|
|
|
out:
|
|
|
|
|
free(name);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_tlog_list(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[] OVS_UNUSED, void *all_dbs_)
|
|
|
|
|
{
|
|
|
|
|
const struct shash_node **db_nodes;
|
|
|
|
|
struct ds s = DS_EMPTY_INITIALIZER;
|
|
|
|
|
struct shash *all_dbs = all_dbs_;
|
|
|
|
|
|
|
|
|
|
ds_put_cstr(&s, "database table logging\n");
|
|
|
|
|
ds_put_cstr(&s, "-------- ----- -------\n");
|
|
|
|
|
|
|
|
|
|
db_nodes = shash_sort(all_dbs);
|
|
|
|
|
for (size_t i = 0; i < shash_count(all_dbs); i++) {
|
|
|
|
|
const struct shash_node *db_node = db_nodes[i];
|
|
|
|
|
struct db *db = db_node->data;
|
|
|
|
|
if (db->db) {
|
|
|
|
|
const struct shash_node **tbl_nodes = shash_sort(&db->db->tables);
|
|
|
|
|
|
|
|
|
|
ds_put_format(&s, "%-16s \n", db_node->name);
|
|
|
|
|
for (size_t j = 0; j < shash_count(&db->db->tables); j++) {
|
|
|
|
|
const char *logging_enabled =
|
|
|
|
|
ovsdb_table_is_logging_enabled(tbl_nodes[j]->data)
|
|
|
|
|
? "ON" : "OFF";
|
|
|
|
|
ds_put_format(&s, " %-27s %s\n",
|
|
|
|
|
tbl_nodes[j]->name, logging_enabled);
|
|
|
|
|
}
|
|
|
|
|
free(tbl_nodes);
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
free(db_nodes);
|
|
|
|
|
|
|
|
|
|
unixctl_command_reply(conn, ds_cstr(&s));
|
|
|
|
|
ds_destroy(&s);
|
|
|
|
|
}
|
|
|
|
|
|
2016-08-23 04:05:11 -07:00
|
|
|
|
static void
|
|
|
|
|
ovsdb_server_get_sync_status(struct unixctl_conn *conn, int argc OVS_UNUSED,
|
|
|
|
|
const char *argv[] OVS_UNUSED, void *config_)
|
|
|
|
|
{
|
|
|
|
|
struct server_config *config = config_;
|
|
|
|
|
struct ds ds = DS_EMPTY_INITIALIZER;
|
2024-01-09 23:49:07 +01:00
|
|
|
|
bool any_backup = false;
|
2016-08-23 04:05:11 -07:00
|
|
|
|
|
2024-01-09 23:49:07 +01:00
|
|
|
|
const struct shash_node **db_nodes = shash_sort(config->all_dbs);
|
2016-08-23 04:05:11 -07:00
|
|
|
|
|
2024-01-09 23:49:07 +01:00
|
|
|
|
for (size_t i = 0; i < shash_count(config->all_dbs); i++) {
|
|
|
|
|
const struct db *db = db_nodes[i]->data;
|
2024-01-09 23:49:05 +01:00
|
|
|
|
|
2024-01-09 23:49:07 +01:00
|
|
|
|
if (db->config->model != SM_ACTIVE_BACKUP) {
|
|
|
|
|
continue;
|
|
|
|
|
}
|
2024-01-09 23:49:05 +01:00
|
|
|
|
|
2024-01-09 23:49:07 +01:00
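The per-database configuration this commit describes can be sketched roughly as follows. This is a minimal, illustrative model: apart from `SM_ACTIVE_BACKUP`, which appears in the code below, the enum values, field names, and the consistency helper are assumptions, not the exact ovsdb-server definitions.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative service models; only SM_ACTIVE_BACKUP is taken from the
 * code below, the rest follow the commit message's description. */
enum service_model {
    SM_UNDEFINED,       /* Not set: standalone or clustered, decided from
                         * the storage type when the database is opened. */
    SM_STANDALONE,
    SM_CLUSTERED,
    SM_RELAY,
    SM_ACTIVE_BACKUP,
};

/* Hypothetical shape of the per-database config attached to each db. */
struct db_config {
    enum service_model model;
    const char *source;         /* Source for SM_RELAY / SM_ACTIVE_BACKUP. */
    struct {
        bool backup;            /* True if this A-B database is a backup. */
        const char *sync_exclude;   /* Tables excluded from replication. */
    } ab;
};

/* Returns true if 'conf' is internally consistent: only relay and
 * active-backup databases carry a source, and only active-backup
 * databases use the 'ab' fields. */
bool
db_config_is_consistent(const struct db_config *conf)
{
    bool needs_source = conf->model == SM_RELAY
                        || conf->model == SM_ACTIVE_BACKUP;

    if (needs_source != (conf->source != NULL)) {
        return false;
    }
    if (conf->model != SM_ACTIVE_BACKUP
        && (conf->ab.backup || conf->ab.sync_exclude)) {
        return false;
    }
    return true;
}
```

With a shape like this, the sync-status handler below only has to test `db->config->model != SM_ACTIVE_BACKUP` and `db->config->ab.backup` to decide what to report.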
        any_backup = true;

        ds_put_format(&ds, "database: %s\n", db->db->name);
        ds_put_format(&ds, "state: %s\n",
                      db->config->ab.backup ? "backup" : "active");
        if (db->config->ab.backup) {
            ds_put_and_free_cstr(&ds, replication_status(db->db));
        }
        if (i + 1 < shash_count(config->all_dbs)) {
            ds_put_char(&ds, '\n');
        }
    }
    free(db_nodes);

    if (!any_backup) {
        ds_put_cstr(&ds, "state: active\n");
    }

    unixctl_command_reply(conn, ds_cstr(&ds));
    ds_destroy(&ds);
}

static void
ovsdb_server_get_db_storage_status(struct unixctl_conn *conn,
                                   int argc OVS_UNUSED,
                                   const char *argv[],
                                   void *config_)
{
    struct server_config *config = config_;
    struct shash_node *node;

    node = shash_find(config->all_dbs, argv[1]);
    if (!node) {
        unixctl_command_reply_error(conn, "Failed to find the database.");
        return;
    }

    struct db *db = node->data;

    if (!db->db) {
        unixctl_command_reply_error(conn, "Failed to find the database.");
        return;
    }

    struct ds ds = DS_EMPTY_INITIALIZER;
    char *error = ovsdb_storage_get_error(db->db->storage);

    if (!error) {
        ds_put_cstr(&ds, "status: ok");
    } else {
        ds_put_format(&ds, "status: %s", error);
        free(error);
    }
    unixctl_command_reply(conn, ds_cstr(&ds));
    ds_destroy(&ds);
}

static void
parse_options(int argc, char *argv[],
              struct shash *db_conf, struct shash *remotes,
              char **unixctl_pathp, char **run_command,
              char **sync_from, char **sync_exclude, bool *active)
{
    enum {
        OPT_REMOTE = UCHAR_MAX + 1,
        OPT_UNIXCTL,
        OPT_RUN,
        OPT_BOOTSTRAP_CA_CERT,
        OPT_PEER_CA_CERT,
        OPT_SYNC_FROM,
        OPT_SYNC_EXCLUDE,
        OPT_ACTIVE,
        OPT_NO_DBS,
ovsdb: Use column diffs for ovsdb and raft log entries.
Currently, ovsdb-server stores the complete value of a column in the
database file and in the raft log whenever that column changes.  This
means that a transaction that adds, for example, one new acl to a port
group creates a log entry with all UUIDs of all existing acls + one new
one.  The same applies to ports in logical switches and routers and to
other columns with sets in the Northbound DB.
There could be thousands of acls in one port group or thousands of ports
in a single logical switch.  And the typical use case is to add one new
one when starting a new service/VM/container or adding one new node in a
kubernetes or OpenStack cluster.  This generates a huge amount of traffic
within the ovsdb raft cluster, grows overall memory consumption and hurts
performance, since all these UUIDs are parsed and formatted to/from json
several times and stored on disk.  The more values we have in a set,
the more space a single log entry will occupy and the more time it will
take to process by ovsdb-server cluster members.
Simple test:
1. Start OVN sandbox with clustered DBs:
# make sandbox SANDBOXFLAGS='--nbdb-model=clustered --sbdb-model=clustered'
2. Run a script that creates one port group and adds 4000 acls into it:
# cat ../memory-test.sh
pg_name=my_port_group
export OVN_NB_DAEMON=$(ovn-nbctl --pidfile --detach --log-file -vsocket_util:off)
ovn-nbctl pg-add $pg_name
for i in $(seq 1 4000); do
echo "Iteration: $i"
ovn-nbctl --log acl-add $pg_name from-lport $i udp drop
done
ovn-nbctl acl-del $pg_name
ovn-nbctl pg-del $pg_name
ovs-appctl -t $(pwd)/sandbox/nb1 memory/show
ovn-appctl -t ovn-nbctl exit
---
4. Check the current memory consumption of ovsdb-server processes and
space occupied by database files:
# ls sandbox/[ns]b*.db -alh
# ps -eo vsz,rss,comm,cmd | egrep '=[ns]b[123].pid'
Test results with the current ovsdb log format:
On-disk Nb DB size : ~369 MB
RSS of Nb ovsdb-servers: ~2.7 GB
Time to finish the test: ~2m
In order to mitigate memory consumption issues and reduce computational
load on ovsdb-servers, let's store the diff between the old and new
values instead.  This makes the size of each log entry that adds a
single acl to a port group (or a port to a logical switch or anything
else like that) very small and independent from the number of already
existing acls (ports, etc.).
Added a new marker '_is_diff' to a file transaction to specify that
this transaction contains diffs instead of replacements for the existing
data.
One side effect is that this change will actually increase the size of
a file transaction that removes more than half of the entries from a
set, because the diff will be larger than the resulting new value.
However, such operations are rare.
Test results with the change applied:
On-disk Nb DB size : ~2.7 MB ---> reduced by 99%
RSS of Nb ovsdb-servers: ~580 MB ---> reduced by 78%
Time to finish the test: ~1m27s ---> reduced by 27%
After this change the new ovsdb-server is still able to read old
databases, but an old ovsdb-server will not be able to read new ones.
Since new servers can join an ovsdb cluster dynamically, it's hard to
implement any runtime mechanism to handle cases where different
versions of ovsdb-server join the cluster.  However, we still need to
handle cluster upgrades.  For this case, a special command-line
argument is added to disable the new functionality.  Documentation is
updated with the recommended way to upgrade an ovsdb cluster.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2020-12-11 21:54:47 +01:00
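The space savings come from logging only set membership changes. Here is a toy sketch of the idea, using sorted int arrays in place of UUID sets; the function name and structure are illustrative, not the actual ovsdb implementation:

```c
#include <stddef.h>

/* Walks two sorted arrays and counts elements only in 'b' (added) and
 * only in 'a' (removed) -- i.e. the number of elements a diff-based log
 * entry would have to store instead of the full new value. */
size_t
set_diff_size(const int *a, size_t n_a, const int *b, size_t n_b)
{
    size_t i = 0, j = 0, diff = 0;

    while (i < n_a && j < n_b) {
        if (a[i] == b[j]) {
            i++, j++;               /* Unchanged element: not in the diff. */
        } else if (a[i] < b[j]) {
            i++, diff++;            /* Element removed from the set. */
        } else {
            j++, diff++;            /* Element added to the set. */
        }
    }
    /* Any leftovers are pure removals (from 'a') or additions (to 'b'). */
    return diff + (n_a - i) + (n_b - j);
}
```

Adding one acl to a port group of 4000 then yields a diff of size 1 instead of a 4001-element replacement, which is where the ~99% on-disk reduction above comes from.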
        OPT_FILE_COLUMN_DIFF,
        OPT_FILE_NO_DATA_CONVERSION,
        OPT_CONFIG_FILE,
        VLOG_OPTION_ENUMS,
        DAEMON_OPTION_ENUMS,
        SSL_OPTION_ENUMS,
        OVS_REPLAY_OPTION_ENUMS,
    };

    static const struct option long_options[] = {
        {"remote", required_argument, NULL, OPT_REMOTE},
        {"unixctl", required_argument, NULL, OPT_UNIXCTL},
#ifndef _WIN32
        {"run", required_argument, NULL, OPT_RUN},
#endif
        {"help", no_argument, NULL, 'h'},
        {"version", no_argument, NULL, 'V'},
        DAEMON_LONG_OPTIONS,
        VLOG_LONG_OPTIONS,
        {"bootstrap-ca-cert", required_argument, NULL, OPT_BOOTSTRAP_CA_CERT},
        {"peer-ca-cert", required_argument, NULL, OPT_PEER_CA_CERT},
        STREAM_SSL_LONG_OPTIONS,
        OVS_REPLAY_LONG_OPTIONS,
        {"sync-from", required_argument, NULL, OPT_SYNC_FROM},
        {"sync-exclude-tables", required_argument, NULL, OPT_SYNC_EXCLUDE},
        {"active", no_argument, NULL, OPT_ACTIVE},
        {"no-dbs", no_argument, NULL, OPT_NO_DBS},
        {"disable-file-column-diff", no_argument, NULL, OPT_FILE_COLUMN_DIFF},
        {"disable-file-no-data-conversion", no_argument, NULL,
         OPT_FILE_NO_DATA_CONVERSION},
        {"config-file", required_argument, NULL, OPT_CONFIG_FILE},
        {NULL, 0, NULL, 0},
    };
    char *short_options = ovs_cmdl_long_options_to_short_options(long_options);
    bool add_default_db = true;

    *sync_from = NULL;
    *sync_exclude = NULL;
    shash_init(db_conf);
    shash_init(remotes);

    for (;;) {
        int c;

        c = getopt_long(argc, argv, short_options, long_options, NULL);
        if (c == -1) {
            break;
        }

        switch (c) {
        case OPT_REMOTE:
            add_remote(remotes, optarg, NULL);
            break;

        case OPT_UNIXCTL:
            *unixctl_pathp = optarg;
            break;

        case OPT_RUN:
            *run_command = optarg;
            break;

        case 'h':
            usage();

        case 'V':
            ovs_print_version(0, 0);
            exit(EXIT_SUCCESS);

        VLOG_OPTION_HANDLERS
        DAEMON_OPTION_HANDLERS

        case 'p':
            private_key_file = optarg;
            break;

        case 'c':
            certificate_file = optarg;
            break;

        case 'C':
            ca_cert_file = optarg;
            bootstrap_ca_cert = false;
            break;

        case OPT_SSL_PROTOCOLS:
            ssl_protocols = optarg;
            break;

        case OPT_SSL_CIPHERS:
            ssl_ciphers = optarg;
            break;

        case OPT_SSL_CIPHERSUITES:
            ssl_ciphersuites = optarg;
            break;

        case OPT_BOOTSTRAP_CA_CERT:
            ca_cert_file = optarg;
            bootstrap_ca_cert = true;
            break;

        case OPT_PEER_CA_CERT:
            stream_ssl_set_peer_ca_cert_file(optarg);
            break;

        OVS_REPLAY_OPTION_HANDLERS

        case OPT_SYNC_FROM:
            *sync_from = xstrdup(optarg);
            break;

        case OPT_SYNC_EXCLUDE: {
            char *err = parse_excluded_tables(optarg);
            if (err) {
                ovs_fatal(0, "%s", err);
            }
            *sync_exclude = xstrdup(optarg);
            break;
        }
        case OPT_ACTIVE:
            *active = true;
            break;

        case OPT_NO_DBS:
            add_default_db = false;
            break;

        case OPT_FILE_COLUMN_DIFF:
            ovsdb_file_column_diff_disable();
            break;

        case OPT_FILE_NO_DATA_CONVERSION:
            ovsdb_no_data_conversion_disable();
            break;

        case OPT_CONFIG_FILE:
            free(config_file_path);
            config_file_path = abs_file_name(ovs_dbdir(), optarg);
            add_default_db = false;
            break;

        case '?':
            exit(EXIT_FAILURE);

        default:
            abort();
        }
    }
    free(short_options);

    argc -= optind;
    argv += optind;

    if (config_file_path) {
        if (*sync_from || *sync_exclude || *active) {
            ovs_fatal(0, "--config-file is mutually exclusive with "
                      "--sync-from, --sync-exclude and --active");
        }
        if (shash_count(remotes)) {
            ovs_fatal(0, "--config-file is mutually exclusive with --remote");
        }
        if (argc > 0) {
            ovs_fatal(0, "Databases should be specified in a config file");
        }
    } else if (argc > 0) {
        for (int i = 0; i < argc; i++) {
            add_database_config(db_conf, argv[i], *sync_from, *sync_exclude,
                                *active);
        }
    } else if (add_default_db) {
        char *filename = xasprintf("%s/conf.db", ovs_dbdir());

        add_database_config(db_conf, filename, *sync_from, *sync_exclude,
                            *active);
        free(filename);
    }
}

static void
|
|
|
|
|
usage(void)
|
|
|
|
|
{
|
|
|
|
|
printf("%s: Open vSwitch database server\n"
|
2012-09-07 10:07:03 -07:00
|
|
|
|
"usage: %s [OPTIONS] [DATABASE...]\n"
|
|
|
|
|
"where each DATABASE is a database file in ovsdb format.\n"
|
|
|
|
|
"The default DATABASE, if none is given, is\n%s/conf.db.\n",
|
|
|
|
|
program_name, program_name, ovs_dbdir());
|
2009-11-04 15:11:44 -08:00
|
|
|
|
printf("\nJSON-RPC options (may be specified any number of times):\n"
|
2010-01-04 10:05:51 -08:00
|
|
|
|
" --remote=REMOTE connect or listen to REMOTE\n");
|
2009-12-21 13:13:48 -08:00
|
|
|
|
stream_usage("JSON-RPC", true, true, true);
|
2024-01-09 23:49:08 +01:00
|
|
|
|
printf("\nConfiguration file:\n"
|
|
|
|
|
" --config-file PATH Use configuration file as a source of\n"
|
|
|
|
|
" database and JSON-RPC configuration.\n"
|
|
|
|
|
" Mutually exclusive with the DATABASE,\n"
|
|
|
|
|
" JSON-RPC and Syncing options.\n"
|
|
|
|
|
" Assumes --no-dbs.\n");
|
2009-11-04 15:11:44 -08:00
|
|
|
|
daemon_usage();
|
|
|
|
|
vlog_usage();
|
2016-06-24 17:13:06 -07:00
|
|
|
|
replication_usage();
|
2021-05-27 15:29:02 +02:00
|
|
|
|
ovs_replay_usage();
|
2009-11-04 15:11:44 -08:00
|
|
|
|
printf("\nOther options:\n"
|
2010-02-12 11:17:17 -08:00
|
|
|
|
" --run COMMAND run COMMAND as subprocess then exit\n"
|
2010-03-23 11:22:42 -07:00
|
|
|
|
" --unixctl=SOCKET override default control socket name\n"
|
2024-01-09 23:49:08 +01:00
|
|
|
|
" --no-dbs do not add default database\n"
|
ovsdb: Use column diffs for ovsdb and raft log entries.
Currently, ovsdb-server stores the complete value of a column in the
database file and in the raft log whenever that column changes.  This
means that a transaction that adds, for example, one new acl to a port
group creates a log entry with the UUIDs of all existing acls plus the
new one.  The same applies to ports in logical switches and routers
and to other set-typed columns in the Northbound DB.
There could be thousands of acls in one port group or thousands of
ports in a single logical switch.  And the typical use case is to add
one new entry when starting a new service/VM/container or adding a new
node to a Kubernetes or OpenStack cluster.  This generates a huge
amount of traffic within the ovsdb raft cluster, increases overall
memory consumption and hurts performance, since all these UUIDs are
parsed and formatted to/from JSON several times and stored on disk.
The more values a set contains, the more space a single log entry
occupies and the more time it takes for ovsdb-server cluster members
to process it.
Simple test:
1. Start OVN sandbox with clustered DBs:
# make sandbox SANDBOXFLAGS='--nbdb-model=clustered --sbdb-model=clustered'
2. Run a script that creates one port group and adds 4000 acls into it:
# cat ../memory-test.sh
pg_name=my_port_group
export OVN_NB_DAEMON=$(ovn-nbctl --pidfile --detach --log-file -vsocket_util:off)
ovn-nbctl pg-add $pg_name
for i in $(seq 1 4000); do
echo "Iteration: $i"
ovn-nbctl --log acl-add $pg_name from-lport $i udp drop
done
ovn-nbctl acl-del $pg_name
ovn-nbctl pg-del $pg_name
ovs-appctl -t $(pwd)/sandbox/nb1 memory/show
ovn-appctl -t ovn-nbctl exit
---
4. Check the current memory consumption of ovsdb-server processes and
the space occupied by database files:
# ls sandbox/[ns]b*.db -alh
# ps -eo vsz,rss,comm,cmd | egrep '=[ns]b[123].pid'
Test results with the current ovsdb log format:
On-disk Nb DB size : ~369 MB
RSS of Nb ovsdb-servers: ~2.7 GB
Time to finish the test: ~2m
In order to mitigate memory consumption issues and reduce the
computational load on ovsdb-servers, let's store the diff between the
old and new values instead.  This makes the size of each log entry
that adds a single acl to a port group (or a port to a logical switch
or anything else like that) very small and independent of the number
of already existing acls (ports, etc.).
A new marker, '_is_diff', is added to a file transaction to specify
that this transaction contains diffs instead of replacements for the
existing data.
One side effect is that this change will actually increase the size of
a file transaction that removes more than half of the entries from a
set, because the diff will be larger than the resulting new value.
However, such operations are rare.
Test results with the change applied:
On-disk Nb DB size : ~2.7 MB ---> reduced by 99%
RSS of Nb ovsdb-servers: ~580 MB ---> reduced by 78%
Time to finish the test: ~1m27s ---> reduced by 27%
After this change the new ovsdb-server is still able to read old
databases, but an old ovsdb-server will not be able to read new ones.
Since new servers can join an ovsdb cluster dynamically, it's hard to
implement any runtime mechanism to handle cases where different
versions of ovsdb-server join the cluster.  However, we still need to
handle cluster upgrades.  For this case, a special command-line
argument was added to disable the new functionality.  Documentation
is updated with the recommended way to upgrade an ovsdb cluster.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2020-12-11 21:54:47 +01:00
|
|
|
|
" --disable-file-column-diff\n"
|
|
|
|
|
" don't use column diff in database file\n"
|
2009-11-04 15:11:44 -08:00
|
|
|
|
" -h, --help display this help message\n"
|
|
|
|
|
" -V, --version display version information\n");
|
|
|
|
|
exit(EXIT_SUCCESS);
|
|
|
|
|
}
|
2013-06-13 12:25:39 -07:00
|
|
|
|
|
2013-06-27 10:27:57 -07:00
|
|
|
|
static struct json *
|
|
|
|
|
sset_to_json(const struct sset *sset)
|
|
|
|
|
{
|
|
|
|
|
struct json *array;
|
|
|
|
|
const char *s;
|
|
|
|
|
|
|
|
|
|
array = json_array_create_empty();
|
|
|
|
|
SSET_FOR_EACH (s, sset) {
|
|
|
|
|
json_array_add(array, json_string_create(s));
|
|
|
|
|
}
|
|
|
|
|
return array;
|
|
|
|
|
}
|
|
|
|
|
|
2024-01-09 23:49:03 +01:00
|
|
|
|
static struct json *
|
|
|
|
|
remotes_to_json(const struct shash *remotes)
|
|
|
|
|
{
|
|
|
|
|
const struct shash_node *node;
|
|
|
|
|
struct json *json;
|
|
|
|
|
|
|
|
|
|
json = json_object_create();
|
|
|
|
|
SHASH_FOR_EACH (node, remotes) {
|
|
|
|
|
json_object_put(json, node->name,
|
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database, and attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup.  Relays and A-B databases have a source, and
each source has its own set of JSON-RPC session options.  A-B
databases also have an indicator of whether they are active or
backup and an optional list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after an OVSDB crash.  For that, the save/load functions are
also updated.
This change is written in a generic way, assuming all the databases
can have different configurations, including the service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and should also report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and is determined from
the storage type while opening the database.  If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database.  This should never happen with an internally generated
config file, but may happen in the future with user-provided
configuration files.  In this case, the service model is used
for verification purposes only, in case the administrator wants
to assert a particular model.
Since the database 'source' connections can't use the 'role' or
'read-only' options, a new flag is added to the corresponding
JSON parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
ovsdb_jsonrpc_options_to_json(node->data, false));
|
|
|
|
|
}
|
|
|
|
|
return json;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static struct json *
|
|
|
|
|
db_config_to_json(const struct db_config *conf)
|
|
|
|
|
{
|
|
|
|
|
struct json *json;
|
|
|
|
|
|
|
|
|
|
json = json_object_create();
|
|
|
|
|
|
|
|
|
|
if (conf->model != SM_UNDEFINED) {
|
|
|
|
|
json_object_put(json, "service-model",
|
|
|
|
|
json_string_create(
|
|
|
|
|
service_model_to_string(conf->model)));
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if (conf->source) {
|
|
|
|
|
struct json *source = json_object_create();
|
|
|
|
|
|
|
|
|
|
json_object_put(source, conf->source,
|
|
|
|
|
ovsdb_jsonrpc_options_to_json(conf->options, true));
|
|
|
|
|
json_object_put(json, "source", source);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if (conf->model == SM_ACTIVE_BACKUP) {
|
|
|
|
|
if (conf->ab.sync_exclude) {
|
|
|
|
|
struct sset set = SSET_INITIALIZER(&set);
|
|
|
|
|
|
|
|
|
|
sset_from_delimited_string(&set, conf->ab.sync_exclude, " ,");
|
|
|
|
|
json_object_put(json, "exclude-tables", sset_to_json(&set));
|
|
|
|
|
sset_destroy(&set);
|
|
|
|
|
}
|
|
|
|
|
json_object_put(json, "backup", json_boolean_create(conf->ab.backup));
|
|
|
|
|
}
|
|
|
|
|
return json;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
static struct json *
|
|
|
|
|
databases_to_json(const struct shash *db_conf)
|
|
|
|
|
{
|
|
|
|
|
const struct shash_node *node;
|
|
|
|
|
struct json *json;
|
|
|
|
|
|
|
|
|
|
json = json_object_create();
|
|
|
|
|
SHASH_FOR_EACH (node, db_conf) {
|
|
|
|
|
json_object_put(json, node->name, db_config_to_json(node->data));
|
2024-01-09 23:49:03 +01:00
|
|
|
|
}
|
|
|
|
|
return json;
|
|
|
|
|
}
|
|
|
|
|
|
2013-06-27 10:27:57 -07:00
|
|
|
|
/* Truncates and replaces the contents of 'config_file' by a representation of
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
* 'remotes', 'db_conf' and a few global replication parameters. */
|
2013-06-13 12:25:39 -07:00
|
|
|
|
static void
|
2024-01-09 23:49:03 +01:00
|
|
|
|
save_config__(FILE *config_file, const struct shash *remotes,
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
const struct shash *db_conf, const char *sync_from,
|
2016-08-23 04:05:11 -07:00
|
|
|
|
const char *sync_exclude, bool is_backup)
|
2013-06-13 12:25:39 -07:00
|
|
|
|
{
|
2013-06-27 10:27:57 -07:00
|
|
|
|
struct json *obj;
|
2013-06-13 12:25:39 -07:00
|
|
|
|
char *s;
|
|
|
|
|
|
|
|
|
|
if (ftruncate(fileno(config_file), 0) == -1) {
|
2013-06-24 10:54:49 -07:00
|
|
|
|
VLOG_FATAL("failed to truncate temporary file (%s)",
|
|
|
|
|
ovs_strerror(errno));
|
2013-06-13 12:25:39 -07:00
|
|
|
|
}
|
|
|
|
|
|
2013-06-27 10:27:57 -07:00
|
|
|
|
obj = json_object_create();
|
2024-01-09 23:49:03 +01:00
|
|
|
|
json_object_put(obj, "remotes", remotes_to_json(remotes));
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
json_object_put(obj, "databases", databases_to_json(db_conf));
|
|
|
|
|
|
2016-08-23 04:05:11 -07:00
|
|
|
|
if (sync_from) {
|
|
|
|
|
json_object_put(obj, "sync_from", json_string_create(sync_from));
|
|
|
|
|
}
|
|
|
|
|
if (sync_exclude) {
|
|
|
|
|
json_object_put(obj, "sync_exclude",
|
|
|
|
|
json_string_create(sync_exclude));
|
|
|
|
|
}
|
|
|
|
|
json_object_put(obj, "is_backup", json_boolean_create(is_backup));
|
|
|
|
|
|
2013-06-27 10:27:57 -07:00
|
|
|
|
s = json_to_string(obj, 0);
|
|
|
|
|
json_destroy(obj);
|
2013-06-13 12:25:39 -07:00
|
|
|
|
|
|
|
|
|
if (fseek(config_file, 0, SEEK_SET) != 0
|
|
|
|
|
|| fputs(s, config_file) == EOF
|
|
|
|
|
|| fflush(config_file) == EOF) {
|
2013-06-24 10:54:49 -07:00
|
|
|
|
VLOG_FATAL("failed to write temporary file (%s)", ovs_strerror(errno));
|
2013-06-13 12:25:39 -07:00
|
|
|
|
}
|
|
|
|
|
free(s);
|
|
|
|
|
}
|
|
|
|
|
|
2013-06-27 10:27:57 -07:00
|
|
|
|
/* Truncates and replaces the contents of 'config_file' by a representation of
|
|
|
|
|
* 'config'. */
|
2013-06-13 12:25:39 -07:00
|
|
|
|
static void
|
2013-06-27 10:27:57 -07:00
|
|
|
|
save_config(struct server_config *config)
|
|
|
|
|
{
|
|
|
|
|
struct shash_node *node;
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
struct shash db_conf;
|
2013-06-27 10:27:57 -07:00
|
|
|
|
|
ovsdb-server: Allow user-provided config files.
OVSDB server maintains a temporary file with the current database
configuration in case it is restarted by a monitor process
after a crash.  On startup, the configuration from command-line
arguments is stored there in JSON format, and whenever the user
changes the configuration with various unixctl commands, those
changes are added to the file.  When restarted after a
crash, it reads the configuration from the file and continues
with all the necessary remotes and databases.
This change allows the configuration to come from an external
user-provided file instead.  The file can be
specified with a --config-file command-line argument, and it is
mutually exclusive with most other command-line arguments that
set up remotes or databases.  It is also mutually exclusive with
the use of appctl commands that modify the same configuration,
e.g. add/remove-db or add/remove-remote.
If the user wants to change the configuration of a running server,
they may change the file and call the ovsdb-server/reload appctl.
OVSDB server will open the file, read and parse it, compare the
new configuration with the current one and adjust the running
configuration as needed.  OVSDB server will try to keep existing
databases and connections intact if the change can be applied
without disrupting normal operation.
User-provided files are not trustworthy, so extra checks were
added to ensure a correct file format.  If the file cannot be
correctly parsed, e.g. it contains invalid JSON, no changes will
be applied and the server will keep using the previous
configuration until the next reload.
If a config file is provided for active-backup databases, permanent
disconnection of one of the backup databases no longer leads to
switching all other databases to 'active'.  Only the disconnected
one will transition, since each of them has its own record in
the configuration file.
With this change, users can run all types of databases within
the same ovsdb-server process at the same time.
A simple configuration may look like this:
{
"remotes": {
"punix:db.sock": {},
"pssl:6641": {
"inactivity-probe": 16000,
"read-only": false,
"role": "ovn-controller"
}
},
"databases": {
"conf.db": {},
"sb.db": {
"service-model": "active-backup",
"backup": true,
"source": {
"tcp:127.0.0.1:6644": null
}
},
"OVN_Northbound": {
"service-model": "relay",
"source": {
"ssl:[fe:::1]:6642,ssl:[fe:::2]:6642": {
"max-backoff": 8000,
"inactivity-probe": 10000
}
}
}
}
}
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:15 +01:00
|
|
|
|
if (config_file_path) {
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
shash_init(&db_conf);
|
2013-06-27 10:27:57 -07:00
|
|
|
|
SHASH_FOR_EACH (node, config->all_dbs) {
|
|
|
|
|
struct db *db = node->data;
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
|
2017-12-22 11:41:11 -08:00
|
|
|
|
if (node->name[0] != '_') {
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
shash_add(&db_conf, db->filename, db->config);
|
2017-12-22 11:41:11 -08:00
|
|
|
|
}
|
2013-06-27 10:27:57 -07:00
|
|
|
|
}
|
|
|
|
|
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
save_config__(config->config_tmpfile, config->remotes, &db_conf,
|
2016-08-23 04:05:11 -07:00
|
|
|
|
*config->sync_from, *config->sync_exclude,
|
|
|
|
|
*config->is_backup);
|
2013-06-27 10:27:57 -07:00
|
|
|
|
|
ovsdb-server: Database config isolation.
2024-01-09 23:49:07 +01:00
|
|
|
|
shash_destroy(&db_conf);
|
2013-06-27 10:27:57 -07:00
|
|
|
|
}
|
|
|
|
|
|
ovsdb-server: Allow user-provided config files.
OVSDB server maintains a temporary file with the current database
configuration for the case where it is restarted by a monitor process
after a crash. On startup, the configuration from command-line
arguments is stored there in JSON format; also, whenever the user
changes the configuration with various unixctl commands, those
changes are added to the file. When restarted after a
crash, it reads the configuration from the file and continues
with all the necessary remotes and databases.
This change allows it to be an external user-provided file that
OVSDB server will read the configuration from. The file can be
specified with a --config-file command-line argument, and it is
mutually exclusive with most other command-line arguments that
set up remotes or databases; it is also mutually exclusive with
the use of appctl commands that modify the same configuration, e.g.
add/remove-db or add/remove-remote.
If the user wants to change the configuration of a running server,
they may change the file and call the ovsdb-server/reload appctl.
OVSDB server will open the file, read and parse it, compare the
new configuration with the current one, and adjust the running
configuration as needed. OVSDB server will try to keep existing
databases and connections intact, if the change can be applied
without disrupting normal operation.
User-provided files are not trustworthy, so extra checks were
added to ensure a correct file format. If the file cannot be
correctly parsed, e.g. contains invalid JSON, no changes will
be applied and the server will keep using the previous
configuration until the next reload.
If a config file is provided for active-backup databases, permanent
disconnection of one of the backup databases no longer leads to
switching all other databases to 'active'. Only the disconnected
one will transition, since all of them have their own records in
the configuration file.
With this change, users can run all types of databases within
the same ovsdb-server process at the same time.
A simple configuration may look like this:
    {
        "remotes": {
            "punix:db.sock": {},
            "pssl:6641": {
                "inactivity-probe": 16000,
                "read-only": false,
                "role": "ovn-controller"
            }
        },
        "databases": {
            "conf.db": {},
            "sb.db": {
                "service-model": "active-backup",
                "backup": true,
                "source": {
                    "tcp:127.0.0.1:6644": null
                }
            },
            "OVN_Northbound": {
                "service-model": "relay",
                "source": {
                    "ssl:[fe:::1]:6642,ssl:[fe:::2]:6642": {
                        "max-backoff": 8000,
                        "inactivity-probe": 10000
                    }
                }
            }
        }
    }
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:15 +01:00
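The message above stresses that user-provided files are untrusted and must be validated before use. A minimal, self-contained sketch of the first such check (the real code uses struct json from openvswitch/json.h; the tiny stand-in type here and the helper name config_root_is_valid are illustrative assumptions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the JSON type tags used by the OVS json library. */
enum json_type { JSON_NULL, JSON_OBJECT, JSON_ARRAY, JSON_STRING };

struct json {
    enum json_type type;
};

/* A config file root must be a JSON object (holding "remotes" and
 * "databases" members); anything else is rejected and the previous
 * configuration stays in effect. */
static bool
config_root_is_valid(const struct json *json)
{
    if (!json || json->type != JSON_OBJECT) {
        fprintf(stderr, "config: root is not a JSON object\n");
        return false;
    }
    return true;
}
```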
static bool
remotes_from_json(struct shash *remotes, const struct json *json)
{
    struct ovsdb_jsonrpc_options *options;
    const struct shash_node *node;
    const struct shash *object;

    free_remotes(remotes);

    ovs_assert(json);
    if (json->type == JSON_NULL) {
        return true;
    }

    if (json->type != JSON_OBJECT) {
        VLOG_WARN("config: 'remotes' is not a JSON object");
        return false;
    }

    object = json_object(json);
    SHASH_FOR_EACH (node, object) {
        options = ovsdb_jsonrpc_default_options(node->name);
        shash_add(remotes, node->name, options);

        json = node->data;
        if (json->type == JSON_OBJECT) {
            ovsdb_jsonrpc_options_update_from_json(options, node->data, false);
        } else if (json->type != JSON_NULL) {
            VLOG_WARN("%s: JSON-RPC options are not a JSON object or null",
                      node->name);
            free_remotes(remotes);
            return false;
        }
    }

    return true;
}

static struct db_config *
db_config_from_json(const char *name, const struct json *json)
{
    const struct json *model, *source, *sync_exclude, *backup;
    struct db_config *conf = xzalloc(sizeof *conf);
    struct ovsdb_parser parser;
    struct ovsdb_error *error;

    conf->model = SM_UNDEFINED;

    ovs_assert(json);
    if (json->type == JSON_NULL) {
        return conf;
    }

    ovsdb_parser_init(&parser, json, "database %s", name);

    model = ovsdb_parser_member(&parser, "service-model",
                                OP_STRING | OP_OPTIONAL);
    if (model) {
        conf->model = service_model_from_string(json_string(model));
        if (conf->model == SM_UNDEFINED) {
            ovsdb_parser_raise_error(&parser,
                "'%s' is not a valid service model", json_string(model));
        }
    }

ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database, and attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup.  Relays and A-B databases have a source, and
each source has its own set of JSON-RPC session options.  A-B
databases also carry an indicator of whether they are active or
backup and an optional list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after an OVSDB crash.  For that, the save/load functions are
also updated.
This change is written in a generic way, assuming all the databases
can have different configurations, including the service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database.  If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database.  This should never happen with an internally generated
config file, but may happen in the future with user-provided
configuration files.  In this case the service model is used
for verification purposes only, if the administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag is added to the corresponding
JSON parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
    if (conf->model == SM_ACTIVE_BACKUP) {
        backup = ovsdb_parser_member(&parser, "backup", OP_BOOLEAN);
        conf->ab.backup = backup ? json_boolean(backup) : false;

        sync_exclude = ovsdb_parser_member(&parser, "exclude-tables",
                                           OP_ARRAY | OP_OPTIONAL);
        if (sync_exclude) {
            struct sset set = SSET_INITIALIZER(&set);
            size_t n = json_array_size(sync_exclude);

            for (size_t i = 0; i < n; i++) {
                const struct json *exclude = json_array_at(sync_exclude, i);

                if (exclude->type != JSON_STRING) {
                    ovsdb_parser_raise_error(&parser,
                        "'exclude-tables' must contain strings");
                    break;
                }
                sset_add(&set, json_string(exclude));
            }
            conf->ab.sync_exclude = sset_join(&set, ",", "");
            sset_destroy(&set);
        }
    }
    if (conf->model == SM_ACTIVE_BACKUP || conf->model == SM_RELAY) {
        enum ovsdb_parser_types type = OP_OBJECT;

        if (conf->model == SM_ACTIVE_BACKUP && !conf->ab.backup) {
            /* Active database doesn't have to have a source. */
            type |= OP_OPTIONAL;
        }
        source = ovsdb_parser_member(&parser, "source", type);

        if (source && shash_count(json_object(source)) != 1) {
            ovsdb_parser_raise_error(&parser,
                "'source' should be an object with exactly one element");
        } else if (source) {
            const struct shash_node *node = shash_first(json_object(source));
            const struct json *options;

            ovs_assert(node);
            conf->source = xstrdup(node->name);
            options = node->data;

            conf->options = get_jsonrpc_options(conf->source, conf->model);

            if (options->type == JSON_OBJECT) {
                ovsdb_jsonrpc_options_update_from_json(conf->options,
                                                       options, true);
            } else if (options->type != JSON_NULL) {
                ovsdb_parser_raise_error(&parser,
                    "JSON-RPC options is not a JSON object or null");
            }
        }
    }

    error = ovsdb_parser_finish(&parser);
    if (error) {
        char *s = ovsdb_error_to_string_free(error);

        VLOG_WARN("%s", s);
        free(s);
        db_config_destroy(conf);
        return NULL;
    }

    return conf;
}
static bool
databases_from_json(struct shash *db_conf, const struct json *json)
{
    const struct shash_node *node;
    const struct shash *object;

    free_database_configs(db_conf);

    ovs_assert(json);
    if (json->type == JSON_NULL) {
        return true;
    }
    if (json->type != JSON_OBJECT) {
        VLOG_WARN("config: 'databases' is not a JSON object or null");
        return false;
    }

    object = json_object(json);
    SHASH_FOR_EACH (node, object) {
        struct db_config *conf = db_config_from_json(node->name, node->data);

        if (conf) {
            shash_add(db_conf, node->name, conf);
        } else {
            free_database_configs(db_conf);
            return false;
        }
    }

    return true;
}

/* Clears and replaces 'remotes' and 'db_conf' by a configuration read from
 * 'config_file', which must have been previously written by save_config()
 * or provided by the user with --config-file.
 *
 * Returns 'true' if parsing was successful, 'false' otherwise. */
static bool
load_config(FILE *config_file, struct shash *remotes,
ovsdb-server: Database config isolation.
Add a new structure 'db_config' that holds the user-provided
configuration of the database. And attach this configuration
to each of the databases on the server.
Each database has a service model: standalone, clustered, relay
or active-backup. Relays and A-B databases have a source, each
source has its own set of JSON-RPC session options. A-B also
have an indicator of it being active or backup and an optional
list of tables to exclude from replication.
All of that should be stored per database in the temporary
configuration file that is used in order to restore the config
after the OVSDB crash. For that, the save/load functions are
also updates.
This change is written in generic way assuming all the databases
can have different configuration including service model.
The only user-visible change here is a slight modification of
the ovsdb-server/sync-status appctl, since it now needs to
skip databases that are not active-backup and also should report
active-backup databases that are currently active, i.e. not
added to the replication module.
If the service model is not defined in the configuration, it
is assumed to be standalone or clustered, and determined from
the storage type while opening the database. If the service
model is defined, but doesn't match the actual storage type
in the database file, ovsdb-server will fail to open the
database. This should never happen with internally generated
config file, but may happen in the future with user-provided
configuration files. In this case the service model is used
for verification purposes only, if administrator wants to
assert a particular model.
Since the database 'source' connections can't use 'role' or
'read-only' options, a new flag added to corresponding JSON
parsing functions to skip these fields.
Acked-by: Dumitru Ceara <dceara@redhat.com>
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
2024-01-09 23:49:07 +01:00
|
|
|
|
struct shash *db_conf, char **sync_from,
|
2024-01-09 23:49:03 +01:00
|
|
|
|
char **sync_exclude, bool *is_backup)
|
2013-06-27 10:27:57 -07:00
|
|
|
|
{
|
|
|
|
|
struct json *json;
|
2013-06-13 12:25:39 -07:00
|
|
|
|
|
|
|
|
|
if (fseek(config_file, 0, SEEK_SET) != 0) {
|
        VLOG_WARN("config: file seek failed (%s)", ovs_strerror(errno));
        return false;
    }
    json = json_from_stream(config_file);
    if (json->type == JSON_STRING) {
VLOG_WARN("config: reading JSON failed (%s)", json_string(json));
        json_destroy(json);
        return false;
    }
    if (json->type != JSON_OBJECT) {
        VLOG_WARN("configuration in a file must be a JSON object");
        json_destroy(json);
        return false;
    }

if (!remotes_from_json(remotes,
                           shash_find_data(json_object(json), "remotes"))) {
        VLOG_WARN("config: failed to parse 'remotes'");
        json_destroy(json);
        return false;
    }
    if (!databases_from_json(db_conf, shash_find_data(json_object(json),
                                                      "databases"))) {
        VLOG_WARN("config: failed to parse 'databases'");
        free_remotes(remotes);
        json_destroy(json);
        return false;
    }

struct json *string;
    string = shash_find_data(json_object(json), "sync_from");
    free(*sync_from);
    *sync_from = string ? xstrdup(json_string(string)) : NULL;

    string = shash_find_data(json_object(json), "sync_exclude");
    free(*sync_exclude);
    *sync_exclude = string ? xstrdup(json_string(string)) : NULL;

struct json *boolean = shash_find_data(json_object(json), "is_backup");
    *is_backup = boolean ? json_boolean(boolean) : false;

json_destroy(json);
return true;
}