when the QPDB is implemented, we will need to have both qpdb_p.h and
rbtdb_p.h. in order to prevent name collisions or code duplication,
this commit adds a generic private header file, db_p.h, containing
structures and macros that will be used by both databases.
some functions and structs have been renamed to refer more specifically
to the RBT database, in order to avoid namespace collisions with similar
functions and structs that the QPDB will need later.
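a minimal sketch of what such a shared private header might contain; the
structure and macro names below are illustrative placeholders, not the
actual contents of db_p.h:

    /* db_p.h -- illustrative placeholder only; the real contents differ */
    #pragma once

    #include <stdint.h>

    /* a header shape that both the RBT and QP databases could share
     * (hypothetical names) */
    typedef struct db_common_header {
        unsigned int attributes;
        uint32_t     expire;
    } db_common_header_t;

    #define DB_HEADER_ATTR_SET(h, a) ((h)->attributes |= (a))
    #define DB_HEADER_ATTR_CLR(h, a) ((h)->attributes &= ~(a))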
refactor the wildcard matching code to make it a bit easier to
understand, in hopes that it will reduce the difficulty of converting
from RBTDB to QPDB later.
there are also some minor optimizations: previously, after stepping
backward to find the predecessor, we stepped forward again *from* the
predecessor to find the successor. we now reset the rbtnode chain to
its original starting point before stepping forward, which eliminates
some unnecessary processing. and if neither a predecessor nor a successor
is found, we return early rather than carrying on with an unnecessary
effort to match labels.
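a hypothetical sketch of the new control flow (the stand-in types and
helpers below are simplified; the real code uses the rbtnode chain API):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct chain { int pos; } chain_t;
    typedef struct node node_t;

    /* trivial stand-ins for the real chain-stepping helpers */
    static bool step_backward(chain_t *c, node_t **n) { (void)c; *n = NULL; return (false); }
    static bool step_forward(chain_t *c, node_t **n) { (void)c; *n = NULL; return (false); }
    static bool match_labels(node_t *p, node_t *s) { (void)p; (void)s; return (false); }

    static bool
    find_wildcard_neighbors(chain_t start) {
        chain_t chain = start;
        node_t *pred = NULL, *succ = NULL;

        bool have_pred = step_backward(&chain, &pred);

        /* reset to the original starting point instead of stepping
         * forward from the predecessor */
        chain = start;
        bool have_succ = step_forward(&chain, &succ);

        if (!have_pred && !have_succ) {
            return (false); /* nothing to match against: return early */
        }
        return (match_labels(pred, succ));
    }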
Coverity detected that address->type.sa was too small when copying a
struct sockaddr_in6; use the alternative union member
address->type.sin6 instead.
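A minimal sketch of the pattern, using a simplified stand-in for the
isc_sockaddr_t union:

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    /* simplified stand-in for the isc_sockaddr_t address union */
    struct addr {
        union {
            struct sockaddr     sa;
            struct sockaddr_in  sin;
            struct sockaddr_in6 sin6;
        } type;
    };

    static void
    set_v6(struct addr *address, const struct sockaddr_in6 *src) {
        /* copying into type.sa would overflow its smaller size;
         * type.sin6 matches the size of the source */
        memcpy(&address->type.sin6, src, sizeof(*src));
    }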
We were missing a test where a single owner name has multiple types with
differing letter case. The generated RRSIGs and NSEC records then have a
different case than the signed records, and the message parser has to
cope with that and treat everything as the same owner name.
The case-insensitive matching in isc_ht was effectively broken: only the
hash value computation was case-insensitive, while the key comparison
was always case-sensitive.
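A simplified sketch of the idea (not the actual isc_ht code): in
case-insensitive mode both the hash and the key comparison must fold
case.

    #include <ctype.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static bool
    keys_equal(const uint8_t *a, const uint8_t *b, size_t len,
               bool case_sensitive) {
        if (case_sensitive) {
            return (memcmp(a, b, len) == 0);
        }
        /* case-insensitive comparison matching the case-insensitive hash */
        for (size_t i = 0; i < len; i++) {
            if (tolower(a[i]) != tolower(b[i])) {
                return (false);
            }
        }
        return (true);
    }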
The isc_log_t contains an isc_logconfig_t that is swapped, dereferenced,
and has its fields accessed under a mutex. Instead of protecting it with
a rwlock, use RCU.
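A rough sketch of the RCU pattern, assuming liburcu (simplified, not the
actual isc_log code): readers dereference the current config inside an
RCU read-side section, and a writer swaps the pointer and waits for
readers to drain before freeing the old one.

    #include <urcu.h>

    struct logconfig;
    static struct logconfig *current_config;

    static void
    with_config(void (*cb)(struct logconfig *)) {
        rcu_read_lock();
        cb(rcu_dereference(current_config));
        rcu_read_unlock();
    }

    static struct logconfig *
    swap_config(struct logconfig *newcfg) {
        struct logconfig *old = rcu_xchg_pointer(&current_config, newcfg);

        synchronize_rcu(); /* wait until no reader can still see 'old' */
        return (old);      /* now safe to free */
    }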
When shutting down the whole server, reading could stop and detach from
the controlconnection before sending is done. If the send callback then
detaches the last controlconnection handle, the ccmsg is invalidated
after the send callback, and thus we must not access the ccmsg after
calling send_cb().
We need to stop reading when calling isc_ccmsg_disconnect(), as the
reading handle does not have to be the last one because sending might
still be in progress. After that, we can safely remove the .reading
member, because the read callback will not be invoked after the
disconnect has been called.
ccmsg_senddone() should also not call the recv callback if sending
failed; that is the job of the caller's send callback - which in fact
already does so, making the code in ccmsg_senddone() superfluous.
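A hypothetical sketch of the constraint (names and types simplified, not
the actual ccmsg code): copy the callback out of the ccmsg before
invoking it, because the callback may detach the last handle and free
the ccmsg.

    typedef void (*send_cb_t)(void *handle, int result, void *cbarg);

    typedef struct ccmsg {
        send_cb_t send_cb;
        void     *send_cbarg;
    } ccmsg_t;

    static void
    senddone(void *handle, int result, ccmsg_t *ccmsg) {
        send_cb_t cb = ccmsg->send_cb;
        void     *cbarg = ccmsg->send_cbarg;

        cb(handle, result, cbarg);
        /* 'ccmsg' may have been freed inside the callback;
         * it must not be touched from here on */
    }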
To reduce memory pressure, we can add lightweight per-loop (netmgr
worker) memory pools for isc_nmsocket_t structures. This helps in
situations where there is a lot of churn creating and destroying
nmsockets.
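A conceptual sketch of such a pool (hypothetical helper names, not the
actual netmgr code): a per-loop LIFO free list of nmsocket-sized blocks,
so hot create/destroy paths avoid the general allocator.

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct sock_pool {
        void  *free_list; /* singly linked list of returned blocks */
        size_t item_size; /* e.g. sizeof(isc_nmsocket_t) */
    } sock_pool_t;

    static void *
    pool_get(sock_pool_t *pool) {
        void *item = pool->free_list;

        if (item != NULL) {
            pool->free_list = *(void **)item; /* pop */
            return (item);
        }
        return (malloc(pool->item_size)); /* pool empty: fall back */
    }

    static void
    pool_put(sock_pool_t *pool, void *item) {
        *(void **)item = pool->free_list; /* push */
        pool->free_list = item;
    }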
Embedding isc_nmsocket_h2_t directly inside isc_nmsocket_t had increased
the size of isc_nmsocket_t to 1840 bytes. Turning isc_nmsocket_h2_t into
a pointer to a structure allocated on demand reduces the size to 1208
bytes. While further reductions in isc_nmsocket_t are still possible
(the embedded tlsstream and streamdns structures), this was by far the
biggest drop in memory usage.
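An illustration of the change (simplified stand-in structures, not the
real definitions):

    /* stand-in for the HTTP/2-specific state */
    struct h2_state {
        unsigned char bufs[632]; /* illustrative size only */
    };

    /* before: the HTTP/2 state was embedded in every socket */
    struct nmsocket_before {
        int             fd;
        struct h2_state h2; /* large, but only used for HTTP/2 sockets */
    };

    /* after: just a pointer, allocated on demand */
    struct nmsocket_after {
        int              fd;
        struct h2_state *h2; /* NULL unless the socket speaks HTTP/2 */
    };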
The uv_req union member of struct isc__nm_uvreq contained libuv request
types that we don't use. It turns out that uv_getnameinfo_t is 1000
bytes and unnecessarily enlarged the whole structure. Remove all the
unused members from the uv_req union.
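A sketch of the idea; the exact set of members kept below is an
assumption, but the principle is to drop unused giants such as
uv_getnameinfo_t so they no longer dictate the union's size.

    #include <uv.h>

    /* only the request types that are actually issued (assumed set) */
    union uv_reqs_used {
        uv_req_t      req;
        uv_connect_t  connect;
        uv_write_t    write;
        uv_udp_send_t udp_send;
        uv_shutdown_t shutdown;
    };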
After removing sockaddr_unix from isc_sockaddr, we can also remove
sockaddr_storage and reduce the isc_sockaddr size from 152 bytes to just
the 48 bytes needed to hold IPv6 addresses.
When running `make check` on a platform with an older (but still
supported) pytest, e.g. 3.4.2 on EL8, the junit-to-trs conversion would
fail because the junit format has a different structure. Make the junit
XML processing more lenient so that it supports both the older and newer
junit XML formats.
The ditch.pl script is used to generate burst traffic without waiting
for the responses. When running other tests in parallel, this can result
in an ephemeral port clash, since the ditch.pl process closes the socket
immediately. On rare occasions when the message ID also clashes with
another test's queries, this can result in an UnexpectedSource error
from dnspython.
Use the dedicated port EXTRAPORT8, which is reserved for each test, as
the source port for the burst traffic.
As was pointed out, alignas() can't be used with alignments greater than
that of `max_align_t`, because heap allocations are only guaranteed
max_align_t alignment and the compiler might then miscompile the code by
auto-vectorizing over unaligned memory.
As we were only using alignas() as a way to prevent false sharing, we
can use manual padding in the affected structures instead.
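A minimal sketch of the manual-padding approach, assuming a 64-byte
cache line (the real structures and sizes differ):

    #define CACHELINE_SIZE 64

    /* each slot is padded out to a full cache line so that adjacent
     * slots never share one, without relying on alignas() */
    struct padded_counter {
        unsigned long value;
        unsigned char pad[CACHELINE_SIZE - sizeof(unsigned long)];
    };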
In the benchmarks, DNS_QPGC_ALL was trying too hard to clean up the QP,
and this was slowing the QP down too much. Use DNS_QPGC_MAYBE instead,
which we are going to use anyway for more realistic loads - this also
makes the reported memory usage match real loads.
Stop the cname_and_other_data processing as soon as we know that the
result is true. Also, since CNAME is always placed among the priority
headers, we can stop looking for a CNAME once we are past the priority
headers without having found one.
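A hypothetical sketch of the scan (simplified types; the real code walks
the slab headers on a node): the loop stops as soon as the answer is
known to be true, and also as soon as it has passed the priority headers
without seeing a CNAME.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct header header_t;
    struct header {
        header_t *next;
        bool      is_cname;
        bool      is_other_data; /* neither CNAME nor DNSSEC metadata */
        bool      is_priority;
    };

    static bool
    cname_and_other(header_t *first) {
        bool cname = false, other = false;

        for (header_t *h = first; h != NULL; h = h->next) {
            if (h->is_cname) {
                cname = true;
            } else if (h->is_other_data) {
                other = true;
            }
            if (cname && other) {
                return (true); /* result already known to be true */
            }
            if (!cname && !h->is_priority) {
                return (false); /* past the priority headers:
                                 * no CNAME can follow */
            }
        }
        return (cname && other);
    }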
Mark the infrastructure RRTypes as "priority" types and place them at
the beginning of the rdataslab header data graph. The non-priority
types go right after the priority types (if any).
The cachedb was missing a piece of code (already present in zonedb);
without it, lookups in the slab headers would miss the RRSIGs for a
CNAME if the order of CNAME and RRSIG(CNAME) was reversed in node->data.
With _exit() in place of exit(), we no longer need the
isc__tls_setfatalmode() mechanism, because the atexit() handlers,
including the OpenSSL atexit hooks, will not be executed.
Instead of the crude fivefold rcu_barrier() call in isc__mem_destroy(),
change the mechanism to call rcu_barrier() until the memory use and
reference counts stop decreasing. This should deal with any number of
nested call_rcu() levels.
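A sketch of the drain loop, assuming liburcu (the real code inspects the
memory context's own counters):

    #include <stddef.h>
    #include <urcu.h>

    static void
    drain_call_rcu(size_t (*inuse)(void), size_t (*references)(void)) {
        size_t prev_inuse, prev_refs;

        do {
            prev_inuse = inuse();
            prev_refs = references();
            /* wait for all currently queued call_rcu() callbacks,
             * which may themselves queue more */
            rcu_barrier();
        } while (inuse() < prev_inuse || references() < prev_refs);
    }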
Additionally, don't destroy the contextslock if the list of contexts
isn't empty; destroying the lock could make late-running threads crash.
Since fatal() is an abrupt rather than orderly termination of the
program, we want to skip the various atexit() calls, because not all
memory may have been freed by the time fatal() is called, etc. Using
_exit() instead of exit() has this effect: the program ends, but no
destructors or atexit() routines are called.
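A minimal sketch of the effect (not the actual fatal() implementation):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void
    fatal(const char *fmt, ...) {
        va_list ap;

        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);

        /* unlike exit(), _exit() skips atexit() handlers and
         * destructors; the process just ends */
        _exit(EXIT_FAILURE);
    }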