Because rcu_barrier() needs to be called as many times as there are
levels of nested call_rcu() calls (call_rcu() calls made from the
call_rcu thread), and there is currently no mechanism to detect whether
more call_rcu callbacks are scheduled, simply call rcu_barrier()
multiple times. The overhead is negligible, and it prevents rare
assertion failures caused by the check for memory leaks in
isc__mem_destroy().
After the dns_badcache refactoring, dns_badcache_destroy() would call
call_rcu(). The dns_message_checksig cleanup, which calls
dns_view_detach(), happens in the atexit() handler, so call_rcu threads
might be started very late in the process. liburcu registers a library
destructor that destroys the data structures internal to liburcu, and
this clashes with the call_rcu thread that was just started in the
atexit() handler, causing one of the following (depending on timing):
- a normal run
- a straight segfault
- an assertion failure from liburcu
Instead of trying to clean up the dns_message_checksig unit, ignore the
leaked memory as we do in all the other fuzzing tests.
The load-names benchmark originally measured only the single-threaded
performance of the data structures. As this is not how they are used
in real life, it was refactored to be multi-threaded with proper
protections in place (rwlock for ht, hashmap and rbt; transactions for
qp).
The qp test has been extended to measure the effect of
dns_qp_compact() and rcu_barrier() on the overall speed and memory
consumption.
The dns_badcache unit had (yet another) locked hashtable
implementation of its own. Replace the hashtable used by dns_badcache
with the lock-free cds_lfht implementation from liburcu.
When a dns_request was canceled via dns_requestmgr_shutdown(), the
cancel event would be propagated on a different loop (loop 0) than the
loop the request was created on. In turn, this would propagate down to
isc_netmgr, where we require all events to be called from the matching
isc_loop.
Pin the dns_requests to the loops and ensure that all events are
called on the associated loop. This in turn allows us to remove the
hashed locks on the requests and change the single .requests list into
a per-loop list for request accounting.
Additionally, do some extra cleanup, because some race conditions are
no longer possible now that all events on the dns_request are
serialized.
With ThreadSanitizer support added to Userspace RCU, we no longer need
to wrap call_rcu() and caa_container_of() with __tsan_{acquire,release}
hints. Remove the direct calls to __tsan_{acquire,release} and the
isc_urcu_{container,cleanup} macros.
The cds_lfht_for_each_entry and cds_lfht_for_each_entry_duplicate
macros contained code that operated on a NULL pointer: at the end of
the list, they called caa_container_of() on the NULL pointer in the
init-clause and iteration-expression. The result was never actually
used anywhere, because the cond-expression in the for loop prevented
the loop-statement from executing, but AddressSanitizer noticed the
invalid operation and rightfully complained.
This was reported upstream and fixed there. Pull the upstream fix into
our <isc/urcu.h> header, so that our CI checks pass.
When stub_glue_response() is called, the associated data is stored in
a newly allocated struct stub_glue_request. The allocated structure is
never freed in the callback, so we leak a little bit of memory.
stub_request_nameserver_address() used 'request' as the name for the
struct stub_glue_request, leading to confusion between 'request'
(stub_glue_request) and 'request->request' (dns_request_t).
Unify on the name 'sgr', already used in stub_glue_response().
isc_stats_create() can no longer return anything other than
ISC_R_SUCCESS. Refactor isc_stats_create() and its variants in libdns,
libns and named to return void.
Add a unit test to check if the overmem purging in the RBTDB is
effective when mixed size RR data is inserted into the database.
Co-authored-by: Ondřej Surý <ondrej@isc.org>
Co-authored-by: Jinmei Tatuya <jtatuya@infoblox.com>
The conditions that trigger the crash:
- a stale record is in cache
- stale-answer-client-timeout is 0
- multiple clients query for the stale record, enough of them to exceed
the recursive-clients quota
- the response from the authoritative server is sufficiently delayed
  that the recursive-clients quota is exceeded first
The reproducer attempts to simulate this situation. However, it has
not proven to be 100% reproducible, especially in CI. When reproducing
locally, the priming query also sometimes seems to interfere and
prevent the crash. When the reproducer is run twice, it appears to be
more reliable in reproducing the issue.
The keys directory should be cleaned up in clean.sh. Doing that in the
test itself is not reliable and may lead to a failing mkdir, which
causes the test to fail under 'set -e'.
After commit f4eb3ba4, which is part of removing 'auto-dnssec', the
inline system test started to fail in FIPS CI jobs. This is because
the 'nsec3-loop' zone started to use an RSASHA256 key size of 1024
bits, which is not FIPS compliant.
This commit changes the key size from 1024 to 4096 bits in order to
become FIPS compliant again.
The catz module has a fail-safe code to recreate a member zone
that was expected to exist but was not found.
Improve a test case where the fail-safe code is expected to execute
to check that the log message exists.
Add a test case where the fail-safe code is not expected to execute
to check that the log message does not exist.
1. Change the _new, _add and _copy functions to return the new object
   instead of returning 'void' (or always ISC_R_SUCCESS).
2. Clean up the isc_ht_find() + isc_ht_add() usage - the code is always
   locked with catzs->lock (a mutex), so when isc_ht_find() returns
   ISC_R_NOTFOUND, the subsequent isc_ht_add() must always succeed.
3. Instead of returning a direct iterator for the catalog zone entries,
   add a dns_catz_zone_for_each_entry2() function that calls a callback
   for each catalog zone entry and passes two extra arguments to the
   callback. This will allow changing the internal storage for the
   catalog zone entries.
4. Clean up the naming - dns_catz_<fn>_<obj> -> dns_catz_<obj>_<fn>; for
   example, dns_catz_new_zone() is renamed to dns_catz_zone_new().
When a primary server is not responding, mark it as temporarily
unreachable. This prevents too many zones from queuing up on an
unreachable server and allows the refresh process to move on to the
next primary sooner once the server has been so marked.
After the refactoring, the condition deciding whether to use DNAME or
NS for the zone cut was incorrectly simplified and the !IS_STUB()
condition was removed. This was flagged by Coverity as:
/lib/dns/rbt-zonedb.c: 192 in zone_zonecut_callback()
186 found = ns_header;
187 search->zonecut_sigheader = NULL;
188 } else if (dname_header != NULL) {
189 found = dname_header;
190 search->zonecut_sigheader = sigdname_header;
191 } else if (ns_header != NULL) {
>>> CID 462773: Control flow issues (DEADCODE)
>>> Execution cannot reach this statement: "found = ns_header;".
192 found = ns_header;
193 search->zonecut_sigheader = NULL;
194 }
195
196 if (found != NULL) {
197 /*
Instead of removing the extra block, restore the !IS_STUB() condition
for the first if block.