Multiple zones should be able to read the same key and signing policy
at the same time. Since writing to the kasp structures only happens
during reconfiguration, when the complete kasp list is replaced, there
is actually no need for a lock. Reference counting ensures that a kasp
structure is not destroyed while it is still attached to one or more
zones.
This significantly improves the configuration load time.
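As a minimal sketch of the pattern, assuming C11 atomics (the real
dns_kasp_t and its attach/detach helpers differ in detail):

    #include <stdatomic.h>
    #include <stdlib.h>

    /* Hypothetical, simplified stand-in for the real kasp structure. */
    typedef struct kasp {
            atomic_uint_fast32_t references;
            /* ... key and signing policy data ... */
    } kasp_t;

    /* Attaching a zone to a kasp bumps the reference count; readers
     * need no lock. */
    static void
    kasp_attach(kasp_t *kasp, kasp_t **targetp) {
            atomic_fetch_add_explicit(&kasp->references, 1,
                                      memory_order_relaxed);
            *targetp = kasp;
    }

    /* Detaching frees the structure only when the last reference is
     * gone, so replacing the kasp list during reconfiguration cannot
     * destroy a kasp that is still attached to a zone. */
    static void
    kasp_detach(kasp_t **kaspp) {
            kasp_t *kasp = *kaspp;

            *kaspp = NULL;
            if (atomic_fetch_sub_explicit(&kasp->references, 1,
                                          memory_order_acq_rel) == 1) {
                    free(kasp);
            }
    }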
When transitioning from NSEC3 to NSEC, the added records were not
being signed because the wrong time was used to determine whether
a key should be used. Check that these records are actually
signed.
When kasp support was added, 'inception' was used as a proxy for
'now', resulting in signatures not being generated, or the wrong
signatures being generated. 'inception' is the time to be set
in the generated signatures and is usually in the past to
allow for clock skew; 'now' determines which keys are to be used
for signing.
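A simplified illustration of the distinction, with hypothetical key
metadata and helper names:

    #include <stdbool.h>
    #include <time.h>

    /* Hypothetical key metadata; real kasp keys carry more state. */
    typedef struct signing_key {
            time_t activate; /* when the key may start signing */
            time_t inactive; /* when the key must stop signing */
    } signing_key_t;

    static bool
    key_is_active(const signing_key_t *key, time_t when) {
            return (when >= key->activate && when < key->inactive);
    }

    static void
    sign_rrset(const signing_key_t *key, time_t now) {
            /* 'inception' is backdated to tolerate clock skew between
             * the signer and validating resolvers; it belongs only in
             * the generated RRSIGs. */
            time_t inception = now - 3600;

            /* The pre-fix bug: passing 'inception' instead of 'now'
             * here could skip a key that became active within the
             * last hour. */
            if (key_is_active(key, now)) {
                    /* generate_rrsig(key, inception, ...); */
                    (void)inception;
            }
    }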
Resolve "dig in 9.19.13 crashes, when cancelling (with CTRL+C) a pending query to to a not reachable TCP port"
Closes#4138
See merge request isc-projects/bind9!8554
Remove the CFG_CLAUSEFLAG_EXPERIMENTAL flag from the
"trust-anchor-telemetry" statement, as its behavior has not changed
since its initial implementation and there are currently no plans to
change it. This silences a log message that was emitted even when the
feature was explicitly disabled.
Each function queuing a do_nsfetch() call using isc_async_run() is
expected to increase the given zone's internal reference count
(zone->irefs), which is then correspondingly decreased either in
do_nsfetch() itself (when dns_resolver_createfetch() fails) or in
nsfetch_done() (when recursion is finished).
However, do_nsfetch() can also return early if either the zone itself or
the relevant view's resolver object is being shut down. In that case,
do_nsfetch() simply returns without decreasing the internal reference
count for the zone. This leaves a dangling zone reference around, which
leads to hangs during named shutdown.
Fix by executing the same cleanup code for early returns from
do_nsfetch() as for a failed dns_resolver_createfetch() call in that
function, since the reference count will not be decreased in
nsfetch_done() in any of these cases.
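A sketch of the fixed control flow, using simplified stand-in types
and helpers (the real do_nsfetch() in zone.c has a different
signature):

    #include <stdbool.h>

    /* Hypothetical stand-ins for the real BIND types and helpers. */
    typedef struct zone zone_t;
    extern bool zone_shutting_down(zone_t *zone);
    extern bool resolver_shutting_down(zone_t *zone);
    extern int create_fetch(zone_t *zone); /* 0 on success */
    extern void zone_iref_decrement(zone_t *zone);

    static void
    do_nsfetch(void *arg) {
            zone_t *zone = arg;

            /* The queuing function incremented zone->irefs before
             * handing this function to isc_async_run(). */
            if (zone_shutting_down(zone) ||
                resolver_shutting_down(zone)) {
                    /* Early return: nsfetch_done() will never run, so
                     * the reference must be dropped here, exactly as
                     * on fetch creation failure. */
                    goto cleanup;
            }

            if (create_fetch(zone) == 0) {
                    /* Success: nsfetch_done() drops the reference. */
                    return;
            }

    cleanup:
            zone_iref_decrement(zone);
    }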
The loop in shutdown_listener() assumes that the reference count for
every controlconnection_t object on the listener->connections linked
list will drop down to zero after the conn_shutdown() call in the loop's
body. However, when the timing is just right, some netmgr callbacks for
a given control connection may still be awaiting processing by the same
event loop that executes shutdown_listener() when the latter is run.
Since these netmgr callbacks must be run in order for the reference
count of the relevant controlconnection_t objects to drop to zero, when
the scenario described above happens, shutdown_listener() enters an
infinite loop: one of the controlconnection_t objects on the
listener->connections linked list is never removed from the head of
that list.
Fix by safely iterating through the listener->connections list and
initiating shutdown for all controlconnection_t objects found. This
allows any pending netmgr callbacks to be run by the same event loop in
due course, i.e. after shutdown_listener() returns.
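A sketch of the safe-iteration fix, with a simplified singly-linked
list in place of the ISC_LIST_* macros used on listener->connections:

    typedef struct conn conn_t;
    struct conn {
            conn_t *next;
            /* ... */
    };

    extern void conn_shutdown(conn_t *conn);

    static void
    shutdown_listener(conn_t *head) {
            conn_t *conn, *next;

            /* Save the next pointer before initiating shutdown: the
             * connection is unlinked (and freed) only once its pending
             * netmgr callbacks have run, after this function returns. */
            for (conn = head; conn != NULL; conn = next) {
                    next = conn->next;
                    conn_shutdown(conn);
            }
            /* Crucially, there is no loop waiting for the list to
             * become empty; that would starve the event loop that must
             * run the pending callbacks. */
    }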
The check_loaded() function compares the zone's loadtime value with
an expected loadtime value, which is based on the zone file's mtime
extracted from the filesystem.
For secondary zones, the zone file may not be ready until the zone
transfer is complete and the zone file has been dumped to disk, so a
zero mtime value is retrieved.
In such cases, wait one second and retry until a timeout is reached.
Also modify the affected check to allow a difference of up to the
same number of seconds as the chosen timeout value.
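A sketch of the retry, assuming POSIX stat(); the timeout constant and
function name are hypothetical:

    #include <sys/stat.h>
    #include <time.h>
    #include <unistd.h>

    #define LOAD_TIMEOUT 10 /* seconds; also the allowed skew */

    /* Wait up to LOAD_TIMEOUT seconds for the secondary zone file to
     * be dumped to disk, returning its mtime, or 0 on timeout. */
    static time_t
    wait_for_mtime(const char *path) {
            struct stat st;

            for (int i = 0; i < LOAD_TIMEOUT; i++) {
                    if (stat(path, &st) == 0 && st.st_mtime != 0) {
                            return (st.st_mtime);
                    }
                    sleep(1); /* file not written yet; retry */
            }
            return (0);
    }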
The atomic_init() function only makes sense to use on a structure's
members when creating a new instance of the structure. In other
places, use atomic store operations instead, in order to avoid
data races.
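A minimal illustration of the rule, using C11 atomics with
hypothetical names:

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct counter {
            atomic_uint value;
    } counter_t;

    /* atomic_init() is only safe while no other thread can possibly
     * see the object, i.e. when creating a new instance. */
    static counter_t *
    counter_new(void) {
            counter_t *c = malloc(sizeof(*c));

            if (c != NULL) {
                    atomic_init(&c->value, 0);
            }
            return (c);
    }

    /* Once the object may be shared, modifying it must use an atomic
     * store; calling atomic_init() here would be a data race. */
    static void
    counter_reset(counter_t *c) {
            atomic_store(&c->value, 0);
    }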