Add a new built-in policy "insecure", to be used to gracefully unsign
a zone. Previously, you could achieve this by removing the
'dnssec-policy' configuration from your zone statement, or by setting
it to "none".
The built-in policy "none" (or not configured) now actually means
no DNSSEC maintenance for the corresponding zone. So if you
reconfigure your zone directly from an existing policy to "none",
your zone will temporarily be seen as bogus by validating resolvers.
This means we can remove the functions 'dns_zone_use_kasp()' and
'dns_zone_secure_to_insecure()' again. We also no longer have to
check for the existence of key state files to figure out if a zone
is transitioning to insecure.
* The location of the digest type field has changed to where the
reserved field was.
* The reserved field is now called scheme and is where the digest
type field was.
* Digest type 2 has been defined (SHA256).
When reducing the number of NSEC3 iterations to 150, commit
aa26cde2ae added tests for dnssec-policy
to check that a too high iteration count is a configuration failure.
The test is not sufficient because 151 iterations was already too high
for ECDSAP256SHA256 even before the change; the test should use a
different algorithm.
There was an existing test case that checks for NSEC3 iterations.
Update the test with the new maximum values.
Update the code in 'kaspconf.c' to allow at most 150 iterations.
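A minimal sketch of the kind of bound check described above, assuming
illustrative variable names rather than the actual kaspconf.c code:
    /* Sketch only: reject dnssec-policy configurations that ask for
     * more than 150 NSEC3 iterations. */
    if (nsec3iter > 150) {
            cfg_obj_log(obj, logctx, ISC_LOG_ERROR,
                        "dnssec-policy: nsec3 iterations value %u "
                        "out of range", nsec3iter);
            result = ISC_R_RANGE;
    }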
When answering a query, named should never attempt to add the same RRset
to the ANSWER section more than once. However, such a situation may
arise when chasing DNAME records: one of the DNAME records placed in the
ANSWER section may turn out to be the final answer to a client query,
but there is no way to know that in advance. Tweak the relevant INSIST
assertion in query_respond() so that it handles this case properly.
qctx->rdataset is freed later anyway, so there is no need to clean it up
in query_respond().
If a zone transfer results in a zone not having any NS records, named
stops serving it because such a zone is broken. Do the same if an
incoming zone transfer results in a zone lacking an SOA record at the
apex or containing more than one SOA record.
An IXFR containing SOA records with owner names different than the
transferred zone's origin can result in named serving a version of that
zone without an SOA record at the apex. This causes a RUNTIME_CHECK
assertion failure the next time such a zone is refreshed. Fix by
immediately rejecting a zone transfer (either an incremental or
non-incremental one) upon detecting an SOA record not placed at the apex
of the transferred zone.
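A rough sketch of the rejection described above, with illustrative
names (the zone origin is assumed to be available as xfr->name):
    /* Sketch only: while processing records from an incoming AXFR or
     * IXFR, reject any SOA whose owner name is not the zone origin. */
    if (rdata->type == dns_rdatatype_soa &&
        !dns_name_equal(name, &xfr->name)) {
            return (DNS_R_NOTZONETOP);
    }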
While working on the serve-stale backports, I noticed the following
oddities:
1. In the serve-stale system test, one case tracks how long it took
for dig to complete. Commit aaed7f9d8c removed the exception that
checked for result == ISC_R_SUCCESS on stale answers found and
adjusted the test accordingly, but failed to update the time tracking
as well. Move the t1/t2 time-tracking variables back around the two
dig commands to ensure the lookups resolved faster than the
resolver-query-timeout.
2. We can remove the setting of NS_QUERYATTR_STALEOK and
DNS_RDATASETATTR_STALE_ADDED on the "else if (stale_timeout)"
code path, because they are added later when we know we have
actually found a stale answer on a stale timeout lookup.
3. We should clear the NS_QUERYATTR_STALEOK flag from the client
query attributes instead of DNS_RDATASETATTR_STALE_ADDED (that flag
is set on the rdataset attributes); a minimal sketch follows this
list.
4. In 'bin/named/config.c' we should set the configuration options
in alphabetical order.
5. In the ARM, in the backports we have added "(stale)" between
"cached" and "RRset" to make it clearer that a stale RRset may be
returned in this scenario.
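For item 3, a one-line sketch of the intended fix (the surrounding
query code is omitted; field names are as used in the text):
    /* Clear the flag on the client query attributes rather than on
     * the rdataset attributes. */
    qctx->client->query.attributes &= ~NS_QUERYATTR_STALEOK;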
The malloc attribute allows the compiler to do some optimizations on
functions that behave like malloc/calloc, such as assuming that the
returned pointer does not alias any other pointer.
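For illustration, a declaration annotated this way might look like the
following (using the GCC/Clang attribute spelling directly; the
prototype shown is only an example):
    /* The attribute tells the compiler that the returned pointer does
     * not alias any other live pointer. */
    void *isc_mem_allocate(isc_mem_t *mctx, size_t size)
            __attribute__((malloc));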
There is no possibility for mpctx->items to be NULL at the point where
the code was removed, since we enforce that fillcount > 0: if
mpctx->items == NULL when isc_mempool_get() is called, we allocate
fillcount more items and add them to the mpctx->items list.
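A simplified sketch of that invariant (not the actual isc_mempool
code; allocate_item() is a hypothetical helper):
    if (mpctx->items == NULL) {
            /* fillcount > 0 is enforced, so at least one item ends up
             * on the free list here. */
            for (unsigned int i = 0; i < mpctx->fillcount; i++) {
                    item = allocate_item(mpctx);
                    item->next = mpctx->items;
                    mpctx->items = item;
            }
    }
    item = mpctx->items;        /* cannot be NULL at this point */
    mpctx->items = item->next;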
If there happens to be an RRSIG(SOA) that is not at the zone apex,
for whatever reason, it should not be treated as a stopping condition
for incremental zone signing.
When the keymgr needs to create new keys, it is possible it needs to
create multiple keys. The keymgr checks for keyid conflicts with
already existing keys, but it should also check against the keys it
has just created.
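A sketch of the extra check, using hypothetical helpers
generate_key() and keyid_in_use():
    /* A freshly generated key id must not collide with already
     * existing keys, nor with keys created earlier in this run. */
    do {
            generate_key(kasp, &newkey);
    } while (keyid_in_use(existing_keys, dst_key_id(newkey)) ||
             keyid_in_use(new_keys, dst_key_id(newkey)));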
as with TLS, the destruction of a client stream on failed read
needs to be conditional: if we reached failed_read_cb() as a
result of a timeout on a timer which has subsequently been
reset, the stream must not be closed.
the destruction of the socket in tls_failed_read_cb() needs to be
conditional; if reached due to a timeout on a timer that has
subsequently been reset, the socket must not be destroyed.
this is similar in structure to the UDP timeout recovery test.
this commit adds a new mechanism to the netmgr test allowing the
listen socket to accept incoming TCP connections but never send
a response. this forces the client to time out on read.
when running read callbacks, if the event result is not ISC_R_SUCCESS,
the callback is always run asynchronously. this is a problem on timeout,
because there is no chance to reset the timer before the socket is
destroyed. this commit allows read callbacks to run synchronously for
both the ISC_R_SUCCESS and ISC_R_TIMEDOUT result codes.
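a sketch of the dispatch rule, with hypothetical helper names
(run_read_cb()/queue_read_cb() are not the actual netmgr functions):
    /* Run the callback in-line for SUCCESS and TIMEDOUT so the caller
     * still has a chance to reset the read timer before the socket is
     * torn down; all other results stay asynchronous. */
    if (eresult == ISC_R_SUCCESS || eresult == ISC_R_TIMEDOUT) {
            run_read_cb(sock, req, eresult);    /* synchronous */
    } else {
            queue_read_cb(sock, req, eresult);  /* asynchronous */
    }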
this test sets up a server socket that listens for UDP connections
but never responds. the client will always time out; it should retry
five times before giving up.
This commit changes the taskmgr to run the individual tasks on the
netmgr internal workers. While an effort has been put into keeping the
taskmgr interface intact, a couple of changes have been made:
* The taskmgr has no concept of universal privileged mode - rather,
individual tasks are either privileged or unprivileged (normal). The
privileged tasks are run first when the netmgr is unpaused. There are
now four different queues in the netmgr (a rough worker-loop sketch
follows this list):
1. priority queue - netievents on the priority queue are run even when
the taskmgr enters exclusive mode and the netmgr is paused. This is
needed to properly start listening on the interfaces, free resources,
and resume.
2. privileged task queue - only privileged tasks are queued here, and
this is the first queue that gets processed when the network manager
is unpaused using isc_nm_resume(). All netmgr workers need to drain
the privileged task queue before they all proceed to normal
operation. Both task queues are also processed when the workers are
finishing.
3. task queue - only (traditional) tasks are scheduled here, and this
queue, along with the privileged task queue, is processed when the
netmgr workers are finishing. This is needed to process the task
shutdown events.
4. normal queue - this is the queue for netmgr events: reading,
sending, callbacks, and pretty much everything else is processed here.
* The isc_taskmgr_create() now requires an initialized netmgr
(isc_nm_t) object.
* The isc_nm_destroy() function now waits indefinitely, but it will
print out the active objects when in tracing mode (-DNETMGR_TRACE=1
and -DNETMGR_TRACE_VERBOSE=1). The netmgr has been made a little bit
more asynchronous and it might take a longer time to shut down all the
active networking connections.
* Previously, the isc_nm_stoplistening() was a synchronous operation.
This has been changed: isc_nm_stoplistening() now just schedules the
child sockets to stop listening and exits. This was needed to prevent
a deadlock, as the (traditional) tasks are now executed on the netmgr
threads.
* The socket selection logic in isc__nm_udp_send() was flawed, but
fortunately it was also broken, so we never hit the problem where we
created a uvreq_t on the socket from the nmhandle_t, but a different
socket could then be picked and we would try to run the send callback
on a socket whose threadid differed from the one currently running.
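As mentioned above, a rough sketch of the per-worker processing order
(the names are illustrative, not the actual netmgr code):
    /* The priority queue is the only one processed while the netmgr
     * is paused; the remaining queues are drained in strict order. */
    process_queue(worker, worker->ievents_prio);          /* 1. priority   */
    if (!paused) {
            process_queue(worker, worker->ievents_priv);  /* 2. privileged */
            process_queue(worker, worker->ievents_task);  /* 3. tasks      */
            process_queue(worker, worker->ievents);       /* 4. normal     */
    }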
When we are reading from the xfrin socket and the transfer is shut
down, the shutdown function calls `xfrin_fail()`, which in turn calls
`xfrin_cancelio()`; that causes the read callback to be invoked with
the `ISC_R_CANCELED` status code, which in turn triggers yet another
`xfrin_fail()` call.
The fix is to ensure that `xfrin_fail()` is run only once, using
better synchronization on the xfr->shuttingdown flag.
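A sketch of the "run only once" guard, assuming xfr->shuttingdown is
an atomic_bool (simplified from the actual code):
    static void
    xfrin_fail(dns_xfrin_ctx_t *xfr, isc_result_t result, const char *msg) {
            bool expected = false;
            if (!atomic_compare_exchange_strong(&xfr->shuttingdown,
                                                &expected, true)) {
                    return; /* shutdown already in progress */
            }
            /* ... log the failure and cancel outstanding I/O ... */
    }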
Since all the libraries are internal now, just clean up the ISCAPI
remnants in the isc_socket, isc_task, and isc_timer APIs. This means
there is one less layer, as the following changes have been made:
* struct isc_socket and struct isc_socketmgr have been removed
* struct isc__socket and struct isc__socketmgr have been renamed
to struct isc_socket and struct isc_socketmgr
* struct isc_task and struct isc_taskmgr have been removed
* struct isc__task and struct isc__taskmgr have been renamed
to struct isc_task and struct isc_taskmgr
* struct isc_timer and struct isc_timermgr have been removed
* struct isc__timer and struct isc__timermgr have been renamed
to struct isc_timer and struct isc_timermgr
* All the associated code that dealt with converting isc_<foo>
to isc__<foo> and back has been removed.
When setnsec3param() is scheduled from zone_postload(), there is no
guarantee that `zone->db` has been attached yet (it may still be
`NULL`). Thus, when setnsec3param() is called, we need to check that
`zone->db` exists and reschedule the task otherwise, because calling
`rss_post()` on a zone with an empty `.db` ends up being a no-op (the
function just returns).
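A sketch of the rescheduling idea (illustrative names; only the shape
of the check is shown):
    if (zone->db == NULL) {
            /* The database is not attached yet; requeue the event and
             * try again later instead of silently doing nothing. */
            isc_task_send(zone->task, &event);
            return;
    }
    rss_post(zone, event);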
Previously, the taskmgr, timermgr, and socketmgr each had a
constructor variant that would create the manager on top of an
existing appctx. This was no longer true: the variant was just calling
isc_<*>mgr_create() directly without any extra code.
This commit just cleans up the extra functions.
"resolve" is used by the resolver system tests, and I'm not
certain whether delv exercises the same code, so rather than
remove it, I moved it to bin/tests/system.
sample code for export libraries is no longer needed and
this code is not used for any internal tests. also, sample-gai.c
had already been removed but there were some dangling references.
the libdns client API is no longer being maintained for
external use, so we can remove the code that isn't being used
internally, as well as the related tests.
Too much logic was crammed inside dns_journal_rollforward(), which
made it harder to follow. dns_journal_rollforward() was refactored to
work on an already opened journal, and some of the previous logic was
moved into a new static zone_journal_rollforward() that separates the
journal "rollforward" logic from the "zone" logic.
when dns_journal_rollforward() returned ISC_R_RECOVERABLE, the
distinction between 'up to date' and 'success' was lost; as a
consequence, zone_needdump() was called, writing out the zone file
when it shouldn't have been. This change restores that distinction.
Adjust the system test to reflect the visible changes.
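a sketch of the distinction being preserved in the caller
(zone_needdump() and the result codes are as named in BIND, but the
surrounding logic is heavily simplified):
    switch (result) {
    case ISC_R_SUCCESS:
            /* journal changes were applied: schedule a zone dump */
            zone_needdump(zone, DNS_DUMP_DELAY);
            break;
    case DNS_R_UPTODATE:
            /* nothing was applied: do not rewrite the zone file */
            break;
    default:
            goto failure;
    }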
This fixes a corner case that was causing dig to print annoying
messages like:
14-Apr-2021 18:48:37.099 SSL error in BIO: 1 TLS error (errno:
0). Arguments: received_data: (nil), send_data: (nil), finish: false
even when all the data was properly processed.
Before this fix, the underlying TCP sockets could remain open for
longer than actually required, causing unit tests to fail with lots of
ISC_R_TOOMANYOPENFILES errors.
The change also enables graceful SSL shutdown (previously it would
happen only when isc_nm_cancelread() was called).
This commit merges the TLS tests into the common Network Manager unit
test suite and extends the unit test framework to include support for
additional "ping-pong" style tests, where all data can be sent over a
smaller number of connections (the behaviour of the old test suite).
The tests for TCP and TLS were extended to make use of the new mode,
as this mode better reflects how the code is used in DoH.
Both the TLS and TCP tests now share most of the unit test code, as
they are expected to function similarly from a user's perspective
anyway.
In addition to the above, the TLS test suite was extended to include
TLS tests using the connection quota facility.
The draft says that the NSEC(3) TTL must be the minimum of the SOA
MINIMUM field and the SOA TTL. This was always the intended
behaviour.
Update the zone structure to also track the SOA TTL. Whenever we
use the MINIMUM value to determine the NSEC(3) TTL, use the minimum
of MINIMUM and SOA TTL instead.
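In other words, the computation becomes something like the following
(illustrative variable names):
    /* Use the smaller of the SOA MINIMUM field and the SOA TTL as the
     * TTL for NSEC/NSEC3 records. */
    dns_ttl_t nsecttl = ISC_MIN(soa_minimum, soa_ttl);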
There is no specific test for this; however, two tests needed
adjusting because they would otherwise fail: they were checking NSEC3
records including the TTL. Update these checks to use 600 (the SOA
TTL) rather than 3600 (the SOA MINIMUM).
It is more intuitive to have the countdown 'max-stale-ttl' as the
RRset TTL, instead of a TTL of 0. This information was already
available in a comment ("; stale (will be retained for x more
seconds"), but Support suggested putting it in the TTL field instead.
Before binding an RRset, check the time and see if this record is
stale (or perhaps even ancient). Marking a header stale or ancient
happens only when looking up an RRset in cache, but binding an RRset
can also happen on other occasions (for example when dumping the
database).
Check the time and compare it to the header. If, according to the
time, the entry is stale but not ancient, set the STALE attribute. If,
according to the time, it is ancient, set the ANCIENT attribute.
We could mark the header stale or ancient here, but that requires
locking, which is why we only compare the current time against the
rdh_ttl.
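A sketch of the comparison done at bind time (attribute and variable
names are illustrative; no header locking is involved):
    /* Compare the current time against the header TTL instead of
     * relying on the header already being marked stale or ancient. */
    if (now > header->rdh_ttl) {
            if (now > header->rdh_ttl + serve_stale_ttl) {
                    rdataset->attributes |= DNS_RDATASETATTR_ANCIENT;
            } else {
                    rdataset->attributes |= DNS_RDATASETATTR_STALE;
            }
    }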
Adjust the test to check the dump-db output before querying for data.
In the dumped file the entry should be marked as stale, despite no
cache lookup having happened since the initial query.
When introducing change 5149, "rndc dumpdb" started to print a line
above a stale RRset, indicating how long the data will be retained.
At that time, I thought it should also be possible to load a cache
from file. But if a TTL has a value of 0 (because it is stale), stale
entries wouldn't be loaded from file. So I added the 'max-stale-ttl'
to TTL values and adjusted the $DATE accordingly.
Since we don't actually have a "load cache from file" feature, this
is premature and is causing confusion among operators. This commit
changes the 'max-stale-ttl' adjustments.
A check in the serve-stale system test is added for a non-stale
RRset (longttl.example) to make sure the TTL in cache is sensible.
Also, the comment above stale RRsets could have nonsensical values. A
possible reason why this may happen is when the RRset was marked as
stale but the 'max-stale-ttl' has already passed (so it is actually an
RRset awaiting cleanup). This would cause the "will be retained"
value to be negative, but since it is stored in a uint32_t, you would
get a nonsensical value instead (e.g. 4294362497).
To mitigate this, we now also check that the header is not ancient. In
addition, we check whether the stale_ttl would be negative, and if so
we set it to 0. Most likely this will not happen, because the header
would already have been marked ancient, but there is a possible race
condition where 'rdh_ttl + serve_stale_ttl' has passed but the header
has not yet been checked for staleness.
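A sketch of the two safeguards (illustrative field names; ANCIENT()
stands for whatever check marks a header as ancient):
    if (!ANCIENT(header)) {
            uint32_t end = header->rdh_ttl + serve_stale_ttl;
            /* Guard against the retention window having already
             * passed, which would otherwise wrap around to a huge
             * unsigned value. */
            stale_ttl = (end > now) ? (end - now) : 0;
    }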
Even if a call to gss_accept_sec_context() fails, it might still cause a
GSS-API response token to be allocated and left for the caller to
release. Make sure the token is released before an early return from
dst_gssapi_acceptctx().
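A sketch of the cleanup on the error path, using the standard GSS-API
release call (variable names are illustrative):
    if (gret != GSS_S_COMPLETE && gret != GSS_S_CONTINUE_NEEDED) {
            /* Even on failure, gss_accept_sec_context() may have
             * allocated a response token; release it before the early
             * return. */
            if (gouttoken.length != 0U) {
                    (void)gss_release_buffer(&minor, &gouttoken);
            }
            return (ISC_R_FAILURE);
    }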