The TCP dispatch connected callbacks could be called synchronously, which
in turn could destroy the xfrin before we return from dns_xfrin_create().
Delay the callback invoked from tcp_dispatch_connect() by always calling
it asynchronously.
The current dispatch code could reuse a TCP connection when
dns_dispatch_gettcp() was called first. This is problematic because
dns_resolver doesn't use TCP connection sharing, so dns_request could end
up with a TCP stream that was created outside of dns_request.
Add a new DNS_DISPATCHOPT_UNSHARED option to dns_dispatch_createtcp() that
prevents the TCP stream from being reused. Use that option in the
dns_resolver call to dns_dispatch_createtcp() to prevent dns_request
from reusing the TCP connections created by dns_resolver.
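For illustration, the dns_resolver call then becomes something like the
following (a sketch only: the dns_dispatch_createtcp() argument list is
assumed here, only the DNS_DISPATCHOPT_UNSHARED flag comes from this
change):

    result = dns_dispatch_createtcp(mgr, &localaddr, &destaddr,
                                    DNS_DISPATCHOPT_UNSHARED, &disp);
    /*
     * A dispatch created with DNS_DISPATCHOPT_UNSHARED is skipped by
     * dns_dispatch_gettcp(), so dns_request cannot piggy-back on it.
     */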
Additionally, the dns_xfrin unit added TCP connection sharing for
incoming transfers. While interleaving *xfr streams on a TCP connection
should work, this should be a deliberate change and a property of the
server that can be controlled. Additionally, some level of parallel TCP
streams is desirable. Revert to the old behaviour by removing the
dns_dispatch_gettcp() calls from dns_xfrin and use the new option to
prevent the transfer streams from being shared with dns_request.
The QID table hashing used custom merging of the sockaddr, port, and id
into a single hash value. Normalize the QID table hashing to use the
isc_hash32 API for all the values.
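For illustration, the normalized hashing boils down to something like
this (a sketch: the helper name and fields are illustrative, isc_hash32()
is the generic hashing API referred to above):

    #include <isc/hash.h>
    #include <isc/sockaddr.h>
    #include <dns/types.h>

    static uint32_t
    qid_hash(const isc_sockaddr_t *peer, in_port_t port, dns_messageid_t id) {
            uint32_t hashval;

            /* Feed every component through the same isc_hash32() primitive. */
            hashval = isc_hash32(peer, sizeof(*peer), true);
            hashval ^= isc_hash32(&port, sizeof(port), true);
            hashval ^= isc_hash32(&id, sizeof(id), true);

            return (hashval);
    }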
Reusing TCP connections with dns_dispatch_gettcp() used a linear linked
list to look up existing outgoing TCP connections that could be reused.
Replace the linked list with a per-loop cds_lfht hashtable to speed up the
lookups. We use cds_lfht because it allows non-unique node insertion,
which we need in order to check for dispatches in different connection
states.
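The liburcu pattern used here looks roughly like the following (a sketch:
the node type, key, and filtering predicate are illustrative, only the
cds_lfht calls themselves are the real API; cds_lfht_add() allows
duplicate keys and duplicates are walked with cds_lfht_next_duplicate()
under the RCU read lock):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>
    #include <urcu.h>
    #include <urcu/rculfhash.h>

    /* Illustrative node type; not the actual dispatch internals. */
    struct tcpdisp_node {
            struct cds_lfht_node ht_node; /* embedded hash-table linkage */
            uint32_t key;                 /* e.g. hash of the peer address */
            bool connected;               /* connection state to filter on */
    };

    /* Match callback used by cds_lfht_lookup()/cds_lfht_next_duplicate(). */
    static int
    tcpdisp_match(struct cds_lfht_node *node, const void *key) {
            const struct tcpdisp_node *n =
                    caa_container_of(node, struct tcpdisp_node, ht_node);
            return (n->key == *(const uint32_t *)key);
    }

    /* Non-unique insertion: several dispatches may share the same key. */
    static void
    tcpdisp_insert(struct cds_lfht *ht, struct tcpdisp_node *n) {
            rcu_read_lock();
            cds_lfht_add(ht, n->key, &n->ht_node);
            rcu_read_unlock();
    }

    /* Walk all duplicates for a key and pick one in a reusable state. */
    static struct tcpdisp_node *
    tcpdisp_find(struct cds_lfht *ht, uint32_t key) {
            struct cds_lfht_iter iter;
            struct cds_lfht_node *node;
            struct tcpdisp_node *found = NULL;

            rcu_read_lock();
            cds_lfht_lookup(ht, key, tcpdisp_match, &key, &iter);
            while ((node = cds_lfht_iter_get_node(&iter)) != NULL) {
                    struct tcpdisp_node *cand = caa_container_of(
                            node, struct tcpdisp_node, ht_node);
                    if (cand->connected) {
                            found = cand;
                            break;
                    }
                    cds_lfht_next_duplicate(ht, tcpdisp_match, &key, &iter);
            }
            rcu_read_unlock();

            return (found);
    }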
Instead of a high number of dispatches (4 * named_g_udpdisp)[1], bind the
dispatches to threads and make dns_dispatchset_t create a dispatch for
each thread (event loop), as sketched below.
This required a couple of other changes:
1. The dns_dispatch_createudp() must be called on a loop, so that
   isc_tid() is already initialized; changes to nsupdate and mdig were
   required.
2. The dns_requestmgr had only a single dispatch per v4 and v6. Instead
   of using a single dispatch, use a dns_dispatchset_t for each protocol;
   this is the same as dns_resolver.
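The per-loop selection idea, sketched with illustrative names (only
dns_dispatchset_t and isc_tid() exist in the real code; the struct layout
and helper are assumptions):

    struct loop_dispatchset {
            size_t          ndisp;        /* == number of event loops */
            dns_dispatch_t *dispatches[]; /* [i] created on loop i */
    };

    static dns_dispatch_t *
    loop_dispatchset_get(struct loop_dispatchset *dset) {
            size_t tid = (size_t)isc_tid(); /* id of the current loop */

            INSIST(tid < dset->ndisp);
            return (dset->dispatches[tid]); /* no locking, no round-robin */
    }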
Looking up a unique message ID in dns_dispatch has been using a custom
hash table. Rewrite the custom hashtable to use the cds_lfht API, removing
one extra lock in the cold-cache resolver hot path.
store a pointer to the running loop when creating a dispatch entry
with dns_dispatch_add(), and use isc_loop_now() to get the timestamp for
the current event loop tick when we initialize the dispentry start time
and check for timeouts.
when a TCP dispatch times out, we call tcp_recv() with a result
value of ISC_R_TIMEDOUT; this cancels the oldest dispatch
entry in the dispatch's active queue, plus any additional entries
that have waited longer than their configured timeouts. if, at
that point, there are more dispatch entries still on the active
queue, reading resumes, but until now the timer was not restarted.
this has been corrected: we now calculate a new timeout
based on the oldest dispatch entry still remaining. this
requires us to initialize the start time of each dispatch entry
when it's first added to the queue.
in order to ensure that the handling of timed-out requests is
consistent, we now calculate the runtime of each dispatch
entry based on the same value for 'now'.
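the restart logic looks roughly like this (a sketch only: the entry and
list types, the millisecond units, and the re-arm helper are illustrative
names; isc_loop_now() is assumed to return the isc_time_t of the current
loop tick, and isc_time_microdiff() is the existing isc_time helper):

    static void
    tcp_restart_timeout(isc_loop_t *loop, dispentry_list_t *active) {
            isc_time_t now = isc_loop_now(loop); /* current loop tick */
            dispentry_t *oldest = ISC_LIST_HEAD(*active);
            uint64_t runtime, remaining;

            if (oldest == NULL) {
                    return; /* nothing left to wait for */
            }

            /* How long has the oldest remaining entry been waiting? */
            runtime = isc_time_microdiff(&now, &oldest->start) / 1000;

            /* Re-arm the read timer with whatever is left of its timeout. */
            remaining = (oldest->timeout > runtime)
                                ? oldest->timeout - runtime
                                : 1;
            restart_read_timer(remaining);
    }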
incidentally also fixed a compile error that turned up when
DNS_DISPATCH_TRACE was turned on.
When retrying in the DNS dispatch, the local port would be forgotten on
ISC_R_ADDRINUSE; keep the configured source port even when retrying.
Additionally, treat ISC_R_NOPERM the same as ISC_R_ADDRINUSE.
Closes: #3986
The isc_time_now() and isc_time_now_hires() functions were used
inconsistently throughout the code: either with a status check, or
without one, or via the TIME_NOW() macro with RUNTIME_CHECK() on failure.
Refactor isc_time_now() and isc_time_now_hires() to always fail hard when
getting the current time fails, and to return the isc_time_t value
directly instead of passing a pointer to the result as an argument.
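Call sites then look like the following (per the new contract described
above, a failure to read the clock aborts inside the function):

    isc_time_t now = isc_time_now();         /* wall-clock time */
    isc_time_t hires = isc_time_now_hires(); /* high-resolution variant */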
when a message arrives over a TCP connection matching an expected
QID, the dispatch is updated so it no longer expects that QID,
but continues reading. subsequent messages with the same QID are
ignored, unless the dispatch entry has called dns_dispatch_getnext()
or dns_dispatch_resume().
however, a coding error caused those functions to have no effect
when the dispatch was reading, so streams of messages with the same
QID could not be received over a single TCP connection, breaking *XFR.
this has been corrected by changing the order of operations in
tcp_dispatch_getnext() so that disp->reading isn't checked until
after the dispatch entry has been reactivated.
the dns_xfrin module was still using the network manager directly to
manage TCP connections and send and receive messages. this commit
changes it to use the dispatch manager instead.
the optional 'port' option, when used with notify-source,
transfer-source, etc., is used to set up UDP dispatches with a
particular source port, but when the actual UDP connection was
established the port would be overridden with a random one. this
has been fixed.
(configuring source ports is deprecated in 9.20 and slated for
removal in 9.22, but should still work correctly until then.)
DSCP has not been fully working since the network manager was
introduced in 9.16, and has been completely broken since 9.18.
This seems to have caused very few difficulties for anyone,
so we have now marked it as obsolete and removed the
implementation.
To ensure that old config files don't fail, the code to parse
dscp key-value pairs is still present, but a warning is logged
that the feature is obsolete and should not be used. Nothing is
done with configured values, and there is no longer any
range checking.
Previously, dns_dispatch_gettcp() could pick a TCP connection created by
a different thread; this breaks our contractual promise to the DNS
dispatch by using the TCP connection on a different thread than the one
it was created on.
Add .tid member to the dns_dispatch_t struct and skip the dispatches
from other threads when looking up a TCP dispatch that we can reuse in
dns_request.
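The filter in the lookup is essentially the following (a fragment for
illustration; the list walk and field names other than .tid are assumed):

    for (dns_dispatch_t *disp = ISC_LIST_HEAD(mgr->list); disp != NULL;
         disp = ISC_LIST_NEXT(disp, link))
    {
            if (disp->tid != isc_tid()) {
                    /* Created on another thread/loop: never reuse it here. */
                    continue;
            }
            /* ... the rest of the dns_dispatch_gettcp() matching ... */
    }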
NOTE: This is going to be properly refactored, but this change could be
also backported to 9.18 for better stability and thread-affinity.
The dns_request code is very sensitive to how the connected callback is
invoked and deadlocks in several places when the timing is "right". Move
the call to the connected callback into the (udp|tcp)_connected()
functions, so it is called asynchronously instead of directly from
the (udp|tcp)_dispentry_cancel() functions.
The TCP dispatches are removed from the dispatchmgr->list in the
dispatch_destroy() and there's a brief period of time where
dns_dispatch_gettcp() can find a dispatch in the connected state that's
being destroyed.
Set the dispatch state to DNS_DISPATCHSTATE_NONE in the TCP connection
callback if there are no responses waiting, and ignore TCP dispatches
with zero references in dns_dispatch_gettcp().
In tcp_connected(), a typo turned a DbC check into an assignment,
breaking the state machine and making dns_dispatch_gettcp() try to attach
to a dispatch in the process of destruction.
The TCP dispatches in DNS_DISPATCHSTATE_NONE could be either very fresh,
or dispatches that failed to connect to the destination. Ignore them when
trying to connect to an existing TCP dispatch via dns_dispatch_gettcp().
The dispatches are not thread-bound and are used freely between various
threads (see the dns_resolver and dns_request units for details).
This refactoring makes sure that all non-const dns_dispatch_t and
dns_dispentry_t members are accessed under a lock, and both objects now
track their internal state (NONE, CONNECTING, CONNECTED, CANCELED)
instead of guessing the state from the state of various struct members.
During the refactoring, the artificial limit DNS_DISPATCH_SOCKSQUOTA on
UDP sockets per dispatch was removed, as the limiting needs to happen,
and already happens, in dns_resolver; limiting the number of UDP sockets
artificially in dispatch could lead to unpredictable behaviour in case
one dispatch has its limit exhausted while others are idle.
The TCP artificial limit DNS_DISPATCH_MAXREQUESTS makes even less sense,
as the TCP connections are only reused in the dns_request API, which is
not a heavy user of outgoing connections.
As a side note, the fact that the UDP and TCP dispatches pretend to be
the same thing doesn't really help the clarity of this unit: a connected
UDP socket is handled from dns_dispentry_t with dns_dispatch_t acting as
a broker, while a connected TCP socket is handled from dns_dispatch_t
with dns_dispatchmgr_t acting as a broker.
This refactoring kept the API almost the same; only dns_dispatch_cancel()
and dns_dispatch_done() were merged into dns_dispatch_done(), as we need
to cancel active netmgr handles in any case to not leave dangling
connections around. The functions handling UDP and TCP have been mostly
split into their matching counterparts, and the dns_dispatch_<function>
functions are now thin wrappers that call <udp|tcp>_dispatch_<function>
based on the socket type.
More debugging-level logging was added to the unit to make this easier to
follow.
This change prepares ground for sending DNS requests using DoT,
which, in particular, will be used for forwarding dynamic updates
to TLS-enabled primaries.
When a thread calls dns_dispatch_connect() on an unconnected TCP socket
it sets `tcpstate` from `DNS_DISPATCHSTATE_NONE` to `_CONNECTING`.
Previously, it then INSISTed that there were no pending connections
before calling isc_nm_tcpdnsconnect().
If a second thread called dns_dispatch_connect() during that window
of time, it could add a pending connection to the list, and trigger
an assertion failure.
This commit removes the INSIST since the condition is actually
harmless.
it's a style violation to have REQUIRE or INSIST contain code that
must run for the server to work. this was being done with some
atomic_compare_exchange calls. these have been cleaned up. uses
of atomic_compare_exchange in assertions have been replaced with
a new macro atomic_compare_exchange_enforced, which uses RUNTIME_CHECK
to ensure that the exchange was successful.
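the new macro boils down to roughly the following (a sketch; the real
definition may differ in detail, but the point is that the exchange still
executes in all build types and a failed exchange is fatal):

    #include <isc/util.h> /* RUNTIME_CHECK() */
    #include <stdatomic.h>

    #define atomic_compare_exchange_enforced(obj, expected, desired)   \
            RUNTIME_CHECK(atomic_compare_exchange_strong((obj),        \
                                                         (expected),   \
                                                         (desired)))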
There is a possibility for `udp_recv()` to be called with `eresult`
being `ISC_R_SUCCESS`, but nevertheless with already deactivated `resp`,
which can happen when the request has been canceled in the meantime.
Previously, it was possible to assign a bit of memory space in the
nmhandle to store the client data. This was complicated and prevented
further refactoring of isc_nmhandle_t caching (future work).
Instead of caching the data in the nmhandle, allocate the hot-path
ns_client_t objects from per-thread clientmgr memory context and just
assign it to the isc_nmhandle_t via isc_nmhandle_set().
Historically, the inline keyword was a strong suggestion to the compiler
that it should inline the function marked inline. As compilers became
better at optimising, this functionality has receded, and using inline
as a suggestion to inline a function is obsolete. The compiler will
happily ignore it and inline something else entirely if it finds that's
a better optimisation.
Therefore, remove all occurrences of the inline keyword from static
functions inside a single compilation unit, and leave the decision
whether to inline a function entirely to the compiler.
NOTE: We keep the inline keyword when the purpose is to change the
linkage behaviour.
Previously, the unreachable code paths would have to be tagged with:
INSIST(0);
ISC_UNREACHABLE();
There were also older parts of the code that used the comment annotation:
/* NOTREACHED */
Unify the handling of unreachable code paths to just use:
UNREACHABLE();
The UNREACHABLE() macro now asserts when reached and also uses
__builtin_unreachable() when that builtin is available in the compiler.
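Roughly, the unified macro does the following (the real definition
feature-tests the builtin at build time):

    #if defined(__GNUC__) || defined(__clang__)
    #define UNREACHABLE()                    \
            do {                             \
                    INSIST(0);               \
                    __builtin_unreachable(); \
            } while (0)
    #else
    #define UNREACHABLE() INSIST(0)
    #endif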
Gcc 7+ and Clang 10+ have implemented __attribute__((fallthrough)), which
is an explicit version of the /* FALLTHROUGH */ comment we are currently
using.
Add and apply FALLTHROUGH macro that uses the attribute if available,
but does nothing on older compilers.
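A sketch of the macro and its use (classify() is an invented example;
the real definition feature-tests the attribute at build time):

    #if (defined(__GNUC__) && __GNUC__ >= 7) || \
        (defined(__clang__) && __clang_major__ >= 10)
    #define FALLTHROUGH __attribute__((fallthrough))
    #else
    #define FALLTHROUGH
    #endif

    static int
    classify(int c) {
            int score = 0;

            switch (c) {
            case 0:
                    score += 1;
                    FALLTHROUGH; /* explicit, checked by the compiler */
            case 1:
                    score += 1;
                    break;
            default:
                    break;
            }

            return (score);
    }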
In one case (lib/dns/zone.c), using the macro revealed that we were
using the /* FALLTHROUGH */ comment in the wrong place; remove that
comment.
- certain TCP result codes, including ISC_R_EOF and
ISC_R_CONNECTIONRESET, were being mapped to ISC_R_SHUTTINGDOWN
before calling the response handler in tcp_recv_cancelall().
the result codes should be passed through to the response handler
without being changed.
- the response handlers, resquery_response() and req_response(), had
code to return immediately if encountering ISC_R_EOF, but this is
not the correct behavior; that should only happen in the case of
ISC_R_CANCELED when it was the caller that canceled the operation.
- ISC_R_CONNECTIONRESET was not being caught in rctx_dispfail().
- removed code in rctx_dispfail() to retry queries without EDNS
when receiving ISC_R_EOF; this is now treated the same as any
other connection failure.
This commit converts the license handling to adhere to the REUSE
specification. It specifically:
1. Adds the used licenses to the LICENSES/ directory
2. Adds an "isc" template for adding the copyright boilerplate
3. Changes all source files to include a copyright and SPDX license
header; this includes all the C sources, documentation, zone files, and
configuration files. There are notes in the doc/dev/copyrights file
on how to add correct headers to new files.
4. Handles the rest that can't be modified via the .reuse/dep5 file. The
binary (or otherwise unmodifiable) files could have the license placed
next to them in a <foo>.license file, but this would lead to a cluttered
repository, and most of the files handled in the .reuse/dep5 file are
system test files.
the 'dispatchmgr->state' field was never set, so the MGR_IS_SHUTTINGDOWN
macro was always false. both of these have been removed.
renamed the 'dispatch->state' field to 'tcpstate' to make its purpose
less ambiguous.
changed an FCTXTRACE log message from "response did not match question"
to the more correctly descriptive "invalid question section".
When a non-matching DNS response is received by the resolver,
it calls dns_dispatch_getnext() to resume reading. This is necessary
for UDP but not for TCP, because TCP connections automatically
resume reading after any valid DNS response.
This commit adds a 'tcpreading' flag to TCP dispatches, so that
`dispatch_getnext()` can be called multiple times without subsequent
calls having any effect.
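in effect (a sketch: everything except the 'tcpreading' flag and
tcp_recv() is an assumed or illustrative name, including the read-start
call):

    static void
    tcp_startrecv_once(dns_dispatch_t *disp) {
            if (disp->tcpreading) {
                    return; /* already reading: further calls are no-ops */
            }
            disp->tcpreading = true;
            isc_nm_read(disp->handle, tcp_recv, disp);
    }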
A TCP connection may be held open past its proper timeout if it's
receiving a stream of DNS responses that don't match any queries.
In this case, we now check whether the oldest query should have timed
out.
When the outgoing TCP dispatch times out an active response, we might
still receive the answer during the lifetime of the connection.
Previously, we would just ignore any non-matching DNS answers, which
would allow the server to feed us otherwise valid DNS answers and keep
the connection open.
Add a counter for timed-out DNS queries over TCP and tear down the whole
TCP connection if we receive an unexpected number of DNS answers.
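The accounting works roughly like this (a fragment for illustration; the
field and helper names are assumptions, only the idea of counting
timed-out queries comes from this change):

    if (resp == NULL) { /* the answer matched no active query */
            if (disp->timedout > 0) {
                    disp->timedout--; /* tolerated: a query timed out earlier */
            } else {
                    /* More unexpected answers than timed-out queries. */
                    tcp_recv_shutdown(disp, ISC_R_UNEXPECTED);
                    return;
            }
    }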
Previously, when an invalid DNS message was received over TCP, we threw
the garbage DNS message away and continued looking for a valid DNS
message that would match our outgoing queries. This logic makes sense for
UDP, because anyone can send a DNS message over UDP.
Change the logic so that the TCP connection is closed when we receive
garbage, because the other side is acting maliciously.
When an outgoing TCP connection was prematurely terminated (e.g. with a
connection reset), the dispatch code would not clean up the resources
used by such a connection, leading to dangling dns_dispentry_t entries.
When a UDP dispatch receives a mismatched response, it checks whether
there is still enough time to wait for the correct one to arrive before
the timeout fires. If there is not, the result code is set to
ISC_R_TIMEDOUT, but it is not subsequently used anywhere as 'response'
is set to NULL a few lines earlier. This results in the higher-level
read callback (resquery_response() in case of resolver code) not being
called. However, shortly afterwards, a few levels up the call chain,
isc__nm_udp_read_cb() calls isc__nmsocket_timer_stop() on the dispatch
socket, effectively disabling read timeout handling for that socket.
Combined with the fact that reading is not restarted in such a case
(e.g. by calling dispatch_getnext() from udp_recv()), this leads to the
higher-level query structure remaining referenced indefinitely because
the dispatch socket it uses will neither be read from nor closed due to
a timeout. This in turn causes fetch contexts to linger around
indefinitely, which, among other things, prevents certain cache nodes
(those containing rdatasets used by fetch contexts, like
fctx->nameservers) from being cleaned.
Fix by making sure the higher-level callback does get invoked with the
ISC_R_TIMEDOUT result code when udp_recv() determines there is no more
time left to receive the correct UDP response before the timeout fires.
This allows the higher-level callback to clean things up, preventing the
reference leak described above.
there was a race possible in which a dispatch was put into
the 'connected' state before it had a TCP handle attached,
which could cause an assertion failure in dns_dispatch_gettcp().