Also disable the semantic patch, as the code needs tweaks here and
there: some destroy functions might not destroy the object, returning
early if the object is still in use.
Found by LGTM.com (see below for the description); while it should not
happen, as the EDNS OPT RDLEN is a uint16_t, the fix is easy. A little
bit of cleanup is included too.
> In a loop condition, comparison of a value of a narrow type with a value
> of a wide type may result in unexpected behavior if the wider value is
> sufficiently large (or small). This is because the narrower value may
> overflow. This can lead to an infinite loop.
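For illustration, a minimal standalone example of the warned-about
pattern (not the actual OPT-parsing code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t limit = 300;   /* wider than the counter below */

        /*
         * Buggy: a uint8_t counter wraps from 255 back to 0, so
         * "i < limit" never becomes false and the loop spins forever:
         *
         *     for (uint8_t i = 0; i < limit; i++) { ... }
         *
         * Fixed: make the counter at least as wide as the bound.
         */
        for (uint32_t i = 0; i < limit; i++) {
            if (i + 1 == limit) {
                printf("done at %u\n", i);
            }
        }
        return 0;
    }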
FCTX_ATTR_SHUTTINGDOWN needs to be set and tested while holding the
node lock, but the rest of the attributes don't, as they are
task-locked. Making fctx->attributes atomic allows both behaviours
without races.
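A minimal sketch of the pattern with C11 atomics (hypothetical names,
not the actual fctx code):

    #include <stdatomic.h>
    #include <stdbool.h>

    #define ATTR_SHUTTINGDOWN 0x01u

    struct ctx {
        atomic_uint attributes;
    };

    /*
     * Callers that need this flag tied to other node state still take
     * the node lock around these calls; everyone else can touch the
     * remaining attribute bits without the lock, racelessly.
     */
    static void
    set_shuttingdown(struct ctx *c) {
        atomic_fetch_or(&c->attributes, ATTR_SHUTTINGDOWN);
    }

    static bool
    is_shuttingdown(struct ctx *c) {
        return ((atomic_load(&c->attributes) & ATTR_SHUTTINGDOWN) != 0);
    }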
This prevents races on fctx->client whenever a new fetch joins an
existing fetch (by calling fctx_join), as it is now invariant for the
active life of the fctx.
The dns_adb_beginudpfetch() is called only for UDP queries, but
the dns_adb_endudpfetch() is called for all queries, including
TCP. This messes up the quota counting in adb.c.
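A sketch of the symmetric pairing the fix needs (hypothetical names;
adb_beginudpfetch()/adb_endudpfetch() stand in for the dns_adb_*
quota hooks):

    #include <stdbool.h>

    static void adb_beginudpfetch(void) { /* take the UDP quota */ }
    static void adb_endudpfetch(void)   { /* release the UDP quota */ }

    struct query {
        bool udp_quota_held;
    };

    static void
    query_send_udp(struct query *q) {
        adb_beginudpfetch();
        q->udp_quota_held = true;
    }

    static void
    query_done(struct query *q) {
        /*
         * Release the quota only if it was taken, so TCP queries never
         * decrement a counter they never incremented.
         */
        if (q->udp_quota_held) {
            adb_endudpfetch();
            q->udp_quota_held = false;
        }
    }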
If a TCP connection fails while attempting to send a query to a server,
the fetch context will be restarted without marking the target server as
a bad one. If this happens for a server which:
- was already marked with the DNS_FETCHOPT_EDNS512 flag,
- responds to EDNS queries with the UDP payload size set to 512 bytes,
- does not send response packets larger than 512 bytes,
and the response for the query being sent is larger than 512 bytes, then
named will pointlessly alternate between sending UDP queries with EDNS
UDP payload size set to 512 bytes (which are responded to with truncated
answers) and TCP connections until the fetch context retry limit is
reached. Prevent such query loops by marking the server as bad for a
given fetch context if the advertised EDNS UDP payload size for that
server gets reduced to 512 bytes and it is impossible to reach it using
TCP.
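Roughly, the guard looks like this (hypothetical names and flag value,
not the actual resolver.c code):

    #include <stdint.h>

    #define FETCHOPT_EDNS512 0x0008u    /* hypothetical flag value */

    struct fctx;
    struct srvinfo;

    void mark_server_bad(struct fctx *fctx, struct srvinfo *server);

    /*
     * If this server already drove us down to a 512-byte EDNS payload
     * and now TCP to it failed too, mark it bad for this fetch context
     * instead of looping between truncated UDP answers and dead TCP.
     */
    static void
    on_tcp_connect_failed(struct fctx *fctx, struct srvinfo *server,
                          uint32_t query_options) {
        if ((query_options & FETCHOPT_EDNS512) != 0) {
            mark_server_bad(fctx, server);
        }
        /* ...then retry the fetch with the remaining servers... */
    }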
This commit was done by hand to add RUNTIME_CHECK() around stray
dns_name_copy() calls with NULL as the third argument. This covers the
edge cases where it didn't make sense to write a semantic patch, since
the usage pattern was unique or almost unique.
This second commit uses a second semantic patch to replace the calls to
dns_name_copy() with NULL as the third argument where the result was
stored in an isc_result_t variable. As dns_name_copy(..., NULL) cannot
fail gracefully when the third argument is NULL, the result handling
was just dead code. A couple of manual tweaks (removing dead labels and
unused variables) were applied on top of the semantic patch.
This commit adds RUNTIME_CHECK() around all simple dns_name_copy()
calls where the third argument is NULL, using the semantic patch from
the previous commit.
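The shape of the transformation across these commits, for reference
(excerpt-style, not a complete function):

    /*
     * Before: the result is checked even though dns_name_copy() with
     * a NULL target buffer cannot fail gracefully.
     */
    result = dns_name_copy(source, target, NULL);
    if (result != ISC_R_SUCCESS) {
        goto cleanup;    /* dead code */
    }

    /* After: assert success instead. */
    RUNTIME_CHECK(dns_name_copy(source, target, NULL) == ISC_R_SUCCESS);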
isc_event_allocate() calls isc_mem_get() to allocate the event
structure. As isc_mem_get() cannot fail softly (i.e. it never returns
NULL), isc_event_allocate() cannot return NULL either, hence we remove
the (ret == NULL) handling blocks using the semantic patch from the
previous commit.
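The removed pattern looks like this (excerpt-style):

    event = isc_event_allocate(mctx, sender, type, action, arg,
                               sizeof(*event));
    if (event == NULL) {            /* never true: isc_mem_get() */
        return (ISC_R_NOMEMORY);    /* cannot fail softly */
    }

    /* After the patch, only the allocation remains. */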
Commit 9da902a201b6d0e1bdbac0af067a59bb0a489c9c removed locking around
the fctx_decreference() call inside resume_dslookup(). This allows
fctx_unlink() to be called without the bucket lock being held, which
must never happen. Ensure the bucket lock is held by resume_dslookup()
before it calls fctx_decreference().
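A sketch of the restored ordering (excerpt-style, names modelled on
resolver.c):

    LOCK(&res->buckets[bucketnum].lock);
    bucket_empty = fctx_decreference(fctx);
    UNLOCK(&res->buckets[bucketnum].lock);
    if (bucket_empty) {
        empty_bucket(res);
    }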
1. Restore locking in the fctx_decreference() code, because the inside
of the function needs to be protected when fctx->references drops to 0.
2. Restore locking in the dns_resolver_attach() code, because two
variables are accessed at the same time and there's a slight chance of
a data race.
Although the struct dns_resolver.exiting member is protected by
stdatomics, we actually need to wait for the whole
dns_resolver_shutdown() to finish before destroying the resolver
object. Otherwise, there would be a data race, and some fctx objects
might not be destroyed yet at the time we tear down the dns_resolver
object.
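A generic sketch of the required synchronization (pthreads, all names
hypothetical; the real code uses ISC primitives):

    #include <pthread.h>
    #include <stdbool.h>

    struct resolver {
        pthread_mutex_t lock;
        pthread_cond_t  shutdown_done;
        bool            exiting;    /* shutdown requested */
        unsigned int    active;     /* fetch contexts still alive */
    };

    /* Called as each fetch context is destroyed. */
    static void
    fctx_destroyed(struct resolver *r) {
        pthread_mutex_lock(&r->lock);
        if (--r->active == 0 && r->exiting) {
            pthread_cond_signal(&r->shutdown_done);
        }
        pthread_mutex_unlock(&r->lock);
    }

    /*
     * Destruction must block until shutdown has fully finished, not
     * just until 'exiting' flips; an atomic flag alone cannot order
     * the fctx teardown against the resolver teardown.
     */
    static void
    resolver_destroy(struct resolver *r) {
        pthread_mutex_lock(&r->lock);
        while (r->active > 0) {
            pthread_cond_wait(&r->shutdown_done, &r->lock);
        }
        pthread_mutex_unlock(&r->lock);
        /* ...free resolver resources... */
    }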
This commit changes the BIND cookie algorithms to match
draft-sury-toorop-dnsop-server-cookies-00. Namely, it changes the Client Cookie
algorithm to use SipHash 2-4, adds the new Server Cookie algorithm using SipHash
2-4, and changes the default for the Server Cookie algorithm to be siphash24.
Add siphash24 cookie algorithm, and make it keep legacy aes as
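For reference, the draft's Client Cookie is SipHash-2-4, keyed with a
client secret, over the client and server IP addresses; a rough sketch
(hypothetical helper names, assuming an external SipHash-2-4
implementation):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Assumed: SipHash-2-4 with a 16-byte key and 8-byte output. */
    void siphash24(const uint8_t key[16], const uint8_t *in,
                   size_t inlen, uint8_t out[8]);

    static void
    compute_client_cookie(const uint8_t secret[16],
                          const uint8_t *client_ip, size_t client_len,
                          const uint8_t *server_ip, size_t server_len,
                          uint8_t cookie[8]) {
        /* Each address is at most 16 bytes (IPv6). */
        uint8_t input[32];

        memcpy(input, client_ip, client_len);
        memcpy(input + client_len, server_ip, server_len);
        siphash24(secret, input, client_len + server_len, cookie);
    }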
qname minimization, even in relaxed mode, can fail on some very broken
domains. In relaxed mode, instead of asking for "foo.bar NS", ask for
"_.foo.bar A" to either get a delegation or an NXDOMAIN. This will
require more queries than regular mode for proper NXDOMAINs.
If named is configured to perform DNSSEC validation and also forwards
all queries ("forward only;") to validating resolvers, negative trust
anchors do not work properly because the CD bit is not set in queries
sent to the forwarders. As a result, instead of retrieving bogus DNSSEC
material and making validation decisions based on its configuration,
named is only receiving SERVFAIL responses to queries for bogus data.
Fix by ensuring the CD bit is always set in queries sent to forwarders
if the query name is covered by an NTA.
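A sketch of the check at query-construction time (hypothetical names
except DNS_MESSAGEFLAG_CD, not the actual resolver.c code):

    /*
     * When the query will go to a forwarder and the qname is covered
     * by a negative trust anchor, set CD so the forwarder hands back
     * the bogus material instead of answering SERVFAIL, and we can
     * make our own validation decision.
     */
    if (using_forwarder && nta_covers(ntatable, qname, now)) {
        message_flags |= DNS_MESSAGEFLAG_CD;
    }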
When sending a UDP query (resquery_send), we first issue an
asynchronous isc_socket_connect and increment query->connects, then
call isc_socket_sendto2 and increment query->sends.
If we happen to cancel this query (fctx_cancelquery), we need to cancel
all operations we might have issued on this socket. If we are under
very high load, the callback from isc_socket_connect
(resquery_udpconnected) might not have fired yet. In this case we only
cancel the CONNECT event on the socket and ignore the SEND that's
waiting there (as there is an `else if`).
Then we call dns_dispatch_removeresponse, which kills the dispatcher
socket and calls isc_socket_close - but if the system is under very
high load, the send we issued earlier might still not be complete,
which triggers an assertion because we're trying to close a socket
that's still in use.
The fix is to always check whether we have incomplete sends on the
socket and cancel them if we do.
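The shape of the fix in fctx_cancelquery() (excerpt-style; the point
is two independent checks instead of an `else if`):

    if (query->connects > 0) {
        /* A connect is still pending; cancel it. */
        isc_socket_cancel(sock, NULL, ISC_SOCKCANCEL_CONNECT);
    }
    if (query->sends > 0) {
        /*
         * A send may still be in flight even if the connect callback
         * has not fired yet; cancel it too before the dispatcher can
         * close the socket.
         */
        isc_socket_cancel(sock, NULL, ISC_SOCKCANCEL_SEND);
    }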
If we try to fetch a record from the cache and need to look into the
hints database, we assume that the resolver is not primed and start
dns_resolver_prime(). The priming query is supposed to return the NS
records for "." in the ANSWER section and glue records for them in the
ADDITIONAL section, so that we can fill that info into the 'regular'
cache and not use the hints db anymore.
However, if we're using a forwarder, the priming query goes through
it, and if it's configured to return minimal answers, we won't get the
addresses of the root servers in the ADDITIONAL section. Since the
only records for the root servers we have are in the hints database,
we'll try to prime the resolver with every single query.
This patch adds a DNS_FETCHOPT_NOFORWARD flag which avoids using
forwarders if possible (that is, if we have a forward-first policy).
Using this flag on the priming fetch fixes the problem, as we get the
proper glue. With a forward-only policy the problem is non-existent:
we'll never ask for the root server addresses because we'll never have
a need to query them.
Also added a test to confirm priming queries are not forwarded.
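A sketch of the flag's effect at forwarder-selection time
(hypothetical shape, not the actual resolver.c code):

    /*
     * With forward-first policy, a fetch carrying NOFORWARD skips the
     * forwarders and resolves iteratively, so the priming response
     * comes with full glue. With forward-only there is nothing to
     * fall back to, but priming is never needed there either.
     */
    if ((fctx->options & DNS_FETCHOPT_NOFORWARD) != 0 &&
        fwdpolicy == dns_fwdpolicy_first) {
        use_forwarders = false;
    }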
When a qname-minimized fetch fails, we go back to regular resolution.
When this happens the fetch timer is already running, and we might end
up in a situation where we create a fetch for the qname-minimized
query and after that the timer is triggered and the query is retried
(fctx_try) - which relaunches the qname-minimization fetch - and since
we already have a qmin fetch for this fctx, we hit an assertion
failure.
This fix stops the timer when doing qname minimization - the qmin
fetch's internal timer should take care of all the possible timeouts.
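Sketch (hypothetical shape):

    /*
     * Suspend the parent fetch's retry timer while the qname-
     * minimization subfetch is outstanding; the subfetch has its own
     * timer, and a parent retry would try to start a second qmin
     * fetch for the same fctx.
     */
    fctx_stoptimer(fctx);
    result = start_qmin_fetch(fctx);    /* hypothetical */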
Since following a delegation resets most fetch context state, address
marks (FCTX_ADDRINFO_MARK) set inside lib/dns/resolver.c are not
preserved when a delegation is followed. This is fine for full
recursive resolution but when named is configured with "forward first;"
and one of the specified forwarders times out, triggering a fallback to
full recursive resolution, that forwarder should no longer be consulted
at each delegation point subsequently reached within a given fetch
context.
Add a new badnstype_t enum value, badns_forwarder, and use it to mark a
forwarder as bad when it times out in a "forward first;" configuration.
Since the bad server list is not cleaned when a fetch context follows a
delegation, this prevents a forwarder from being queried again after
falling back to full recursive resolution. Yet, as each fetch context
maintains its own list of bad servers, this change does not cause a
forwarder timeout to prevent that forwarder from being used by other
fetch contexts.
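The gist of the marking (the first three enum values exist in
resolver.c; the add_bad() call shape is approximate):

    typedef enum {
        badns_unreachable = 0,
        badns_response,
        badns_validation,
        badns_forwarder,    /* new: forwarder timed out
                             * under "forward first;" */
    } badnstype_t;

    /* On a forwarder timeout in a "forward first;" configuration: */
    add_bad(fctx, addrinfo, ISC_R_TIMEDOUT, badns_forwarder);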
If we know that we'll have a task pool doing a specific thing, it's
better to use this knowledge and bind tasks to task queues; this
behaves better than randomly choosing the task queue.
- use bound resolver tasks - we have a pool of tasks doing
  resolutions, so we can spread the load evenly using
  isc_task_create_bound (see the sketch below)
- quantum set universally to 25
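A sketch of the change (hypothetical loop shape; the exact
isc_task_create_bound() signature is assumed):

    for (i = 0; i < ntasks; i++) {
        /*
         * Bind resolver task i to worker queue i, with quantum 25,
         * instead of letting the task manager pick a queue at random.
         */
        result = isc_task_create_bound(taskmgr, 25,
                                       &res->buckets[i].task, (int)i);
        if (result != ISC_R_SUCCESS) {
            goto cleanup;
        }
    }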