The various factors like NS_PER_MS are now defined in a single place
and the names are no longer inconsistent. I chose the _PER_SEC names
rather than _PER_S because it is slightly clearer in isolation;
but the smaller units are always NS, US, and MS.
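For example, the consolidated definitions now look roughly like this
(illustrative; the exact set of factors may differ):

    #define US_PER_MS  1000
    #define NS_PER_US  1000
    #define NS_PER_MS  (NS_PER_US * US_PER_MS)
    #define MS_PER_SEC 1000
    #define US_PER_SEC (US_PER_MS * MS_PER_SEC)
    #define NS_PER_SEC (NS_PER_MS * MS_PER_SEC)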
When using dual-stack-servers, the covering namespace used to check
whether answers are in scope should be fctx->domain. To do this we need
to be able to distinguish forwarding due to a forwarders clause from
forwarding due to dual-stack-servers. A new flag,
FCTX_ADDRINFO_DUALSTACK, has been added to signal this.
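A minimal sketch of the intended use (member access simplified and
surrounding logic elided):

    if ((addrinfo->flags & FCTX_ADDRINFO_DUALSTACK) != 0) {
            /* forwarding due to dual-stack-servers: answers are
             * checked for being in scope against fctx->domain */
            domain = fctx->domain;
    }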
Replace the use of isc_ht API with isc_hashmap API in the dns_resolver
implementation. This requires extending the fctxbucket_t structure to
include keysize and a copy of the key, because the isc_hashmap API
needs the raw key in case the hashmap table is resized.
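The extended structure looks roughly like this (member names other
than keysize are illustrative):

    typedef struct fctxbucket {
            /* ... existing members ... */
            size_t        keysize;
            unsigned char key[DNS_NAME_MAXWIRE]; /* raw key copy, so the
                                                    hashmap can rehash
                                                    the entry when it
                                                    resizes */
    } fctxbucket_t;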
ARM states that the "eligibility" TTL is the smallest original TTL
value that is accepted for a record to be eligible for prefetching,
but the code that implements the condition does not behave in that
manner in the edge case where the TTL is equal to the configured
eligibility value.
Fix the code to check that the TTL is greater than or equal to the
configured eligibility value, instead of strictly greater than it.
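A sketch of the corrected comparison (variable names are illustrative):

    /* before: a TTL equal to the configured value was not eligible */
    eligible = (rdataset->ttl > view->prefetch_eligible);

    /* after: */
    eligible = (rdataset->ttl >= view->prefetch_eligible);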
For UDP queries, after calling dns_adb_beginudpfetch() in fctx_query(),
make sure that dns_adb_endudpfetch() is also called on the error path,
in order to adjust the quota back.
It is currently possible that dns_adb_endudpfetch() is not
called in fctx_cancelquery() for a UDP query, which results
in quotas not being adjusted back.
Always call dns_adb_endudpfetch() for UDP queries.
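A minimal sketch of the required pairing (the send helper is
hypothetical):

    dns_adb_beginudpfetch(fctx->adb, query->addrinfo);
    result = send_udp_query(fctx, query);   /* hypothetical helper */
    if (result != ISC_R_SUCCESS) {
            /* the error path must give the quota back as well */
            dns_adb_endudpfetch(fctx->adb, query->addrinfo);
    }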
In the cleanup code of the fctx_query() function there is a code path
where 'query' is still linked to 'fctx' while it is being destroyed.
Make sure that 'query' is unlinked before destroying it.
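A sketch of the fix (list and helper names are illustrative):

    if (ISC_LINK_LINKED(query, link)) {
            ISC_LIST_UNLINK(fctx->queries, query, link);
    }
    fctx_destroyquery(&query);   /* illustrative destroy helper */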
Mostly generated automatically with the following semantic patch,
except where coccinelle was confused by #ifdef in lib/isc/net.c
@@ expression list args; @@
- UNEXPECTED_ERROR(__FILE__, __LINE__, args)
+ UNEXPECTED_ERROR(args)
@@ expression list args; @@
- FATAL_ERROR(__FILE__, __LINE__, args)
+ FATAL_ERROR(args)
All we need for compression is a very small hash set of compression
offsets, because most of the information we need (the previously added
names) can be found in the message using the compression offsets.
This change combines dns_compress_find() and dns_compress_add() into
one function, dns_compress_name(), which both finds any existing suffix
and adds any new prefix to the table. The old split led to performance
problems caused by duplicate names in the compression context.
Compression contexts are now either small or large, which the caller
chooses depending on the expected size of the message. There is no
dynamic resizing.
There is a behaviour change: compression now acts on all the labels in
each name, instead of just the last few.
A small benchmark suggests this is about 2x faster.
sizeof(dns_name_t) did not change, but the boolean attributes are now
separate one-bit structure members. This allows debuggers to
pretty-print dns_name_t attributes without any special hacks, and it
also gets rid of the manual bit-manipulation code.
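The attribute bits now look roughly like this (only a subset of
members is shown; names are illustrative):

    struct dns_name {
            unsigned char *ndata;
            unsigned int   length;
            unsigned int   labels;
            struct {
                    bool absolute : 1;
                    bool readonly : 1;
                    bool dynamic  : 1;
                    bool wildcard : 1;
            } attributes;
            /* ... */
    };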
Getting the recorded value of 'edns-udp-size' from the resolver
requires a strong attach to the dns_view because we are accessing
`view->resolver`. That is not possible in some of the places (e.g. the
dns_zone unit) where `.udpsize` is accessed. By moving the .udpsize
field from `struct dns_resolver` to `struct dns_view`, we can access
the value directly even with a weakly attached dns_view and without
locking the view, because `.udpsize` can still be accessed after the
dns_view object has been shut down.
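Roughly, where a caller previously needed a strong view reference, it
can now read the value directly (the surrounding code is illustrative):

    /* before: required view->resolver, hence a strong attach */
    dns_view_attach(zone->view, &view);
    udpsize = dns_resolver_getudpsize(view->resolver);
    dns_view_detach(&view);

    /* after: the weakly attached view is sufficient */
    udpsize = zone->view->udpsize;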
While refactoring the isc_mem_getx(...) usage, a couple of places were
identified where the memory was resized manually. Use isc_mem_reget(...),
which was introduced in [GL !5440], to resize the arrays via the
function rather than with custom code.
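The pattern being replaced looks roughly like this:

    /* before: manual resize */
    new = isc_mem_get(mctx, newsize);
    memmove(new, array, oldsize);
    isc_mem_put(mctx, array, oldsize);
    array = new;

    /* after: */
    array = isc_mem_reget(mctx, array, oldsize, newsize);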
Add a new semantic patch to replace straightforward uses of:
ptr = isc_mem_{get,allocate}(..., size);
memset(ptr, 0, size);
with the new API call:
ptr = isc_mem_{get,allocate}x(..., size, ISC_MEM_ZERO);
Formerly, isc_hash32() had to modify the key in a local copy to make
hashing case-insensitive. Change the isc_siphash24() and
isc_halfsiphash24() functions to lowercase the input directly while
reading it from memory and converting the uint8_t * array into 64-bit
(or 32-bit, respectively) numbers.
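The idea, as a standalone sketch (the real code folds this into the
existing little-endian load macros):

    #include <ctype.h>
    #include <stddef.h>
    #include <stdint.h>

    /* load 8 bytes little-endian, lowercasing each byte on the fly */
    static inline uint64_t
    u8tou64_lower(const uint8_t *p) {
            uint64_t v = 0;
            for (size_t i = 0; i < 8; i++) {
                    v |= (uint64_t)tolower(p[i]) << (8 * i);
            }
            return (v);
    }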
Instead of creating dns_resolver .spillattimer when the dns_resolver_t
object is created, create it on the current loop as needed and destroy
it as soon as the timer has finished its job. This avoids the need to
manipulate the timer from a different thread.
This change prepares ground for sending DNS requests using DoT,
which, in particular, will be used for forwarding dynamic updates
to TLS-enabled primaries.
Limit the amount of database lookups that can be triggered in
fctx_getaddresses() (i.e. when determining the name server addresses to
query next) by setting a hard limit on the number of NS RRs processed
for any delegation encountered. Without any limit in place, named can
be forced to perform a large number of database lookups for each query
received, which severely impacts resolver performance.
The limit used (20) is an arbitrary value that is considered to be big
enough for any sane DNS delegation.
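A sketch of the capped loop (the constant name and the loop body are
illustrative):

    #define NS_PROCESSING_LIMIT 20

    unsigned int processed = 0;
    for (result = dns_rdataset_first(nameservers);
         result == ISC_R_SUCCESS && processed < NS_PROCESSING_LIMIT;
         result = dns_rdataset_next(nameservers))
    {
            /* look up or queue addresses for this NS name */
            processed++;
    }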
Previously:
* applications were using isc_app as the base unit for running the
application and signal handling.
* networking was handled in the netmgr layer, which would start a
number of threads, each with a uv_loop event loop.
* task/event handling was done in the isc_task unit, which used
netmgr event loops to run the isc_event calls.
In this refactoring:
* the network manager now uses isc_loop instead of maintaining its
own worker threads and event loops.
* the taskmgr that manages isc_task instances now also uses isc_loopmgr,
and every isc_task runs on a specific isc_loop bound to a specific
thread.
* applications have been updated as necessary to use the new API.
* new ISC_LOOP_TEST macros have been added to enable unit tests to
run isc_loop event loops. unit tests have been updated to use this
where needed.
* isc_timer was rewritten using the uv_timer, and isc_timermgr_t was
completely removed; isc_timer objects are now directly created on the
isc_loop event loops.
* the isc_timer API has been simplified. the "inactive" timer type has
been removed; timers are now stopped by calling isc_timer_stop()
instead of resetting to inactive.
* isc_manager now creates a loop manager rather than a timer manager.
* modules and applications using isc_timer have been updated to use the
new API.
Clean up dns_rdatalist_tordataset() and dns_rdatalist_fromrdataset()
functions by making them return void, because they cannot fail.
Clean up other functions that subsequently cannot fail.
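Typical caller cleanup, roughly:

    /* before: the result could never be anything but success */
    result = dns_rdatalist_tordataset(rdatalist, rdataset);
    RUNTIME_CHECK(result == ISC_R_SUCCESS);

    /* after: */
    dns_rdatalist_tordataset(rdatalist, rdataset);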
Cumulative fetch limit logging happens on the event of a dropped
fetch if 60 seconds have passed since the previous log message.
This change makes the log message different for the initial event
and for the later cumulative events to provide more useful information
to the system administrator.
When the `fetches-per-zone` limit is initially hit, a log message
is generated for the event of dropping the first fetch; any further
log messages occur only when another fetch is dropped and 60 seconds
have passed since the last logged message.
That logic isn't ideal, because when the counter of outstanding
fetches reaches zero, the structure holding the counters' values is
deleted, and the information about the dropped fetches accumulated
during the last minute is never logged.
Improve the fcount_logspill() function to make sure that the final
values are logged before the counter object gets destroyed.
The BUFSIZ value varies between platforms; it can be 8K on Linux and
512 bytes on MinGW. Make sure the buffers are always big enough for
the output data, enlarging or explicitly sizing them as appropriate,
to prevent truncation of the output.
Commit 7b2ea97e46034ec3db4c950100708297798826af introduced a logic bug
in resume_dslookup(): that function now only conditionally checks
whether DS chasing can still make progress. Specifically, that check is
only performed when the previous resume_dslookup() call invokes
dns_resolver_createfetch() with the 'nameservers' argument set to
something other than NULL, which may not always be the case. Failing to
perform that check may trigger assertion failures as a result of
dns_resolver_createfetch() attempting to resolve an invalid name.
Example scenario that leads to such an outcome:
1. A validating resolver is configured to forward all queries to
another resolver. The latter returns broken DS responses that
trigger DS chasing.
2. rctx_chaseds() calls dns_resolver_createfetch() with the
'nameservers' argument set to NULL.
3. The fetch fails, so resume_dslookup() is called. Due to
fevent->result being set to e.g. DNS_R_SERVFAIL, the default branch
is taken in the switch statement.
4. Since 'nameservers' was set to NULL for the fetch which caused the
resume_dslookup() callback to be invoked
(fctx->nsfetch->private->nameservers), resume_dslookup() chops
one label off fctx->nsname and calls dns_resolver_createfetch()
again, for a name containing one label less than before.
5. Steps 3-4 are repeated (i.e. all attempts to find the name servers
authoritative for the DS RRset being chased fail) until fctx->nsname
becomes stripped down to the root name.
6. Since resume_dslookup() does not check whether DS chasing can still
make progress, it strips a label off the root name and continues
its attempts at finding the name servers authoritative for the DS
RRset being chased, passing an invalid name to
dns_resolver_createfetch().
Fix by ensuring resume_dslookup() always checks whether DS chasing can
still make progress when a name server fetch fails. Update code
comments to ensure the purpose of the relevant dns_name_equal() check is
clear.
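A sketch of the unconditional progress check (helper names, member
access, and error handling are illustrative):

    if (dns_name_equal(fctx->nsname, dns_rootname)) {
            /* no label left to strip; stop chasing the DS RRset
             * instead of passing an invalid name to
             * dns_resolver_createfetch() */
            fctx_done(fctx, DNS_R_SERVFAIL, __LINE__);
            return;
    }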
"rndc fetchlimit" now also prints a list of domain names that are
currently rate-limited by "fetches-per-zone".
The "fetchlimit" system test has been updated to use this feature
to check that domain limits are applied correctly.
previously, when an iterative query returned FORMERR, resolution
would be stopped under the assumption that other servers for
the same domain would likely have the same capabilities. this
assumption is not correct; some domains have been reported for
which some but not all servers will return FORMERR to a given
query; retrying allows recursion to succeed.
it's a style violation to have REQUIRE or INSIST contain code that
must run for the server to work. this was being done with some
atomic_compare_exchange calls. these have been cleaned up. uses
of atomic_compare_exchange in assertions have been replaced with
a new macro atomic_compare_exchange_enforced, which uses RUNTIME_CHECK
to ensure that the exchange was successful.
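the pattern change, roughly (variable names illustrative):

    /* before: the exchange that must happen was inside an assertion */
    INSIST(atomic_compare_exchange_strong(&obj->refs, &expected, desired));

    /* after: */
    atomic_compare_exchange_enforced(&obj->refs, &expected, desired);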
Add isc_mutex_destroy() and isc_rwlock_destroy() calls missing from the
commits that introduced the relevant isc_mutex_init() and
isc_rwlock_init() calls:
- 76bcb4d16b776e25cc67937f7d1a2fe6e365cfd7
- 15953043124416ab1dbc857f6885ecdb167401bb
- 857f3bede37ccb419dac3816a0f96fa490af7d92
None of these omissions affect any hot paths, so they are not expected
to cause operational issues; correctness is the only concern here.
When processing a catalog zone member zone, make sure that there is no
pre-existing configured forward zone with that name.
Refactor the `dns_fwdtable_find()` function to not alter the
`DNS_R_PARTIALMATCH` result (coming from `dns_rbt_findname()`) into
`DNS_R_SUCCESS`, so that the caller can now differentiate partial
and exact matches. Patch the calling sites to expect and process
the new return value.
The aim is to get rid of the obsolete term "GLOBAL14" and instead just
refer to DNS name compression.
This mostly consists of mechanically renaming
dns_(de)compress_(get|set)methods() to
dns_(de)compress_(get|set)permitted() and replacing the related enum
with a simple flag, because compression is either on or off.
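Call sites change roughly like this (the new argument is assumed to be
a simple on/off flag, per the description above):

    /* before */
    dns_compress_setmethods(&cctx, DNS_COMPRESS_GLOBAL14);

    /* after */
    dns_compress_setpermitted(&cctx, true);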
There was a proposal in the late 1990s that name compression might
apply to extended label types, but it turned out to be unworkable. See
RFC 6891, Extension Mechanisms for DNS (EDNS(0)), Section 5, Extended
Label Types.
The remnants of the code that supported this in BIND are redundant.
Previously, tasks could be created either unbound or bound to a specific
thread (worker loop). The unbound tasks would be assigned to a random
thread every time isc_task_send() was called. Because there's no logic
that would assign the task to the least busy worker, this just creates
unpredictability. Instead of random assignment, bind all the previously
unbound tasks to worker 0, which is guaranteed to exist.
Since the fctx hash table is now self-resizing, and resolver tasks are
selected to match the thread that created the fetch context, there
shouldn't be any significant advantage to having multiple tasks per CPU;
a single task per thread should be sufficient.
Additionally, the fetch context is always pinned to the calling netmgr
thread, limiting contention to coalesced fetches only: if two threads
start the same fetch, it will be pinned to the first one to get the
bucket.
The dns_message_gettempname(), dns_message_gettemprdata(),
dns_message_gettemprdataset(), and dns_message_gettemprdatalist()
functions always succeed because memory allocation can no longer fail.
Change the API to return void and clean up all uses of the
aforementioned functions.
weakly attaching and detaching when creating and destroying the
resolver obviates the need to have a callback event to do the weak
detach. remove the dns_resolver_whenshutdown() mechanism, as it is
now unused.
for better object separation, ADB and resolver statistics counters
are now stored in the ADB and resolver objects themselves, rather than
in the associated view.
there's no longer any need for a parameter to specify whether the
function is called while holding the bucket lock, because all
unlocked uses have been removed.
After removing isc_task_onshutdown(), the isc_task_shutdown() and
isc_task_destroy() functions became obsolete.
Remove calls to isc_task_shutdown() and replace the calls to
isc_task_destroy() with isc_task_detach().
Simplify the internal logic to destroy the task when the last reference
is removed.
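Call sites change roughly as follows:

    /* before */
    isc_task_shutdown(task);
    isc_task_destroy(&task);

    /* after */
    isc_task_detach(&task);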