When serve-stale is enabled, NXDOMAIN cache entries are no longer
preserved past the normal negative cache TTL; this reduces
unnecessary cache memory consumption.
The dns_message_gettempname(), dns_message_gettemprdata(),
dns_message_gettemprdataset(), and dns_message_gettemprdatalist()
functions now always succeed because the memory allocation can no longer
fail. Change the API to return void and clean up all uses of the
aforementioned functions.
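A rough before/after sketch of a typical call site (illustrative fragment only; the actual diffs may differ):

    dns_name_t *name = NULL;

    /* before: the result had to be checked */
    result = dns_message_gettempname(msg, &name);
    if (result != ISC_R_SUCCESS) {
        goto cleanup;
    }

    /* after: allocation cannot fail, so the call returns void */
    dns_message_gettempname(msg, &name);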
Fibonacci hashing was implemented in four separate places (rbt.c,
rbtdb.c, resolver.c, zone.c). This commit combines them into a single
implementation. The hash_32() function is now replaced with
isc_hash_bits32().
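For reference, Fibonacci hashing maps a 32-bit hash value to a table index by multiplying with 2^32/phi and keeping the top bits; a minimal sketch of the idea (not the actual isc_hash_bits32() definition):

    #include <stdint.h>

    /* map a 32-bit hash value to 'bits' bits (0 < bits <= 32) using
     * the golden-ratio constant 2654435769 == round(2^32 / phi) */
    static uint32_t
    hash_bits32(uint32_t val, unsigned int bits) {
        return ((uint32_t)(val * 2654435769U) >> (32 - bits));
    }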
Historically, the inline keyword was a strong suggestion to the compiler
that it should inline the function marked inline. As compilers became
better at optimising, this functionality has receded, and using inline
as a suggestion to inline a function is obsolete. The compiler will
happily ignore it and inline something else entirely if it finds that's
a better optimisation.
Therefore, remove all occurrences of the inline keyword from static
functions inside a single compilation unit and leave the decision
whether to inline a function entirely to the compiler.
NOTE: We keep the inline keyword where its purpose is to change the
linkage behaviour.
Previously, the unreachable code paths would have to be tagged with:
INSIST(0);
ISC_UNREACHABLE();
There were also older parts of the code that used a comment annotation:
/* NOTREACHED */
Unify the handling of unreachable code paths to just use:
UNREACHABLE();
The UNREACHABLE() macro now asserts when reached and also uses
__builtin_unreachable() when such a builtin is available in the compiler.
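Its shape is roughly the following (a sketch assuming a GCC/Clang-style builtin; the actual definition may differ in detail):

    #if defined(__GNUC__) || defined(__clang__)
    #define UNREACHABLE()                    \
        do {                                 \
            INSIST(0);                       \
            __builtin_unreachable();         \
        } while (0)
    #else
    #define UNREACHABLE() INSIST(0)
    #endif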
Gcc 7+ and Clang 10+ have implemented __attribute__((fallthrough)),
which is an explicit version of the /* FALLTHROUGH */ comment we are
currently using.
Add and apply a FALLTHROUGH macro that uses the attribute if available,
but does nothing on older compilers.
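A sketch of such a macro and a typical use (first_step()/second_step() are placeholders here; the in-tree definition may differ):

    #ifdef __has_attribute
    #if __has_attribute(fallthrough)
    #define FALLTHROUGH __attribute__((fallthrough))
    #endif
    #endif
    #ifndef FALLTHROUGH
    #define FALLTHROUGH /* FALLTHROUGH */
    #endif

    switch (op) {
    case 1:
        first_step();
        FALLTHROUGH;
    case 2:
        second_step();
        break;
    }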
In one case (lib/dns/zone.c), using the macro revealed that we were
using the /* FALLTHROUGH */ comment in the wrong place; remove that
comment.
This is an optimisation: we can skip a lot of pointless work when we
know there is a DNAME present.
When we have a partial match and a DNAME above the QNAME, the closest
encloser has the same owner name as the DNAME and will have the DNAME
bit set in its type map, but we would not use it because we would
return the DNAME + RRSIG(DNAME) instead.
So there is no point in looking for it, nor in attempting to check
that it is valid for the QNAME.
'setup_delegation' depends on 'foundname' being the value returned
by 'dns_rbt_findnode' in the cache and 'find_coveringnsec' was
modifying 'foundname' when a covering NSEC was not found.
The C17 standard deprecated the ATOMIC_VAR_INIT() macro (see [1]).
Follow suit and remove the ATOMIC_VAR_INIT() usage in favor of simple
assignment of the value, as this is what all supported stdatomic.h
implementations do anyway:
* MacOSX.platform: #define ATOMIC_VAR_INIT(__v) {__v}
* Gcc stdatomic.h: #define ATOMIC_VAR_INIT(VALUE) (VALUE)
1. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1138r0.pdf
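The change at call sites is mechanical; an illustrative example:

    #include <stdatomic.h>

    /* before (deprecated since C17):
     *     static atomic_uint_fast32_t counter = ATOMIC_VAR_INIT(0);
     * after: plain initialization, which is what the supported
     * stdatomic.h implementations expand the macro to anyway */
    static atomic_uint_fast32_t counter = 0;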
Previously, the function(s) in the commit subject could fail for various
reasons, mostly allocation failures or other functions returning a
result code other than ISC_R_SUCCESS. Now the aforementioned function(s)
can no longer fail and always return ISC_R_SUCCESS.
Change the function(s) to return void and remove the extra checks in
the code that uses them.
By default, C promotes short unsigned values to signed int, which
leads to undefined behaviour when the value is shifted too far.
Force unsigned arithmetic to be performed by explicitly casting to an
unsigned type.
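An illustrative example (not taken from the tree):

    #include <stdint.h>

    uint32_t
    widen(uint16_t value) {
        /* 'value' is promoted to (signed) int, so shifting it into
         * the sign bit is undefined behaviour:
         *     return (value << 24);
         * casting first forces unsigned arithmetic: */
        return ((uint32_t)value << 24);
    }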
This commit converts the license handling to adhere to the REUSE
specification. It specifically:
1. Adds the used licenses to the LICENSES/ directory.
2. Adds an "isc" template for adding the copyright boilerplate.
3. Changes all source files to include a copyright and SPDX license
   header (see the sketch after this list); this covers all the C
   sources, documentation, zone files, and configuration files. There
   are notes in the doc/dev/copyrights file on how to add correct
   headers to new files.
4. Uses the .reuse/dep5 file to handle the rest of the files that can't
   be modified directly. The binary (or otherwise unmodifiable) files
   could instead have licenses placed next to them in <foo>.license
   files, but this would clutter the repository, and most of the files
   handled in the .reuse/dep5 file are system test files.
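For C sources the resulting header is roughly of the following shape; the exact boilerplate comes from the "isc" template and the notes in doc/dev/copyrights:

    /*
     * Copyright (C) Internet Systems Consortium, Inc. ("ISC")
     *
     * SPDX-License-Identifier: MPL-2.0
     *
     * This Source Code Form is subject to the terms of the Mozilla Public
     * License, v. 2.0.  If a copy of the MPL was not distributed with this
     * file, you can obtain one at https://mozilla.org/MPL/2.0/.
     */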
1) when, after processing a node, there were no headers that
contained active records.
When
    if (check_stale_header(node, header, &locktype, lock, &search,
                           &header_prev))
succeeds or
    if (EXISTS(header) && !ANCIENT(header))
fails for all entries in the list, leading to 'empty_node' remaining
true.
If there are no active records, we know nothing about the
current state of the name, so we treat it as ISC_R_NOTFOUND.
2) when there was a covering NOQNAME proof found or all the
active headers were negative.
When
    if (header->noqname != NULL &&
        header->trust == dns_trust_secure)
succeeds or
    if (!NEGATIVE(header))
never succeeds. Under these conditions there could (and, for
found_noqname, should) be a covering NSEC earlier in the tree.
dns_db_nodecount() can now be used to get counts from the auxiliary
RBT databases. The existing node count is returned for
tree=dns_dbtree_main; the NSEC and NSEC3 node counts are returned for
dns_dbtree_nsec and dns_dbtree_nsec3 respectively.
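A usage sketch, assuming the tree selector is passed as the second argument:

    unsigned int nmain = dns_db_nodecount(db, dns_dbtree_main);
    unsigned int nnsec = dns_db_nodecount(db, dns_dbtree_nsec);
    unsigned int nnsec3 = dns_db_nodecount(db, dns_dbtree_nsec3);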
Note that when synthesising answers involving wildcards, we look in the
cache multiple times: once for the QNAME and once for the wildcard
name, which is constructed from the names in the covering NSEC
returned by the QNAME miss.
This improves the performance of looking for NSEC and RRSIG(NSEC)
records in the cache by skipping lots of nodes in the main cache tree
that do not have these records present. This is a simplified
version of previous_closest_nsec(), which uses the same underlying
mechanism to look for NSEC and RRSIG(NSEC) records in authoritative
zones.
The auxiliary NSEC tree was already being maintained as a side effect
of looking for the covering NSEC in large zones, where there can be
lots of glue records that need to be skipped. Nodes are added
to the tree whenever an NSEC record is added to the primary tree.
They are removed when the corresponding node is removed from the
primary tree.
Having nodes in the NSEC tree without NSEC records in the primary tree
should not impact synth-from-dnssec efficiency, as that node would
have held the NSEC we would have needed to synthesise the
response. Removing the node when the NSEC RRset expires would only
cause rbtdb to return an NSEC which would be rejected at a higher
level.
The __builtin_expect() can be used to provide the compiler with branch
prediction information. The Gcc manual says[1] on the subject:
In general, you should prefer to use actual profile feedback for
this (-fprofile-arcs), as programmers are notoriously bad at
predicting how their programs actually perform.
Stop using __builtin_expect() and the ISC_LIKELY() and ISC_UNLIKELY()
macros to provide branch prediction information, as performance testing
shows that named performs better when __builtin_expect() is not being
used.
1. https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html#index-_005f_005fbuiltin_005fexpect
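For context, the removed macros were thin wrappers of roughly this form, and call sites now test the condition directly:

    #define ISC_LIKELY(x)   __builtin_expect(!!(x), 1)
    #define ISC_UNLIKELY(x) __builtin_expect(!!(x), 0)

    /* if (ISC_UNLIKELY(result != ISC_R_SUCCESS)) { ... }  becomes  */
    /* if (result != ISC_R_SUCCESS) { ... }                         */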
Originally, the hash table used in the RBT database would be resized when
it reached a certain number of elements (defined by overcommit). This was
causing resolution brownouts for busy resolvers, because the rehashing
could take several seconds to complete. This was mitigated by
pre-allocating the hash table in the RBT database used for caching to be
large enough, as determined by max-cache-size. The downside of this
solution was that the pre-allocated hash table could take up a significant
chunk of memory even when the resolver cache was otherwise empty,
because the default value of max-cache-size is 90% of available
memory.
Implement incremental resizing[1] to perform the rehashing gradually:
1. During the resize, allocate the new hash table, but keep the old
table unchanged.
2. In each lookup or delete operation, check both tables.
3. Perform insertion operations only in the new table.
4. At each insertion also move r elements from the old table to the new
table.
5. When all elements are removed from the old table, deallocate it.
To ensure that the old table is completely copied over before the new
table itself needs to be enlarged, it is necessary to increase the
size of the table by a factor of at least (r + 1)/r during resizing.
In our implementation r is equal to 1.
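A simplified sketch of the scheme with illustrative types and names (not the actual rbtdb/hash-table code): while 'old_buckets' exists, lookups consult both tables; insertions go into the new table only and migrate one old bucket per insertion (r == 1).

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct entry {
        uint32_t      hashval;
        struct entry *next;
        /* key and value omitted for brevity */
    } entry_t;

    typedef struct table {
        entry_t **buckets;      /* new table */
        entry_t **old_buckets;  /* non-NULL while rehashing */
        uint8_t   bits, old_bits;
        size_t    migrated;     /* next old bucket to drain */
    } table_t;

    static size_t
    bucket_index(uint32_t hashval, uint8_t bits) {
        return (hashval >> (32 - bits)); /* Fibonacci-style top bits */
    }

    /* step 4: on every insertion, move (r == 1) old bucket to the new table */
    static void
    migrate_one_bucket(table_t *t) {
        if (t->old_buckets == NULL) {
            return;
        }
        size_t i = t->migrated++;
        for (entry_t *e = t->old_buckets[i], *next; e != NULL; e = next) {
            next = e->next;
            size_t idx = bucket_index(e->hashval, t->bits);
            e->next = t->buckets[idx];
            t->buckets[idx] = e;
        }
        t->old_buckets[i] = NULL;
        if (t->migrated == ((size_t)1 << t->old_bits)) {
            free(t->old_buckets); /* step 5: old table fully drained */
            t->old_buckets = NULL;
        }
    }

    /* step 3: insertions go into the new table only */
    static void
    table_add(table_t *t, entry_t *e) {
        migrate_one_bucket(t);
        size_t idx = bucket_index(e->hashval, t->bits);
        e->next = t->buckets[idx];
        t->buckets[idx] = e;
    }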
The downside of this approach is that the old table and the new table
could stay in memory for longer when there are no new insertions into
the hash table for prolonged periods of time as the incremental
rehashing happens only during the insertions.
The upside of this approach is that it's no longer necessary to
pre-allocate a large hash table, because the RBT hash table rehashing
no longer causes resolution brownouts and we can therefore use the
memory as needed.
1. https://en.m.wikipedia.org/wiki/Hash_table#Dynamic_resizing
Remove the dynamic registration of result codes. Convert isc_result_t
from unsigned int + #defines into a 32-bit enum type in the grand unified
<isc/result.h> header. Keep the existing values of the result codes,
even at the expense of the description and identifier tables being
unnecessarily large.
Additionally, add a couple of:
    switch (result) {
    [...]
    default:
        break;
    }
statements where the compiler now complains about missing enum values in
the switch statement.
As libdns is no longer exported, it's not necessary to have
init and shutdown functions. The only purpose they served
was to create a private mctx and run dst_lib_init(), which
can be called directly instead.
- fixed a size comparison using "signed int" that failed if the file
size was more than 2GB, since that was treated as a negative number.
- incidentally renamed deserialize32() to just deserialize(); we no
longer have separate 32-bit and 64-bit rbtdb implementations.
The additional processing method has been expanded to take the
owner name of the record, as HTTPS and SVCB need it to process "."
in service form.
The additional section callback can now return the RRset that was
added. We use this when adding CNAMEs. Previously, the recursion
would stop if it detected that a record being added already exists. With
CNAMEs this rule doesn't work, as what ultimately matters is the RRset
at the target of the CNAME, not the presence of the CNAME itself.
Returning the record allows the caller to restart with the target
name. As CNAMEs can form loops, loop protection was added.
As HTTPS and SVCB records can produce infinite chains, we prevent this
by tracking the recursion depth and stopping if we go too deep.
When looking up a zonecut in cache, we use 'dns_rbt_findnode' to find
the closest matching node. This function however does not take into
account stale nodes. When we do find a stale node and use it, this
has implications for subsequent lookups. For example, this may break
QNAME minimization because we are using a deeper zonecut than we should
have.
Check the header for staleness; if it is stale and stale entries are
not accepted, look for the deepest zonecut from this node up.
There are some occurrences where we check whether a header exists in the
rbtdb. These cases also require that the header is not marked as
ancient (i.e. ready for cleanup). These cases involve finding certain
data in the cache.
Increasing the nodelock count had a major impact on the memory footprint
in scenarios where multiple rbtdb structures are created, e.g. when
hosting many zones on a single server.
This reverts commit 03446843856a3efbaaec41409b5eea5542501118 and sets
the nodelock count back to the previously used values.
The isc/platform.h header was left empty, as the things it contained had
either already moved to config.h or to appropriate headers. This is just
the final cleanup commit.
In rdataset_setownercase() and rdataset_getownercase() we now use the
tolower()/toupper()/isupper() functions as appropriate instead of
rolling our own code.
Windows support has been completely removed from the source tree,
and BIND 9 no longer supports native compilation on Windows.
We might consider reviewing a mingw-w64 port if contributed by an
external party, but no development effort will be put into making
BIND 9 compile and run on Windows again.
Rather than keeping an expensive 'expired' field (formerly 'stale_ttl')
in the rdataset structure that is only used to print a comment on
ancient RRsets, reuse the TTL field of the RRset.
Commit a83c8cb0afd88d54b9cf67239f2495c9b0391e97 updated masterdump so
that stale records in "rndc dumpdb" output no longer show 0 TTLs. In
this commit we rename the `rdataset->stale_ttl` field to
`rdataset->expired` to make its purpose clearer, and set it to zero in
cases where it's unused.
Add 'rbtdb->serve_stale_ttl' to various checks so that stale records
are not purged from the cache when they've been stale for RBTDB_VIRTUAL
(300) seconds.
Increment 'ns_statscounter_usedstale' when a stale answer is used.
Note: There was a question of whether 'overmem_purge' should be
purging ancient records, instead of stale ones. It is left as purging
stale records, since stale records could take up the majority of the
cache.
This submission is copyrighted by Akamai Technologies, Inc. and provided
under an MPL 2.0 license.
This commit was originally authored by Kevin Chen, and was updated by
Matthijs Mekking to match recent serve-stale developments.
Configuring with --enable-mutex-atomics flagged these incorrectly
initialised variables on systems where pthread_mutex_init doesn't
just zero out the structure.
The last change to rdataset_getownercase() left the code in a state
where it was a mix of micro-optimizations (manual loop unrolling,
complicated bit shifts) and code that would always rewrite the character
even if it stayed the same after the transformation.
This commit makes sure that we modify only the characters that actually
need to change, removes the manual loop unrolling, and replaces the
convoluted bit arithmetic with a simple shift and bit-and.
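Illustrative only, with hypothetical field and variable names rather than the ones in rbtdb.c: restoring the stored case of a byte now boils down to a shift, a bit-and, and a conditional toupper():

    #include <ctype.h>
    #include <stddef.h>
    #include <stdint.h>

    /* hypothetical helper: apply a stored per-byte case bitmap to a name */
    static void
    apply_case(unsigned char *ndata, size_t length, const uint8_t *bits) {
        for (size_t i = 0; i < length; i++) {
            /* bit i set => the original byte was upper case; only
             * touch the byte if it actually needs to change */
            if (((bits[i / 8] >> (i % 8)) & 1) != 0 &&
                islower(ndata[i]))
            {
                ndata[i] = (unsigned char)toupper(ndata[i]);
            }
        }
    }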
dns_message_gettempname() now returns a pointer to an initialized
name associated with a dns_fixedname_t object. It is no longer
necessary to allocate a buffer for temporary names associated with
the message object.
It is more intuitive to have the 'max-stale-ttl' countdown as the
RRset TTL, instead of a 0 TTL. This information was already available
in a comment ("; stale (will be retained for x more seconds)"), but
Support suggested putting it in the TTL field instead.
Before binding an RRset, check the time to see whether the record is
stale (or perhaps even ancient). Marking a header stale or ancient
happens only when looking up an RRset in the cache, but binding an RRset
can also happen on other occasions (for example, when dumping the
database).
Check the time and compare it to the header. If, according to the
time, the entry is stale but not ancient, set the STALE attribute.
If, according to the time, it is ancient, set the ANCIENT attribute.
We could mark the header itself stale or ancient here, but that requires
locking, which is why we only compare the current time against
the rdh_ttl.
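A rough sketch of the comparison (rdh_ttl and serve_stale_ttl as above; the DNS_RDATASETATTR_* attribute names are assumed here, and the exact code differs):

    /* the entry has expired if its TTL lies in the past */
    if (header->rdh_ttl < now) {
        if (header->rdh_ttl + rbtdb->serve_stale_ttl < now) {
            /* past the serve-stale window: ancient */
            rdataset->attributes |= DNS_RDATASETATTR_ANCIENT;
        } else {
            /* expired, but still usable as a stale answer */
            rdataset->attributes |= DNS_RDATASETATTR_STALE;
        }
    }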
Adjust the test to check the dumped database before querying for data.
In the dumped file the entry should be marked as stale, despite no cache
lookup having happened since the initial query.
When introducing change 5149, "rndc dumpdb" started to print a line
above a stale RRset, indicating how long the data will be retained.
At that time, I thought it should also be possible to load
a cache from file. But if a TTL has a value of 0 (because it is stale),
stale entries wouldn't be loaded from file. So, I added the
'max-stale-ttl' to TTL values, and adjusted the $DATE accordingly.
Since we don't actually have a "load cache from file" feature, this
is premature and is causing confusion among operators. This commit
changes the 'max-stale-ttl' adjustments.
A check in the serve-stale system test is added for a non-stale
RRset (longttl.example) to make sure the TTL in cache is sensible.
Also, the comment above stale RRsets could contain nonsensical
values. A possible reason why this may happen is that the RRset was
marked stale but the 'max-stale-ttl' has since passed (so the RRset is
actually awaiting cleanup). This would lead to the "will be retained"
value being negative, but since it is stored in a uint32_t, you would
get a nonsensical value instead (e.g. 4294362497).
To mitigate this, we now also check that the header is not ancient.
In addition, we check whether the stale_ttl would be negative, and if
so we set it to 0. Most likely this will not happen, because the header
would already have been marked ancient, but there is a possible race
where 'rdh_ttl + serve_stale_ttl' has passed but the header has not yet
been checked for staleness.
The dboption DNS_DBFIND_STALEONLY caused confusion because it implies
we are looking for stale data **only**, ignoring any active RRsets in
the cache. Rename it to DNS_DBFIND_STALETIMEOUT, as this makes it
clearer that the option relates to a lookup triggered by
"stale-answer-client-timeout".
Rename other usages of "staleonly" to use "lookup due to ..." instead.
Also rename related function and variable names.
*** CID 318094: Null pointer dereferences (REVERSE_INULL)
/lib/dns/rbtdb.c: 1389 in newversion()
1383 version->xfrsize = rbtdb->current_version->xfrsize;
1384 RWUNLOCK(&rbtdb->current_version->rwlock, isc_rwlocktype_read);
1385 rbtdb->next_serial++;
1386 rbtdb->future_version = version;
1387 RBTDB_UNLOCK(&rbtdb->lock, isc_rwlocktype_write);
1388
CID 318094: Null pointer dereferences (REVERSE_INULL)
Null-checking "version" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.
1389 if (version == NULL) {
1390 return (result);
1391 }
1392
1393 *versionp = version;
1394
This commit fixes a race that could happen when two or more threads had
failed to refresh the same RRset: the threads could simultaneously
attempt to update the header->last_refresh_fail_ts field in
check_stale_header(), a field used to implement stale-refresh-time.
Making this field atomic avoids the race.
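A minimal sketch of the idea using standard C11 atomics, with uint32_t standing in for the actual timestamp type; the real field and surrounding code in rbtdb.c differ:

    #include <stdatomic.h>
    #include <stdint.h>

    struct rdatasetheader {
        /* ... other fields ... */
        _Atomic(uint32_t) last_refresh_fail_ts;
    };

    /* in check_stale_header(): record the failed refresh; concurrent
     * writers no longer race on the field */
    static void
    note_refresh_failure(struct rdatasetheader *header, uint32_t now) {
        atomic_store_explicit(&header->last_refresh_fail_ts, now,
                              memory_order_relaxed);
    }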