- there are now two functions for getting rdataslab size:
dns_rdataslab_size() is for full slabs and dns_rdataslab_sizeraw()
for raw slabs. there is no longer a need for a reservelen parameter.
- dns_rdataslab_count() also no longer takes a reservelen parameter.
(currently it's never used for raw slabs, so there is no _countraw()
function.)
- dns_rdataslab_rdatasize() has been removed, because
dns_rdataslab_sizeraw() can do the same thing.
- dns_rdataslab_merge() and dns_rdataslab_subtract() both take
slabheader parameters instead of character buffers, and the
reservelen parameter has been removed.
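roughly, the simplified entry points look something like this (a hedged
sketch based on the description above, not copied verbatim from
<dns/rdataslab.h>; the exact parameter lists may differ):

    /* full slab: the reserved dns_slabheader_t is included in the size */
    unsigned int
    dns_rdataslab_size(dns_slabheader_t *header);

    /* raw slab: no header; this also covers what _rdatasize() used to do */
    unsigned int
    dns_rdataslab_sizeraw(unsigned char *slab);

    unsigned int
    dns_rdataslab_count(dns_slabheader_t *header);

    /* merge (and, mirroring it, subtract) now takes slabheaders
     * instead of raw character buffers */
    isc_result_t
    dns_rdataslab_merge(dns_slabheader_t *oheader, dns_slabheader_t *nheader,
                        isc_mem_t *mctx, dns_rdataclass_t rdclass,
                        dns_rdatatype_t type, unsigned int flags,
                        dns_slabheader_t **theaderp);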
if both rdataslabs being compared have zero length, return true.
also, since these functions are only ever called on slabheaders
with sizeof(dns_slabheader_t) as the reserve length, we can
simplify the API: remove the reservelen argument, and pass the
slabs as type dns_slabheader_t * instead of unsigned char *.
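as an illustration, the comparison helpers go from something like the
first form to the second (hedged prototypes, not the verbatim header):

    /* before */
    bool
    dns_rdataslab_equal(unsigned char *slab1, unsigned char *slab2,
                        unsigned int reservelen);

    /* after: the reserve length is implied by the slabheader type,
     * and two zero-length slabs now compare equal */
    bool
    dns_rdataslab_equal(dns_slabheader_t *slab1, dns_slabheader_t *slab2);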
Database versions are not used in cache databases. Some places in
qpcache.c required the version argument to be NULL; others marked it
as UNUSED. Unify all cases to require version to be NULL.
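In practice the unification means every affected entry point in qpcache.c
now asserts the same precondition (REQUIRE() being the usual ISC run-time
assertion):

    REQUIRE(version == NULL); /* cache databases have no versions */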
Add a new related_headers() function that simplifies the code
flow in qpcache_findrdataset(). Also use the check_stale_header() function
to remove code duplication.
Add a helper function both_headers() that unifies the slabheader
matching for a simple type: it returns true when both the type and
the matching RRSIG have been found.
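A sketch of what such a helper can look like (hypothetical signature;
the dns_typepair_t encoding and the DNS_TYPEPAIR_VALUE()/DNS_SIGTYPE()
macro names are as I recall them and may not match exactly):

    static bool
    both_headers(dns_slabheader_t *header, dns_rdatatype_t type,
                 dns_slabheader_t **foundp, dns_slabheader_t **foundsigp) {
        dns_typepair_t matchtype = DNS_TYPEPAIR_VALUE(type, 0);
        dns_typepair_t sigmatchtype = DNS_SIGTYPE(type);

        if (header->type == matchtype) {
            *foundp = header;
        } else if (header->type == sigmatchtype) {
            *foundsigp = header;
        }

        /* stop scanning the slabheader list once both are found */
        return (*foundp != NULL && *foundsigp != NULL);
    }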
The new maybe_update_headers() function unifies the LRU updates to the
slabheaders that were scattered all over the place. More calls to update
headers after bindrdatasets() were also added for completeness.
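Conceptually the centralized update is the usual LRU "touch"; something
along these lines (illustrative only; the 'lru' and 'link' member names
are assumptions, and the real helper also checks the lock type and
whether cache updates are disabled):

    /* move the just-used header to the front of its bucket's LRU list */
    ISC_LIST_UNLINK(qpdb->lru[idx], header, link);
    ISC_LIST_PREPEND(qpdb->lru[idx], header, link);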
This removes code duplication between the dual bindrdataset() calls. It
also unifies the handling, as there were small differences between the
calls: one variant was checking for the !NEGATIVE(found) condition and the
other wasn't, and it is technically OK to do the check in all variants.
The check_stale_header() function now updates header_prev directly
so it doesn't have to be handled in the outer loop; it's always
set to the correct value of the previous header in the chain.
some code was left in the cache database implementation after
it was separated from the zone database, and can be cleaned up
and refactored now:
- the DNS_SLABHEADERATTR_IGNORE flag is never set in the cache
- support for loading the cache from a file was removed, but the add()
function still had a 'loading' flag that's always false
- two different macros were used for checking the
DNS_SLABHEADERATTR_NONEXISTENT flag - EXISTS() and NONEXISTENT().
it's clearer to just use EXISTS() (see the sketch after this list).
- the cache doesn't support versions, so it isn't necessary to
walk down the 'down' pointer chain when iterating through the
cache or looking for a header to update. 'down' now only points
to records that are deleted from the cache but have not yet been
purged from memory. this allows us to simplify both the iterator
and the add() function.
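for reference, the two macros are just opposite tests of the same
attribute bit (a sketch; the real macros load ->attributes atomically),
so keeping only the positive form avoids double negatives like
!NONEXISTENT(header):

    #define NONEXISTENT(h) \
        (((h)->attributes & DNS_SLABHEADERATTR_NONEXISTENT) != 0)
    #define EXISTS(h) \
        (((h)->attributes & DNS_SLABHEADERATTR_NONEXISTENT) == 0)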
Reduce the number of qpzone_ref() and qpzone_unref() calls in
qpzone_detachnode() by relying on call_rcu() to delay
the destruction of the lock buckets.
Instead of having many node_lock_count * sizeof(<member>) arrays, pack
all the members into a qpcache_bucket_t struct that is cacheline aligned
and have a single array of those.
Additionally, make both the head and the tail of isc_queue_t padded, not
just the head, to prevent false sharing of the lock-free structure with
the lock that follows it.
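A sketch of the shape of that change, with assumed member names (the real
qpcache_bucket_t carries more per-bucket state; the ISC/DNS includes are
omitted here):

    #include <assert.h>
    #include <stdalign.h>

    #define QPCACHE_CACHELINE 64 /* assumed cacheline size */

    typedef struct qpcache_bucket {
        /* aligning the first member aligns (and pads) the whole struct */
        alignas(QPCACHE_CACHELINE) isc_rwlock_t lock; /* the node lock */
        dns_slabheaderlist_t lru;  /* per-bucket LRU list (name assumed) */
        isc_queue_t deadnodes;     /* nodes waiting to be cleaned up */
    } qpcache_bucket_t;

    static_assert(sizeof(qpcache_bucket_t) % QPCACHE_CACHELINE == 0,
                  "buckets must not share cachelines");

A single array of qpcache_bucket_t then replaces the parallel
node_lock_count-sized per-member arrays.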
In #1870, the expiration time of ANCIENT records was printed, but
ancient records are actually very short-lived, and the information
carries little value.
Instead of printing the expiration of ANCIENT records, print the
expiration time of STALE records.
The find_coveringnsec() function was getting 'now' from two sources:
search->now and a separate 'now' argument. Things like this are ticking
time bombs; remove the extra 'now' argument and use a single source of 'now'.
When the mark_ancient() helper function was introduced, a couple of places
with duplicate (or almost duplicate) code were missed. Move the
mark_ancient() function closer to the top of the file, and correctly use
it in places that mark the header as ANCIENT.
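The helper itself is small; roughly (a sketch: DNS_SLABHEADER_SETATTR()
and HEADERNODE() are used as I recall them, and setttl() stands in for
whatever local helper resets the expiry):

    static void
    mark_ancient(dns_slabheader_t *header) {
        setttl(header, 0); /* expire immediately (assumed local helper) */
        DNS_SLABHEADER_SETATTR(header, DNS_SLABHEADERATTR_ANCIENT);
        HEADERNODE(header)->dirty = 1; /* schedule the node for cleanup */
    }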
If we know that the header has ZEROTTL set, the server should never send
stale records for it and the TTL should never be anything other than 0.
The comment was already there, but the code was not matching the
comment.
The old name was misleading as it never meant a time-to-live, i.e. the
number of seconds from now when the header should expire. The true meaning
was an expiration time, i.e. now + ttl. This was the original design bug
that caused the slip when we assigned header->ttl to rdataset->ttl.
Because the name seemed to match, nobody questioned the correctness of
the code, either during the MR review or during the numerous re-reviews
when we were searching for the cause of the 54-year TTL.
When the header has been marked as ANCIENT but the ttl hasn't been
reset (this happens in a couple of places), the rdataset TTL would be
set to the header timestamp instead of a reasonable TTL value.
Since this header has already expired (ANCIENT is set), set the
rdataset TTL to 0 and don't reuse this field to print the expiration
time when dumping the cache. Instead of printing the time, we now
just print 'expired (awaiting cleanup)'.
the search for the deepest known zone cut in the cache could
improperly reject a node containing stale data, even if the
NS rdataset wasn't the data that was stale.
this change also improves the efficiency of the search by
stopping it when both NS and RRSIG(NS) have been found.
Change the names of the node reference counting functions
and add comments to make the mechanism easier to understand:
- newref() and decref() are now called qpcnode_acquire()/
qpznode_acquire() and qpcnode_release()/qpznode_release()
respectively; this reflects the fact that they modify both
the internal and external reference counters for a node.
- qpcnode_newref() and qpznode_newref() are now called
qpcnode_erefs_increment() and qpznode_erefs_increment(), and
qpcnode_decref() and qpznode_decref() are now called
qpcnode_erefs_decrement() and qpznode_erefs_decrement(),
to reflect that they only increase and decrease the node's
external reference counters, not internal.
This removes the db_nodelock_t structure and changes the node_locks
array to be composed only of isc_rwlock_t pointers. The .references
member has been moved to qpdb->references, in addition to
common.references, which is external to dns_db API users. The .exiting
member has been completely removed as it has no use when reference
counting is used correctly.
The origin_node in qpcache was always NULL, so we can remove the
getoriginnode() function and the origin_node pointer, as
dns_db_getoriginnode() correctly returns ISC_R_NOTFOUND when the
method is not implemented.
Clean up the pattern in the decref() functions in both qpcache.c and
qpzone.c, so it follows a similar pattern to the one we already have in the
newref() function.
Previously, this function always acquired a node write lock if it
might need node cleanup in case the reference decrements to 0. In
fact, the lock is unnecessary if the reference is larger than 1, and it
can be optimized as an "easy" case. This optimization could even be
"necessary": in some extreme cases, many worker threads could repeatedly
acquire and release the reference on the same node, resulting in
severe lock contention for nothing (as the ref wouldn't decrement to 0
in most cases). This change prevents a noticeable performance
drop, such as query timeouts, in such cases.
Co-authored-by: JINMEI Tatuya <jtatuya@infoblox.com>
Co-authored-by: Ondřej Surý <ondrej@isc.org>
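The shape of the fast path, as a sketch (the function, type, and macro
names here are illustrative, not the exact qpcache.c/qpzone.c code;
isc_refcount_decrement() returns the counter value before the decrement):

    static void
    node_release(node_t *node) {
        /*
         * Fast path: the value before the decrement was larger than 1,
         * so other references remain, the node cannot need cleanup, and
         * no node write lock is required.
         */
        if (isc_refcount_decrement(&node->references) > 1) {
            return;
        }

        /*
         * Slow path: we dropped the last reference.  Take the node write
         * lock and re-check before cleaning up, because another thread
         * may have acquired a new reference in the meantime.
         */
        NODE_WRLOCK(&node->lock);
        if (isc_refcount_current(&node->references) == 0) {
            cleanup_node(node);
        }
        NODE_WRUNLOCK(&node->lock);
    }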
All the database implementations share the same names for the methods
implementing the database. That has some advantages, like knowing what
to expect, but it turns out that any time such a method shows up in any
kind of tracing, be it perf record, a backtrace, or anything else that
uses symbol names, it is very hard to distinguish whether the find()
belongs to the qpcache, qpzone, builtin, or sdlz implementation.
Make at least the names for qpzone and qpcache unique.
The new log message is emitted when adding or updating an RRset
fails due to exceeding the max-records-per-type limit. The log includes
the owner name and type, corresponding zone name, and the limit value.
It will be emitted when loading a zone file, during an inbound zone transfer
(both AXFR and IXFR), when handling a DDNS update, or when updating a cache DB.
It's especially helpful in the case of zone transfer, since the
secondary side doesn't have direct access to the offending zone data.
It could also be used for max-types-per-name, but this change
doesn't implement it yet as it's much less likely to happen
in practice.
Originally, dns_dbnode_t was typedef'ed to void. This allowed
some flexibility, but using (void *) just removes any type-checking that
C might have. Instead of using:
typedef void dns_dbnode_t;
use a trick and declare the type as a never-defined structure:
typedef struct dns_dbnode dns_dbnode_t;
This allows C compilers to employ type-checking while the
structure itself never has to be defined, because the actual
'storage' is never accessed through the dns_dbnode_t type.
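A small illustration of what the opaque typedef buys (use_node() and
struct other are made up for the example):

    /* before: */
    /* typedef void dns_dbnode_t; */

    /* after: the struct tag is declared but never defined anywhere */
    typedef struct dns_dbnode dns_dbnode_t;

    struct other;                      /* some unrelated type */
    void use_node(dns_dbnode_t *node); /* hypothetical consumer */

    void
    example(struct other *ptr) {
        (void)ptr;
        /* use_node(ptr); */
        /*
         * With the old 'typedef void dns_dbnode_t;' the call above was
         * silently accepted (every object pointer converts to void *);
         * with the opaque struct typedef it is a compile-time error.
         */
    }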
when the QP cache was adapted from the RBT database, some names
weren't changed. this could be confusing, so let's change them now.
also, we no longer need to include rbt.h.
Remove the complicated mechanism that could be (in theory) used by
external libraries to register new categories and modules with
statically defined lists in <isc/log.h>. This is similar to what we
have done for <isc/result.h> result codes. All the libraries are now
internal to BIND 9, so we don't need to provide a mechanism to register
extra categories and modules.
Instead of outright refusing to add new RR types to the cache, be a bit
smarter:
1. If the new header type is in our priority list, we always add the
entry, whether positive or negative, at the beginning of the list.
2. If the new header type is a negative entry and we are over the limit,
we mark it as ancient immediately, so it gets evicted from the cache
as soon as possible.
3. Otherwise add the new header after the priority headers (or at the
head of the list).
4. If we are over the limit, evict the last entry on the normal header
list.
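A sketch of that decision logic over a node's slabheader list (all the
helper names and types below are hypothetical stand-ins for the real
qpcache add path; NEGATIVE() is the negative-entry attribute check used
elsewhere in the file):

    /* hypothetical helpers standing in for the real qpcache code */
    static bool prio_type(dns_typepair_t type);
    static bool over_limit(qpcnode_t *node, uint32_t maxtypes);
    static void prepend_header(qpcnode_t *node, dns_slabheader_t *h);
    static void insert_after_priority(qpcnode_t *node, dns_slabheader_t *h);
    static void expire_last_header(qpcnode_t *node);
    static void mark_ancient(dns_slabheader_t *h);

    static void
    add_header(qpcnode_t *node, dns_slabheader_t *newheader,
               uint32_t maxtypes) {
        if (prio_type(newheader->type)) {
            /* 1. priority types, positive or negative, go to the front */
            prepend_header(node, newheader);
            return;
        }

        if (NEGATIVE(newheader) && over_limit(node, maxtypes)) {
            /* 2. negative entry while over the limit: keep it, but mark
             *    it ancient right away so it is evicted as soon as possible */
            mark_ancient(newheader);
        }

        /* 3. everything else goes right after the priority headers
         *    (or at the head of the list if there are none) */
        insert_after_priority(node, newheader);

        if (over_limit(node, maxtypes)) {
            /* 4. still over the limit: expire the last normal header */
            expire_last_header(node);
        }
    }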
Add HTTPS, SVCB, SRV, PTR, NAPTR, DNSKEY and TXT records to the list of
the priority types that are put at the beginning of the slabheader list
for faster access and to avoid eviction when there are more types than
the max-types-per-name limit.
Previously, the number of RR types for a single owner name was limited
only by the maximum number of types (64k). As the data structure
that holds the RR types for a database node is just a linked list, and
there are places where we walk through the whole list (again and
again), adding a large number of RR types for a single owner name
would slow down processing of that name (database node).
Add a configurable limit to cap the number of RR types for a single
owner. This is enforced at the database (rbtdb, qpzone, qpcache) level
and configured with the new max-types-per-name option, which
can be set globally, per-view, and per-zone.
Previously, the number of RRs in an RRset was internally unlimited.
As the data structure that holds the RRs is just a linked list, and
there are places where we walk through all of the RRs, adding an
RRset with a huge number of RRs would slow down processing of that
RRset.
Add a configurable limit to cap the number of RRs in a single RRset.
This is enforced at the database (rbtdb, qpzone, qpcache) level and
configured with the new max-records-per-type option, which can
be set globally, per-view, and per-zone.
Replace the ISC_LIST-based deadnodes implementation with isc_queue, which
is wait-free: we don't have to acquire either the tree lock or the node
lock to append nodes to the queue, and the cleaning process can
splice the queue into a local copy without locking the list.
Currently, there's little benefit to this as we need to hold those
locks anyway, but in the future, as we move to an RCU-based implementation,
this will be ready.
To align the cleaning with our event-loop-based model, remove the
hardcoded count for the node locks and use the number of event loops
instead. This way, each event loop can have its own cleaning as part of
the process. Use uniform random numbers to spread the nodes evenly
between the buckets (instead of hashing the domain name).
The expireheader() call in the expire_ttl_headers() function
is erroneous as it passes the 'nlocktypep' and 'tlocktypep'
arguments in the wrong order, which then causes an assertion
failure.
Fix the order of the arguments so it corresponds to the function's
prototype.
there were some structure names used in qpcache.c and qpzone.c that
were too similar to each other and could be confusing when debugging.
they have been changed as follows:
in qpcache.c:
- changed_t was unused, and has been removed
- search_t -> qpc_search_t
- qpdb_rdatasetiter_t -> qpc_rditer_t
- qpdb_dbiterator_t -> qpc_dbiter_t
in qpzone.c:
- qpdb_changed_t -> qpz_changed_t
- qpdb_changedlist_t -> qpz_changedlist_t
- qpdb_version_t -> qpz_version_t
- qpdb_versionlist_t -> qpz_versionlist_t
- qpdb_search_t -> qpz_search_t
- qpdb_load_t -> qpz_load_t
when calling dns_qp_lookup() from qpcache, instead of passing
'foundname' so that a name would be constructed from the QP key,
we now just use the name field in the node data. this makes
dns_qp_lookup() run faster.
the same optimization has also been added to qpzone.
the documentation for dns_qp_lookup() has been updated to
discuss this performance consideration.
when the cache is over memory, we purge from the LRU list until
we've freed the approximate amount of memory to be added. this
approximation could fail because the memory allocated for nodenames
wasn't being counted.
add a dns_name_size() function so we can look up the size of nodenames,
then add that to the purgesize calculation.
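the adjusted estimate then looks roughly like this (a sketch; the
'purgesize' variable, the node's 'name' member, and rdataset_size()
are assumptions, dns_name_size() is the new function mentioned above):

    /* include the owner-name allocation in the per-node size estimate */
    purgesize = sizeof(*node) + dns_name_size(&node->name) +
                rdataset_size(header);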
in a cache database, unlike zones, NSEC3 records are stored in
the main tree. it is not necessary to maintain a separate 'nsec3'
tree, nor to have code in the dbiterator implementation to traverse
from one tree to another.
(if we ever implement synth-from-dnssec using NSEC3 records, we'll
need to revert this change. in the meantime, simpler code is better.)
the QP database doesn't support relative names as the RBTDB did, so
there's no need for a 'new_origin' flag or to handle DNS_R_NEWORIGIN
result codes.
- remove unneeded struct members and misleading comments.
- remove unused parameters for static functions.
- rename 'find_callback' to 'delegating' for consistency with qpzone;
the find callback mechanism is not used in QP databases.