ALPN identifiers are defined as 1*255OCTET in RFC 9460. commatxt_fromtext
was not rejecting invalid inputs produced by a missing level of escaping,
which were later caught by dns_rdata_fromwire on reception.
These inputs should have been rejected:
svcb in svcb 1 1.svcb alpn=\,abc
svcb1 in svcb 1 1.svcb alpn=a\,\,abc
and generated 00 03 61 62 63 and 01 61 00 02 61 62 63 respectively.
Including commas in the alpn correctly requires double escaping:
svcb in svcb 1 1.svcb alpn=\\,abc
svcb1 in svcb 1 1.svcb alpn=a\\,\\,abc
and generate 04 2C 61 62 63 and 06 61 2C 2C 61 62 63 respectively.
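
For illustration, here is a minimal standalone sketch (not the BIND
parser) that decodes the alpn SvcParamValue wire format. It shows why
the under-escaped wire forms above are invalid: each alpn-id is
length-prefixed and must be 1..255 octets, so a zero-length id such as
the leading 00 is rejected, while 04 2C 61 62 63 decodes as the single
id ",abc".

#include <stdint.h>
#include <stdio.h>

/*
 * Decode an alpn SvcParamValue: a sequence of length-prefixed
 * alpn-ids, each 1*255 octets (RFC 9460).  Returns 0 on success,
 * -1 on a malformed value.
 */
static int
decode_alpn(const uint8_t *buf, size_t len) {
    for (size_t i = 0; i < len;) {
        uint8_t idlen = buf[i++];
        if (idlen == 0 || i + idlen > len) {
            return -1; /* e.g. "00 03 61 62 63" fails here */
        }
        printf("alpn-id: \"%.*s\"\n", (int)idlen, &buf[i]);
        i += idlen;
    }
    return 0;
}

int
main(void) {
    const uint8_t good[] = { 0x04, 0x2C, 0x61, 0x62, 0x63 }; /* ",abc" */
    const uint8_t bad[] = { 0x00, 0x03, 0x61, 0x62, 0x63 };

    printf("good: %d\n", decode_alpn(good, sizeof(good)));
    printf("bad:  %d\n", decode_alpn(bad, sizeof(bad)));
    return 0;
}
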
The key lifetime should no longer be adjusted if the key is being
retired earlier, for example because a manual rollover was started.
Previously this was incorrectly seen as a dnssec-policy lifetime
reconfiguration, and the retire/removed times were adjusted again.
This also means we should update the status output: the next scheduled
rollover is now calculated using (retire - active) instead of the key
lifetime.
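
In a minimal sketch (hypothetical names, not the actual keymgr code),
the status output would derive the effective lifetime from the timing
metadata rather than from the policy:

#include <stdint.h>
#include <time.h>

/* Hypothetical key timing metadata. */
typedef struct {
    time_t active; /* when the key became (or becomes) active */
    time_t retire; /* scheduled retire time */
} keytimes_t;

/*
 * Report the next rollover using (retire - active): a manual early
 * retirement is then reflected, where the configured key lifetime
 * would overshoot.
 */
static uint32_t
effective_lifetime(const keytimes_t *kt) {
    return (uint32_t)(kt->retire - kt->active);
}
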
If dnssec-policy is reconfigured and the key lifetime has changed,
update existing keys with the new lifetime and adjust the retire
and removed timing metadata accordingly.
If the key has no lifetime yet, just initialize the lifetime. It
may be that the retire/removed timing metadata has already been set.
Skip keys whose goal is not set to omnipresent; these keys are already
in the process of retiring, or are still unused.
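
A sketch of that reconfiguration pass (types and field names are
illustrative, not the actual kasp/keymgr structures):

#include <stdint.h>
#include <time.h>

typedef enum { GOAL_UNSET, GOAL_OMNIPRESENT, GOAL_HIDDEN } goal_t;

typedef struct {
    goal_t goal;
    uint32_t lifetime; /* 0 = no lifetime set yet */
    time_t active;     /* when the key became active */
    time_t retire;     /* retire timing metadata */
    time_t removed;    /* removed timing metadata */
    uint32_t ttlsig;   /* how long RRSIGs linger after retire */
} kaspkey_t;

static void
reconfigure_lifetime(kaspkey_t *key, uint32_t new_lifetime) {
    if (key->goal != GOAL_OMNIPRESENT) {
        /* Already in the process of retiring, or still unused. */
        return;
    }
    if (key->lifetime == 0) {
        /*
         * No lifetime yet: just initialize it.  The retire/removed
         * metadata may already have been set elsewhere.
         */
        key->lifetime = new_lifetime;
        return;
    }
    if (key->lifetime != new_lifetime) {
        /* Lifetime changed: adjust the timing metadata with it. */
        key->lifetime = new_lifetime;
        key->retire = key->active + new_lifetime;
        key->removed = key->retire + key->ttlsig;
    }
}
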
Instead of outright refusing to add new RR types to the cache, be a bit
smarter (a sketch of the policy follows the list):
1. If the new header type is in our priority list, we always add the
new entry (positive or negative) at the beginning of the list.
2. If the new header is a negative entry and we are over the limit,
we mark it as ancient immediately, so it gets evicted from the cache
as soon as possible.
3. Otherwise add the new header after the priority headers (or at the
head of the list).
4. If we are over the limit, evict the last entry on the normal header
list.
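
A simplified sketch of the whole policy (illustrative names; the real
code lives in the cache database implementation):

#include <stdbool.h>
#include <stddef.h>

typedef struct hdr {
    struct hdr *next;
    bool priority; /* type is on the priority list */
    bool negative; /* negative (e.g. NXRRSET) entry */
    bool ancient;  /* evict as soon as possible */
} hdr_t;

static void
add_hdr(hdr_t **headp, hdr_t *newh, size_t ntypes, size_t maxtypes) {
    bool overlimit = ntypes >= maxtypes;

    if (newh->priority) {
        /* 1. Priority types (positive or negative) go first. */
        newh->next = *headp;
        *headp = newh;
        return;
    }

    if (newh->negative && overlimit) {
        /* 2. Mark the new negative entry ancient so it is
         *    evicted from the cache as soon as possible. */
        newh->ancient = true;
    }

    /* 3. Insert after the priority headers (or at the head). */
    hdr_t **insp = headp;
    while (*insp != NULL && (*insp)->priority) {
        insp = &(*insp)->next;
    }
    newh->next = *insp;
    *insp = newh;

    if (overlimit && !newh->ancient) {
        /* 4. Evict the last entry on the normal header list. */
        hdr_t *last = newh;
        while (last->next != NULL) {
            last = last->next;
        }
        last->ancient = true;
    }
}
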
Add HTTPS, SVCB, SRV, PTR, NAPTR, DNSKEY and TXT records to the list of
priority types that are put at the beginning of the slabheader list
for faster access and to avoid eviction when there are more types than
the max-types-per-name limit.
Add support for using the offload threadpool to perform message
signature verifications. This should allow checking SIG(0)-signed
messages without affecting the worker threads.
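
A minimal sketch of the pattern, assuming the isc_work_enqueue()
offload API; the sigctx_t type and the verify_sig0()/resume_message()
helpers are hypothetical:

#include <isc/loop.h>
#include <isc/result.h>
#include <isc/work.h>
#include <dns/message.h>

/* Hypothetical context carried through the offload round-trip. */
typedef struct sigctx {
    dns_message_t *message;
    isc_result_t result;
} sigctx_t;

static void
verify_offloaded(void *arg) {
    sigctx_t *ctx = arg;
    /* Runs on an offload thread; the expensive cryptography does
     * not block a worker loop.  verify_sig0() is hypothetical. */
    ctx->result = verify_sig0(ctx->message);
}

static void
verify_done(void *arg) {
    sigctx_t *ctx = arg;
    /* Back on the event loop; resume processing the message. */
    resume_message(ctx->message, ctx->result);
}

/* From a worker loop:
 *     isc_work_enqueue(loop, verify_offloaded, verify_done, ctx);
 */
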
This is a tiny helper function which is used only once and can be
replaced with two function calls instead. Removing this makes
supporting asynchronous signature checking less complicated.
By default we log a rekey failure at debug level. We should probably
change the log level to error. We make an exception for when the zone
is not loaded yet: it often happens at startup that a rekey is
run before the zone is fully loaded.
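
In sketch form (the zone_is_loaded() helper is illustrative;
ISC_LOG_ERROR and ISC_LOG_DEBUG() are the standard log levels):

/* Log rekey failures at error level, except before the zone has
 * been fully loaded, when they are expected and stay at debug. */
int level = zone_is_loaded(zone) ? ISC_LOG_ERROR : ISC_LOG_DEBUG(1);
dns_zone_log(zone, level, "rekey failure: %s",
             isc_result_totext(result));
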
when signatures were not added because of too many types already
existing at a node, the diff was not being cleaned up; this led to
a memory leak being reported at shutdown.
Previously, the number of RR types for a single owner name was limited
only by the maximum number of types (64k). As the data structure that
holds the RR types for the database node is just a linked list, and
there are places where we walk through the whole list (again and
again), adding a large number of RR types for a single owner name
would slow down the processing of that name (database node).
Add a configurable limit to cap the number of RR types for a single
owner. This is enforced at the database (rbtdb, qpzone, qpcache) level
and configured with the new max-types-per-name configuration option,
which can be set globally, per-view and per-zone.
Previously, the number of RRs in an RRset was internally unlimited.
As the data structure that holds the RRs is just a linked list, and
there are places where we walk through all of the RRs, adding an
RRset with a huge number of RRs would slow down the processing of that
RRset.
Add a configurable limit to cap the number of RRs in a single RRset.
This is enforced at the database (rbtdb, qpzone, qpcache) level and
configured with the new max-records-per-type configuration option,
which can be set globally, per-view and per-zone.
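
A sketch of the check at the point where records are added (names are
illustrative; the real enforcement lives in the database
implementations, and the assumption here is that a limit of 0 means
unlimited):

#include <stdint.h>

typedef enum { LIMIT_OK, TOO_MANY_RECORDS, TOO_MANY_TYPES } limit_result_t;

static limit_result_t
check_limits(uint32_t nrecords, uint32_t maxrecords, /* max-records-per-type */
             uint32_t ntypes, uint32_t maxtypes)     /* max-types-per-name */
{
    if (maxrecords != 0 && nrecords > maxrecords) {
        return TOO_MANY_RECORDS;
    }
    if (maxtypes != 0 && ntypes > maxtypes) {
        return TOO_MANY_TYPES;
    }
    return LIMIT_OK;
}
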
Replace the ISC_LIST-based deadnodes implementation with isc_queue,
which is wait-free: we don't have to acquire either the tree lock or
the node lock to append nodes to the queue, and the cleaning process
can copy (splice) the list into a local copy without locking it.
Currently, there's little benefit to this, as we need to hold those
locks anyway, but it will be ready when we move to an RCU-based
implementation in the future.
To align the cleaning with our event-loop based model, remove the
hardcoded count for the node locks and use the number of event loops
instead. This way, each event loop can have its own cleaning as part of
the process. Use uniform random numbers to spread the nodes evenly
between the buckets (instead of hashing the domain name).
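
For example (sketch; isc_random_uniform() is BIND's helper for
uniformly distributed random numbers below a limit):

/* Assign each node to a lock/cleaning bucket uniformly at random,
 * one bucket per event loop, instead of hashing the owner name. */
node->locknum = isc_random_uniform(nloops);
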
When the cache's memory context was in an over-memory state at the
time the cache was flushed, LRU cleaning would remove newly entered
data from the new cache straight away, until enough of the old cache
had been destroyed to take it out of the over-memory state. When
flushing the cache, create a new memory context for the new db to
prevent this.
The `axfr_makedb()` function didn't set the loop on the newly created
database, effectively killing delayed cleaning on such a database.
Move the database creation into the dns_zone API, which knows all the
gory details of creating a new database suitable for the zone.
rdataslab.c was full of code like this:
length = raw[0] * 256 + raw[1];
and
count2 = *current2++ * 256;
count2 += *current2++;
Refactor code like this into peek_uint16() and get_uint16() macros
to prevent code repetition and possible mistakes when copying and
pasting the same code over and over.
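
A sketch of what the two macros can look like (using a GCC/Clang
statement expression for get_uint16(), which also advances the
pointer):

#include <stdint.h>

/* Read a 16-bit network-order value without advancing the pointer. */
#define peek_uint16(p) ((uint16_t)((p)[0] << 8 | (p)[1]))

/* Read a 16-bit network-order value and advance the pointer by two. */
#define get_uint16(p)                  \
    ({                                 \
        uint16_t v__ = peek_uint16(p); \
        (p) += 2;                      \
        v__;                           \
    })
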
As a side note, for the entertainment of a careful reader of the commit
messages: the byte manipulation was changed from multiplication and
addition to shift with OR.
The difference in the assembly looks like this:
MUL and ADD:
movzx eax, BYTE PTR [rdi]
movzx edi, BYTE PTR [rdi+1]
sal eax, 8
or edi, eax
SHIFT and OR:
movzx edi, WORD PTR [rdi]
rol di, 8
movzx edi, di
If the result and/or buffer is then used after the macro call,
there are more differences in favor of the SHIFT+OR solution.
- duplicated question
- duplicated answer
- qtype as an answer
- two question types
- question names
- nsec3 bad owner name
- short record
- short question
- mismatching question class
- bad record owner name
- mismatched class in record
- mismatched KEY class
- OPT wrong owner name
- invalid RRSIG "covers" type
- UPDATE malformed delete type
- TSIG wrong class
- TSIG not the last record
there were TSAN error reports because of conflicting uses of
node->dirty and node->nsec, which were in the same qword.
this could be resolved by separating them, but we could also
make them into atomic values and remove some node locking.
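
in sketch form, using C11 atomics:

#include <stdatomic.h>
#include <stdbool.h>

struct node {
    /* previously plain fields packed into the same qword, which
     * TSAN flagged; as separate atomics they can be read and
     * updated without taking the node lock. */
    atomic_bool dirty;
    atomic_uint nsec;
    /* ... */
};
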
The fix_iterator() function had a lot of bugs in it, and while fixing
them, the number of corner cases and the complexity of the function
got out of hand. Rewrite the function with the following modifications:
The function now requires that the iterator points to a leaf node.
This removes the cases we would have to deal with when the iterator
was left on a dead branch.
From the leaf node, pop up the iterator stack until we encounter the
branch whose offset is before the point where the search key differs.
This brings us to the right branch, or to the first unmatched node, in
which case we pop up to the parent branch. From there it is easier to
retrieve the predecessor.
Once we are at the right branch, all we have to do is find the right
twig (which is either the twig for the character at the position where
the search key differs, or the previous twig) and walk down from there
to the greatest leaf or, in case there is no good twig, get the
previous twig from the successor and get the greatest leaf from there.
If there is no previous twig to select in this branch, because every
leaf from this branch node is greater than the one we wanted, we need
to pop up the stack again and resume at the parent branch. This is
achieved by calling prevleaf().
Move fix_iterator() out of the loop and only call it when we have
found a leaf node. This leaf node may be the wrong leaf node, but
fix_iterator() should correct that.
Also, when we don't need to set the iterator, just get any leaf. We
only need a leaf for qpkey_compare(), and the end result does not
depend on whether the comparison was against an ancestor leaf or any
leaf below that point.
When searching for a requested name in dns_qp_lookup(), we may add
a leaf node to the QP chain, then subsequently determine that the
branch we were on was a dead end. When that happens, the chain can be
left holding a pointer to a node that is *not* an ancestor of the
requested name.
We correct for this by unwinding any chain links with an offset
value greater than or equal to that of the node we found.
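
Schematically (illustrative names, not the actual dns_qp internals):

/* Pop chain links that cannot be ancestors of the requested name:
 * anything whose offset is >= that of the node we actually found. */
while (chain->len > 0 &&
       node_offset(chain->stack[chain->len - 1]) >= node_offset(found))
{
    chain->len--;
}
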
The code below the if/else construction could only be run if the 'if'
code path was taken. Move the code into the 'if' code block so that
it is easier to read.
When parsing a zone file, named-checkzone (and others) could loop
infinitely if a directory was $INCLUDEd. Record the error and treat
it as EOF when looking for multiple errors.
This was found by Eric Sesterhenn from X41.
In nibble mode, if the value to be converted was negative, the parser
would loop forever. Process the value as an unsigned int instead
of as an int to prevent sign extension when shifting.
This was found by Eric Sesterhenn from X41.
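
The class of bug in miniature (standalone sketch, not the actual
parser code):

#include <stdio.h>

/*
 * With a signed int, v >>= 4 on a negative value sign-extends on
 * common platforms, so v never reaches zero and the loop never
 * terminates.  Unsigned arithmetic shifts in zeroes and terminates.
 */
static void
print_nibbles(int value) {
    unsigned int v = (unsigned int)value; /* no sign extension */
    do {
        printf("%x ", v & 0xf);
        v >>= 4;
    } while (v != 0);
    putchar('\n');
}

int
main(void) {
    print_nibbles(-1); /* eight 'f' nibbles on a 32-bit int */
    return 0;
}
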
The expireheader() call in the expire_ttl_headers() function
is erroneous, as it passes the 'nlocktypep' and 'tlocktypep'
arguments in the wrong order, which then causes an assertion
failure.
Fix the order of the arguments so it corresponds to the function's
prototype.
in QP keys, characters that are not common in DNS names are
encoded as two-octet sequences. this caused a glitch in iterator
positioning when some lookups failed.
consider the case where we're searching for "\009" (represented
in a QP key as {0x03, 0x0c}) and a branch exists for "\000"
(represented as {0x03, 0x03}). we match on the 0x03, and continue
to search down. at the point where we find we have no match,
we need to pop back up to the branch before the 0x03 - which may
be multiple levels up the stack - before we position the iterator.
there were some structure names used in qpcache.c and qpzone.c that
were too similar to each other and could be confusing when debugging.
they have been changed as follows:
in qpcache.c:
- changed_t was unused, and has been removed
- search_t -> qpc_search_t
- qpdb_rdatasetiter_t -> qpc_rditer_t
- qpdb_dbiterator_t -> qpc_dbiter_t
in qpzone.c:
- qpdb_changed_t -> qpz_changed_t
- qpdb_changedlist_t -> qpz_changedlist_t
- qpdb_version_t -> qpz_version_t
- qpdb_versionlist_t -> qpz_versionlist_t
- qpdb_search_t -> qpz_search_t
- qpdb_load_t -> qpz_load_t