Running jobs that were entered into the isc_quota queue is the
responsibility of the isc_quota_release() function: when releasing a
previously acquired quota, it checks whether the queue is empty, and if
it is not, it runs a job from the queue without touching the
'quota->used' counter. This mechanism can leave a newly queued job
hanging: if the last quota is released between the time the decision is
made to queue the job (because used >= max) and the time it is actually
queued, there are no more quotas left to be released (unless new ones
arrive in the future), and the newly entered job is stuck in the queue.
Fix the issue by adding checks in both isc_quota_release() and
isc_quota_acquire_cb() to make sure that the described hangup does
not happen. Also see code comments.
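Below is a minimal, mutex-based sketch of the invariant the fix
restores. It is illustrative only: the real isc_quota code is not
structured like this, and all names except isc_quota_acquire_cb() and
isc_quota_release() (shortened here to quota_acquire_cb() and
quota_release()) are stand-ins. The point is that the "run directly or
enqueue" decision and the "hand the quota to a queued job or decrement
used" decision are made under the same lock, so a release can never
slip in between them.

    #include <pthread.h>
    #include <stddef.h>

    struct job {
        struct job *next;
        void (*run)(void *arg);
        void *arg;
    };

    struct quota {
        pthread_mutex_t lock;
        size_t used, max;
        struct job *head, *tail; /* FIFO of jobs waiting for a quota */
    };

    /* Acquire a quota or queue the job.  Deciding to queue and actually
     * queueing happen under the same lock, so the last quota cannot be
     * released in between. */
    static void
    quota_acquire_cb(struct quota *q, struct job *j) {
        pthread_mutex_lock(&q->lock);
        if (q->used < q->max) {
            q->used++; /* got a quota, run the job directly */
            pthread_mutex_unlock(&q->lock);
            j->run(j->arg);
            return;
        }
        /* No quota available: enqueue; quota_release() will run it. */
        j->next = NULL;
        if (q->tail != NULL) {
            q->tail->next = j;
        } else {
            q->head = j;
        }
        q->tail = j;
        pthread_mutex_unlock(&q->lock);
    }

    /* Release a quota.  If a job is waiting, the quota is handed over
     * to it without decrementing 'used'; otherwise 'used' is
     * decremented. */
    static void
    quota_release(struct quota *q) {
        pthread_mutex_lock(&q->lock);
        struct job *j = q->head;
        if (j != NULL) {
            q->head = j->next;
            if (q->head == NULL) {
                q->tail = NULL;
            }
            pthread_mutex_unlock(&q->lock);
            j->run(j->arg);
            return;
        }
        q->used--;
        pthread_mutex_unlock(&q->lock);
    }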
Closes #4965
Backport of MR !10082
Merge branch 'backport-4965-isc_quota-bug-fix-9.20' into 'bind-9.20'
See merge request isc-projects/bind9!10139
Fix the wrong memory ordering for 'quota->used', as the relaxed
ordering doesn't ensure that data modifications made by one thread
are visible in other threads.
Add checks in both isc_quota_release() and isc_quota_acquire_cb()
to make sure that the described hangup does not happen. Also see
code comments.
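As an illustration of the ordering change (a generic C11 sketch, not
the actual isc_quota code): a relaxed atomic update only makes the
counter itself atomic, while acquire-release ordering also makes
earlier writes visible to threads that observe the updated counter.

    #include <stdatomic.h>
    #include <stddef.h>

    static atomic_size_t used;

    /* Relaxed: only the counter update itself is atomic; no other
     * memory operation is ordered around it, so data written before
     * the update may not yet be visible to a thread that sees the new
     * value. */
    static size_t
    used_decrement_relaxed(void) {
        return atomic_fetch_sub_explicit(&used, 1, memory_order_relaxed);
    }

    /* Acquire-release: writes made before this update become visible
     * to any thread that observes it with acquire (or stronger)
     * ordering, which is what the queue/counter handshake relies on. */
    static size_t
    used_decrement_acq_rel(void) {
        return atomic_fetch_sub_explicit(&used, 1, memory_order_acq_rel);
    }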
(cherry picked from commit c6529891bb)
A new option 'min-transfer-rate-in <bytes> <minutes>' has been added
to the view and zone configurations. It can abort incoming zone
transfers which run very slowly, for example because of
network-related issues. The default value is set to 10240 bytes in
5 minutes.
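For illustration, a hypothetical zone statement using the documented
default of 10240 bytes in 5 minutes (the zone name, primaries address,
and file name are placeholders):

    zone "example.com" {
        type secondary;
        primaries { 192.0.2.1; };
        file "example.com.db";
        /* abort the incoming transfer if fewer than 10240 bytes
           arrive within a 5-minute interval */
        min-transfer-rate-in 10240 5;
    };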
Closes #3914
Backport of MR !9098
Merge branch 'backport-3914-detect-and-restart-stalled-zone-transfers-9.20' into 'bind-9.20'
See merge request isc-projects/bind9!10137
Expose the average transfer rate (in bytes per second) during the
last full 'min-transfer-rate-in <bytes> <minutes>' interval. If no
such interval has passed yet, the overall average rate is reported
instead.
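A rough sketch of the reported value, using hypothetical field names
(the real transfer statistics structure is different):

    #include <stdint.h>
    #include <time.h>

    struct xferrate {
        uint64_t total_bytes;        /* bytes received so far */
        uint64_t interval_bytes;     /* bytes in the last full interval */
        time_t start;                /* transfer start time */
        unsigned int full_intervals; /* completed rate-check intervals */
        unsigned int interval_secs;  /* '<minutes>' argument, in seconds */
    };

    /* Average rate in bytes per second: over the last full interval if
     * one has completed, otherwise over the whole transfer so far. */
    static uint64_t
    average_rate(const struct xferrate *r, time_t now) {
        if (r->full_intervals > 0) {
            return r->interval_bytes / r->interval_secs;
        }
        time_t elapsed = now - r->start;
        return (elapsed > 0) ? r->total_bytes / (uint64_t)elapsed : 0;
    }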
(cherry picked from commit c701b590e4)
Add a new big zone, run a zone transfer in slow mode, and check
whether the zone transfer gets canceled because 100000 bytes are
not transferred in 5 seconds (as it's running in slow mode).
(cherry picked from commit b9c6aa24f8)
This new option sets a minimum transfer rate for an incoming zone
transfer; a transfer which, for some network-related reason, runs
more slowly than that is aborted.
(cherry picked from commit 91ea156203)
The cache has been updated so that if new data is rejected - for example, because there was already existing data at a higher trust level - then its covering RRSIG will also be rejected.
Closes #5132
Backport of MR !9999
Merge branch 'backport-5132-improve-cd-behavior-9.20' into 'bind-9.20'
See merge request isc-projects/bind9!10134
Add a zone with different NS RRsets in the parent and child,
and test resolver and forwarder behavior with and without +CD.
(cherry picked from commit e4652a0444)
Add a new dns_rdataset_equals() function to check whether two
rdatasets are equal in DNSSEC terms.
When an rdataset being cached is rejected because its trust
level is lower than the existing rdataset, we now check to see
whether the rejected data was identical to the existing data.
This allows us to cache a potentially useful RRSIG when handling
CD=1 queries, while still rejecting RRSIGs that would definitely
have resulted in a validation failure.
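The following is a simplified sketch of that decision, with stand-in
types and an assumed prototype for dns_rdataset_equals(); the
surrounding cache code is elided:

    #include <stdbool.h>

    typedef struct dns_rdataset dns_rdataset_t;

    /* the new helper described above; prototype shown here is assumed */
    bool dns_rdataset_equals(const dns_rdataset_t *a,
                             const dns_rdataset_t *b);

    /* Decide whether the covering RRSIG of an incoming rdataset may be
     * cached when the rdataset itself loses to higher-trust cached
     * data. */
    static bool
    keep_covering_rrsig(const dns_rdataset_t *cached,
                        unsigned int cached_trust,
                        const dns_rdataset_t *incoming,
                        unsigned int incoming_trust) {
        if (incoming_trust >= cached_trust) {
            /* not the rejection case this change is about */
            return true;
        }
        /* If the rejected data is identical to what is already cached,
         * its RRSIG can still be useful for CD=1 queries; if it
         * differs, caching the RRSIG would only guarantee a later
         * validation failure. */
        return dns_rdataset_equals(cached, incoming);
    }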
(cherry picked from commit 6aba56ae89)
The value returned by http_send_outgoing() is not used anywhere, so we
make it not return anything (void). The return value is probably a
leftover from older code.
(cherry picked from commit 2adabe835a)
When handling outgoing data, there were a couple of rarely executed
code paths that did not take into account that the callback MUST be
called. This could lead to memory leaks and consequent shutdown hangs.
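A generic sketch of the invariant, not the actual DoH code (all names
below are stand-ins): every path out of the send routine has to invoke
the completion callback exactly once, including the rarely taken early
exits.

    #include <stdbool.h>

    typedef void (*send_cb_t)(void *arg, int result);

    struct send_req {
        send_cb_t cb;
        void *cbarg;
    };

    /* illustrative helpers, assumed to exist elsewhere */
    bool session_is_closing(void);
    bool submit_to_transport(struct send_req *req);

    static void
    send_outgoing(struct send_req *req) {
        if (session_is_closing()) {
            /* Early exit: the callback still has to fire so the caller
             * can release its state; skipping it leaks memory and can
             * hang shutdown. */
            req->cb(req->cbarg, -1);
            return;
        }
        if (!submit_to_transport(req)) {
            req->cb(req->cbarg, -1);
            return;
        }
        /* On success the transport calls req->cb() when the write
         * completes. */
    }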
(cherry picked from commit 8b8f4d500d)
This commit changes how the number of active HTTP streams is
calculated and allows it to scale with the maximum number of streams
per connection, instead of effectively capping it at
STREAM_CLIENTS_PER_CONN.
The original limit was intended to define the pipelining limit for
TCP/DoT. However, it proved too restrictive for DoH, which works quite
differently and implements pipelining at the protocol level by
multiplexing multiple streams. That makes each stream effectively a
separate connection from the point of view of the rest of the
codebase.
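Schematically (an illustration only; the macro name
STREAM_CLIENTS_PER_CONN comes from the description above, but its
value and the helper are stand-ins):

    #include <stddef.h>

    #define STREAM_CLIENTS_PER_CONN 23 /* illustrative TCP/DoT cap */

    static size_t
    max_active_http_streams(size_t max_concurrent_streams) {
        /* before: effectively capped at the TCP/DoT pipelining value,
         *         i.e. return STREAM_CLIENTS_PER_CONN; */

        /* after: scale with what the HTTP/2 session allows, since each
         * stream behaves like a separate connection to the rest of the
         * codebase */
        return max_concurrent_streams;
    }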
(cherry picked from commit a22bc2d7d4)
Previously, we would limit the amount of incoming data to process
based solely on the presence of incomplete send requests. That worked;
however, it was found to severely degrade performance in certain
cases, as revealed during extended testing.
Now we keep track of how much data is in flight (or ready to be in
flight) and limit the amount of incoming data processed when the
amount of in-flight data surpasses a given threshold, similarly to
what we do in other transports.
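A schematic sketch of that back-pressure rule, using assumed names and
an arbitrary threshold value:

    #include <stdbool.h>
    #include <stddef.h>

    #define SEND_BUFFER_THRESHOLD (64 * 1024) /* hypothetical threshold */

    struct http_session {
        size_t bytes_in_flight; /* queued or submitted, not yet written */
    };

    /* Checked before processing more incoming data: reading is paused
     * while too much outgoing data is pending. */
    static bool
    can_process_incoming(const struct http_session *s) {
        return s->bytes_in_flight < SEND_BUFFER_THRESHOLD;
    }

    /* Called when a write completes: sent data leaves the in-flight
     * window, allowing incoming data to be processed again. */
    static void
    on_write_done(struct http_session *s, size_t nbytes) {
        s->bytes_in_flight -= nbytes;
    }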
(cherry picked from commit 05e8a50818)
In the qpzone implementation of `dns_db_closeversion()`, if there are changed nodes that have no remaining data, delete them.
Closes #5169
Backport of MR !10089
Merge branch 'backport-5169-qpzone-delete-dead-nodes-9.20' into 'bind-9.20'
See merge request isc-projects/bind9!10124
The host command was extended to also query for the HTTPS RR type by default.
Backport of MR !8642
Merge branch 'backport-feature/main/host-rr-https-9.20' into 'bind-9.20'
See merge request isc-projects/bind9!10123
Unless a type is explicitly specified on the host command line,
perform a fourth query, for the HTTPS RR type. This record type is
expected to become more common, and some systems already query it for
every name.
(cherry picked from commit 82069a5700)
Backport of !10017.
Merge branch '5145-artem-fix-wrong-logging-severity-in-do_nsfetch-9.20' into 'bind-9.20'
See merge request isc-projects/bind9!10118
Remove code in the QP zone database to handle failures of `dns_qp_insert()` which can't actually happen.
Closes #5171
Backport of MR !10088
Merge branch 'backport-5171-qpzone-insert-checks-9.20' into 'bind-9.20'
See merge request isc-projects/bind9!10114
In some places there were checks for failures of dns_qp_insert()
after dns_qp_getname(). Such failures could only happen if another
thread inserted a node between the two calls, and that can't happen
because the calls are serialized with dns_qpmulti_write(). We can
simplify the code and just add an INSIST.
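A minimal sketch of the pattern with stand-in types and simplified
prototypes; only the names INSIST, dns_qp_getname(), dns_qp_insert()
and dns_qpmulti_write() correspond to the real code, and the argument
lists here are illustrative:

    #include <assert.h>

    /* stand-in; ISC's real INSIST is always enabled, unlike assert() */
    #define INSIST(cond) assert(cond)

    typedef int isc_result_t;
    #define ISC_R_SUCCESS 0

    typedef struct dns_qp dns_qp_t;
    typedef struct qpznode qpznode_t;

    /* simplified stand-ins for the real prototypes */
    isc_result_t dns_qp_getname(dns_qp_t *qp, const char *name,
                                qpznode_t **nodep);
    isc_result_t dns_qp_insert(dns_qp_t *qp, qpznode_t *node);
    qpznode_t *new_qpznode(const char *name);

    static qpznode_t *
    get_or_create(dns_qp_t *qp, const char *name) {
        qpznode_t *node = NULL;

        if (dns_qp_getname(qp, name, &node) != ISC_R_SUCCESS) {
            node = new_qpznode(name);
            /* The lookup and the insert run inside the same exclusive
             * dns_qpmulti_write() transaction, so no other thread can
             * have inserted the name in between: the insert cannot
             * fail. */
            isc_result_t result = dns_qp_insert(qp, node);
            INSIST(result == ISC_R_SUCCESS);
        }
        return node;
    }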
(cherry picked from commit fffa150df3)
When processing a query with the "checking disabled" bit set (CD=1), `named` stores the unvalidated result in the cache, marked "pending". When the same query is sent with CD=0, the cached data is validated, and either accepted as an answer, or ejected from the cache as invalid. This deferred validation was not attempted for DS and DNSKEY records if they had no cached signatures, causing spurious validation failures. We now complete the deferred validation in this scenario.
Also, if deferred validation fails, we now re-query the data to find out whether the zone has been corrected since the invalid data was cached.
Closes #5066
Backport of MR !10104
Merge branch 'backport-5066-fix-strip-dnssec-rrsigs-9.20' into 'bind-9.20'
See merge request isc-projects/bind9!10105
If a deferred validation on data that was originally queried with
CD=1 fails, we now repeat the query, since the zone data may have
changed in the meantime.
(cherry picked from commit 04b1484ed8)
When a query is made with CD=1, we store the result in the
cache marked pending so that it can be validated later, at
which time it will either be accepted as an answer or removed
from the cache as invalid. Deferred validation was not
attempted when there were no cached RRSIGs for DNSKEY and
DS. We now complete the deferred validation in this scenario.
(cherry picked from commit 8b900d1808)
An incorrect optimization caused "CNAME and other data" errors not to be detected if certain types were at the same node as a CNAME. This has been fixed.
Closes #5150
Backport of MR !10033
Merge branch 'backport-5150-cname-and-other-data-check-not-applied-to-all-types-9.20' into 'bind-9.20'
See merge request isc-projects/bind9!10100
prio_type was being used in the wrong place to optimize
cname_and_other. We have to first exclude the accepted types, and we
also have to determine that the record exists, before we can check
whether we are at a point where a later CNAME cannot appear.
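A schematic sketch of the corrected ordering with stand-in types;
prio_type() and cname_and_other() follow the description above, but
the data structures and helpers are simplified and not the real
qpzone code:

    #include <stdbool.h>

    typedef unsigned int rdatatype_t;
    enum { TYPE_CNAME = 5 };

    struct slabheader {
        rdatatype_t type;
        bool extant; /* active in the version being checked */
        struct slabheader *next;
    };

    /* illustrative helpers, assumed to exist elsewhere */
    bool type_allowed_with_cname(rdatatype_t t); /* RRSIG, KEY, NSEC, ... */
    bool prio_type(rdatatype_t t); /* priority types are stored first */

    /* Returns true when an extant CNAME coexists with extant other
     * data. */
    static bool
    cname_and_other(const struct slabheader *headers) {
        bool cname = false, other = false;

        for (const struct slabheader *h = headers; h != NULL;
             h = h->next) {
            /* 1. types allowed to share a node with CNAME are not
             *    "other data" */
            if (type_allowed_with_cname(h->type)) {
                continue;
            }
            /* 2. only extant records count */
            if (!h->extant) {
                continue;
            }
            if (h->type == TYPE_CNAME) {
                cname = true;
            } else {
                other = true;
            }
            if (cname && other) {
                return true;
            }
            /* 3. only now is the ordering optimization safe: once past
             *    the priority section no CNAME can follow, so if none
             *    was seen the search can stop. */
            if (!prio_type(h->type) && !cname) {
                return false;
            }
        }
        return false;
    }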
(cherry picked from commit 5e49a9e4ae)
This is a complement to the already present "stress" system test.
Backport of MR !9474
Merge branch 'backport-mnowak/generate-tsan-unit-stress-tests-9.20' into 'bind-9.20'
See merge request isc-projects/bind9!10094