Apply the semantic patch to catch all the places where we pass 'char' to
the <ctype.h> family of functions (isalpha() and friends, toupper(),
tolower()).
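As an illustration of the pattern the semantic patch enforces (the helper below is just an example, not code from the tree): a plain 'char' may be negative, so it has to be cast to 'unsigned char' before being handed to the <ctype.h> functions.

```c
#include <ctype.h>

/* Lower-case an ASCII string in place; each 'char' is cast to
 * 'unsigned char' before being passed to the <ctype.h> functions,
 * because passing a possibly-negative plain 'char' is undefined. */
static void
ascii_tolower(char *str) {
	for (char *p = str; *p != '\0'; p++) {
		if (isupper((unsigned char)*p)) {
			*p = (char)tolower((unsigned char)*p);
		}
	}
}
```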
Instead of copying the address back and forth when hashing addr+port, we
can use incremental hashing. Additionally, switch from the 64-bit
isc_hash_function to the 32-bit isc_hash32(), as the resulting value is
32-bit.
Support for Unix Domain Sockets in BIND 9 has been completely disabled
since BIND 9.18, and using them has been a fatal error since then. Clean
up the code and the documentation that suggest that Unix Domain Sockets
are supported.
When iterating the table, we can't add new nodes to the hashmap, because
we can't ensure that the new node is not added before the iterator's
current position. This also applies to rehashing, which might be
triggered by both isc_hashmap_add() and isc_hashmap_delete(), but not by
isc_hashmap_iter_delcurrent_next().
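A hedged sketch of the resulting usage pattern, given an existing hashmap 'map' (the iterator function names follow the ones mentioned above; the exact signatures and the expired() helper are assumptions):

```c
isc_hashmap_iter_t *it = NULL;
void *value = NULL;
isc_result_t result;

isc_hashmap_iter_create(map, &it);
for (result = isc_hashmap_iter_first(it); result == ISC_R_SUCCESS;) {
	isc_hashmap_iter_current(it, &value);
	if (expired(value)) {
		/* Deleting the current node does not rehash the table. */
		result = isc_hashmap_iter_delcurrent_next(it);
	} else {
		result = isc_hashmap_iter_next(it);
	}
	/* Adding nodes here would be unsafe: it could trigger a rehash. */
}
isc_hashmap_iter_destroy(&it);
```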
When isc_hashmap_iter_delcurrent_next() calls hashmap_delete_node(),
nodes from the front of the table could be added to the end of the
table, resulting in them being returned twice. Detect when this is
happening and prevent those nodes from being returned twice by reducing
the effective size of the table by one each time it happens.
Instead of a high number of dispatches (4 * named_g_udpdisp)[1], make the
dispatches bound to threads and make dns_dispatchset_t create a dispatch
for each thread (event loop).
This required a couple of other changes:
1. dns_dispatch_createudp() must be called on a loop, so that isc_tid()
is already initialized - changes to nsupdate and mdig were required.
2. dns_requestmgr had only a single dispatch each for v4 and v6. Instead
of using a single dispatch, use dns_dispatchset_t for each protocol -
this is the same as dns_resolver.
Looking up a unique message ID in dns_dispatch has been using custom
hash tables. Rewrite the custom hash table to use the cds_lfht API,
removing one extra lock in the cold-cache resolver hot path.
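As an illustration of the cds_lfht API (a hedged sketch with an illustrative entry layout, not the actual dispatch code):

```c
#include <stdint.h>
#include <urcu.h>
#include <urcu/rculfhash.h>

struct entry {
	uint16_t id;			/* DNS message ID */
	struct cds_lfht_node node;	/* hash-table linkage */
};

static int
id_match(struct cds_lfht_node *node, const void *key) {
	const struct entry *e = caa_container_of(node, struct entry, node);

	return (e->id == *(const uint16_t *)key);
}

static void
find_and_use(struct cds_lfht *ht, uint16_t id) {
	struct cds_lfht_iter iter;
	struct cds_lfht_node *node;

	/* Lookups need only an RCU read-side critical section, no lock.
	 * Real code would hash the key; the raw ID stands in for the
	 * hash value here. */
	rcu_read_lock();
	cds_lfht_lookup(ht, id, id_match, &id, &iter);
	node = cds_lfht_iter_get_node(&iter);
	if (node != NULL) {
		struct entry *e = caa_container_of(node, struct entry, node);
		/* ... use 'e' while still inside the read section ... */
		(void)e;
	}
	rcu_read_unlock();
}
```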
Refactor isc_hashmap to allow custom matching functions. This allows us
to have better-tailored keys that don't require fixed uint8_t arrays,
but can be composed of more fields from the stored data structure.
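For example, a match callback might compare fields of the stored structure directly against a lookup key (the callback signature and the qentry structure below are assumptions based on the description, not the exact isc_hashmap prototypes):

```c
#include <stdbool.h>

#include <dns/name.h>
#include <dns/types.h>

struct qentry {
	dns_name_t	*name;
	dns_rdataclass_t rdclass;
	dns_rdatatype_t	 type;
	/* ... */
};

static bool
qentry_match(void *node, const void *key) {
	const struct qentry *e = node;
	const struct qentry *k = key;

	return (e->type == k->type && e->rdclass == k->rdclass &&
		dns_name_equal(e->name, k->name));
}
```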
Add support for incremental hashing to the isc_hash unit; both 32-bit
and 64-bit incremental hashing are now supported.
This is the second commit in a series adding incremental hashing to
libisc.
When inserting items into hashtables (hashmaps), we might have a
fragmented key (for example, we might want to hash DNS name + class +
type). We either need to construct a contiguous key in memory and then
hash it en bloc, or use incremental hashing.
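A hedged sketch of hashing such a fragmented key piecewise with the 32-bit incremental API mentioned above (the isc_hash32_init()/_hash()/_finalize() names and signatures are assumptions, as are the 'name', 'rdclass' and 'rdtype' variables):

```c
isc_hash32_t state;
uint32_t hashval;

isc_hash32_init(&state);
isc_hash32_hash(&state, name->ndata, name->length, false); /* case-insensitive */
isc_hash32_hash(&state, &rdclass, sizeof(rdclass), true);
isc_hash32_hash(&state, &rdtype, sizeof(rdtype), true);
hashval = isc_hash32_finalize(&state);
```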
This incremental version of the SipHash 2-4 algorithm is the first
building block.
As SipHash 2-4 is often used in hot paths, I've turned the
implementation into a header-only version in the process.
We now depend on explicitly creating memory arenas and disabling tcache
on those, and these features are not available with jemalloc < 4.
Instead of working around these issues, make jemalloc >= 4.0.0 a hard
requirement by looking for the sdallocx() symbol, which is only
available from that version onward.
jemalloc < 4 was only used by RHEL 7, which has not been supported
since BIND 9.19.
This commit extends the internal memory management middleware code in
BIND so that memory contexts backed by dedicated jemalloc arenas can
be created. A new function, isc_mem_create_arena(), is added for that.
Moreover, it extends the existing code so that specialised memory
contexts can be created easily, should we need that functionality for
other purposes in the future. We achieve that by passing flags to the
underlying jemalloc-related calls; isc_mem_create_arena() can serve as
an example of this.
Having this opens up possibilities for creating memory contexts tuned
for specific needs.
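A minimal usage sketch (the exact signature of isc_mem_create_arena() is assumed from the description above):

```c
isc_mem_t *mctx = NULL;
void *buf = NULL;

isc_mem_create_arena(&mctx);	/* context backed by a dedicated arena */
buf = isc_mem_get(mctx, 4096);	/* allocations come from that arena */
/* ... */
isc_mem_put(mctx, buf, 4096);
isc_mem_destroy(&mctx);
```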
Use the new isc_mem_c*() calloc-like API for allocations that are
zeroed.
In turn, this also fixes a couple of incorrect usages of ISC_MEM_ZERO
for structures that need to be zeroed explicitly.
There are a few places where isc_mem_cput() is used on structures with
a flexible array member (or similar).
Add new isc_mem_cget(), isc_mem_creget(), and isc_mem_cput() macros to
complement isc_mem_callocate() (which works like calloc()).
The overflow checks are implemented as macros in <isc/mem.h> so that
the compiler can see that the element size is constant: it should
always be `sizeof(something)`.
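Illustrative usage (the argument order - context, element count, element size - is an assumption, and 'struct widget' is a made-up structure):

```c
/* Allocate a zeroed array of n widgets, then return it later. */
struct widget *arr = isc_mem_cget(mctx, n, sizeof(arr[0]));
/* ... */
isc_mem_cput(mctx, arr, n, sizeof(arr[0]));
```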
Before calling isc_buffer_putmem(), there is a condition checking that
'buf_size' is greater than 0. At this point 'buf_size' is guaranteed
to be greater than zero, so either the condition is redundant, or
'unprocessed_size' should be checked instead, which seems more logical,
because calling isc_buffer_putmem() with 'unprocessed_size' being zero
is not useful, although harmless.
The isc_dnsstream_assembler_incoming() inline function expects that
when 'buf_size' is zero, then 'buf' must be NULL. The expectation is
not correct, because those values come from the libuv read callback,
and its documentation notes[1] that 'nread' ('buf_size' here) might
be 0, which does not indicate an error or EOF, but is equivalent to
EAGAIN or EWOULDBLOCK under read(2).
Change the isc_dnsstream_assembler_incoming() inline function to
remove the invalid expectation.
[1] https://docs.libuv.org/en/v1.x/stream.html#c.uv_read_cb
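A minimal uv_read_cb sketch showing how the nread == 0 case differs from EOF and errors (the on_read name and the process_data() helper are illustrative only, not the assembler code):

```c
#include <uv.h>

static void
on_read(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf) {
	if (nread == 0) {
		/* Not an error or EOF: equivalent to EAGAIN, nothing to do. */
		return;
	}
	if (nread < 0) {
		/* UV_EOF or a real error: stop reading and tear down. */
		uv_read_stop(stream);
		uv_close((uv_handle_t *)stream, NULL);
		return;
	}
	/* nread > 0: 'buf->base' holds nread bytes to process. */
	process_data(stream->data, buf->base, (size_t)nread);
}
```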
There used to be an extra layer of indirection in the memory functions
for certain dynamic linking scenarios. This involved variant spellings
like isc__mem and isc___mem. The isc___mem variants were removed in
commit 7de846977b, so the token pasting is no longer needed and only
serves to obfuscate.
Add tracing probes to isc_job unit:
* libisc:job_cb_before - before the job callback is called
* libisc:job_cb_after - after the job callback is called
Add tracing probes to ISC's own isc_rwlock implementation to allow
fine-grained tracing. The pthread rwlock already has probes inside
glibc, and it's difficult to add probes to headers included from
other libraries.
This adds support for User Statically Defined Tracing (USDT). On
Linux, this uses the header from SystemTap and the dtrace utility, but
the support is universal as long as dtrace is available.
Also add the required infrastructure for adding probes to the libisc,
libdns and libns libraries, where most of the probes will be.
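For example, the isc_job probes listed above could be placed around the callback invocation roughly like this (a sketch using SystemTap's <sys/sdt.h> macros; the isc_job_t fields are assumptions):

```c
#include <sys/sdt.h>

static void
job_run(isc_job_t *job) {
	DTRACE_PROBE1(libisc, job_cb_before, job);
	job->cb(job->cbarg);
	DTRACE_PROBE1(libisc, job_cb_after, job);
}
```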
Instead of growing and never shrinking the list of inactive handles (to
be reused mostly for UDP connections), limit the maximum number of
inactive handles kept to 64. Instead of caching the inactive handles
for all listening sockets, enable the caching only on UDP listening
sockets. For TCP, the handles were cached for each accepted socket,
thus reusing the handles only for long-standing TCP connections, but
not reusing the handles across different TCP streams.
The SET_IF_NOT_NULL() macro avoids a fair amount of tedious boilerplate,
checking pointer parameters to see if they're non-NULL and updating
them if they are. The macro was already in the dns_zone unit, and this
commit moves it to the <isc/util.h> header.
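A sketch of the macro's shape (see <isc/util.h> for the actual definition; the out-parameter names below are illustrative):

```c
#define SET_IF_NOT_NULL(obj, val)   \
	if ((obj) != NULL) {        \
		*(obj) = (val);     \
	}

/* Typical use in a getter with optional out-parameters: */
SET_IF_NOT_NULL(soacountp, soacount);
SET_IF_NOT_NULL(serialp, serial);
```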
I have included a Coccinelle semantic patch to use SET_IF_NOT_NULL()
where appropriate. The patch needs an #include in `openssl_shim.c`
in order to work.
Because rcu_barrier() needs to be called as many times as the number of
nested call_rcu() calls (call_rcu() calls made from the call_rcu
thread), and there is currently no mechanism to detect whether more
call_rcu callbacks are scheduled, we simply call rcu_barrier() multiple
times. The overhead is negligible, and it prevents rare assertion
failures caused by the check for memory leaks in isc__mem_destroy().
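Sketched, the workaround amounts to the following (the repetition count is illustrative, not the value used in the code):

```c
/* Each rcu_barrier() call flushes one "level" of nested call_rcu()
 * callbacks, so calling it a few times covers the expected nesting. */
for (size_t i = 0; i < 4; i++) {
	rcu_barrier();
}
```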
When a dns_request was canceled via dns_requestmgr_shutdown(), the
cancel event would be propagated on a different loop (loop 0) than the
loop the request was created on. In turn, this would propagate down to
isc_netmgr, where we require all the events to be called from the
matching isc_loop.
Pin the dns_requests to the loops and ensure that all the events are
called on the associated loop. This in turn allows us to remove the
hashed locks on the requests and change the single .requests list to be
a per-loop list for the request accounting.
Additionally, do some extra cleanup, because some race conditions are
now not possible as all events on the dns_request are serialized.
With ThreadSanitizer support added to the Userspace RCU, we no longer
need to wrap the call_rcu and caa_container_of with
__tsan_{acquire,release} hints. Remove the direct calls to
__tsan_{acquire,release} and the isc_urcu_{container,cleanup} macros.
The cds_lfht_for_each_entry and cds_lfht_for_each_entry_duplicate
macros had code that operated on a NULL pointer: at the end of the
list, they called caa_container_of() on the NULL pointer in the
init-clause and iteration-expression. The result wasn't actually used
anywhere, because the cond-expression in the for loop prevented the
loop-statement from executing, but AddressSanitizer noticed the invalid
operation and rightfully complained.
This was reported upstream and fixed there. Pull the upstream fix into
our <isc/urcu.h> header so that our CI checks pass.
isc_stats_create() can no longer return anything other than
ISC_R_SUCCESS. Refactor isc_stats_create() and its variants in libdns,
libns and named to just return void.
to reduce the amount of common code that will need to be shared
between the separated cache and zone database implementations,
clean up unused portions of dns_db.
the methods dns_db_dump(), dns_db_isdnssec(), dns_db_printnode(),
dns_db_resigned(), dns_db_expirenode() and dns_db_overmem() were
either never called or were only implemented as nonoperational stub
functions: they have now been removed.
dns_db_nodefullname() was only used in one place, which turned out
to be unnecessary, so it has also been removed.
dns_db_ispersistent() and dns_db_transfernode() are used, but only
the default implementation in db.c was ever actually called. since
they were never overridden by database methods, there's no need to
retain methods for them.
in rbtdb.c, beginload() and endload() methods are no longer defined for
the cache database, because that was never used (except in a few unit
tests which can easily be modified to use the zone implementation
instead). issecure() is also no longer defined for the cache database,
as the cache is always insecure and the default implementation of
dns_db_issecure() returns false.
for similar reasons, hashsize() is no longer defined for zone databases.
implementation functions that are shared between zone and cache are now
prepended with 'dns__rbtdb_' so they can become nonstatic.
serve_stale_ttl is now a common member of dns_db.