Previously, isc_mem_debugging was a single global variable that
affected the behavior of every memory context whenever it was changed,
which could happen after some allocations had already been made.
Change the memory debugging options to be local to the memory context
and immutable, so all allocations within the same memory context are
treated the same.
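A minimal sketch of the idea, assuming the per-context flag is a
snapshot of the old global taken at creation time (the structure and
helper here are illustrative, not the actual API):

```c
#include <stdlib.h>

extern unsigned int isc_mem_debugging; /* the old global variable */

typedef struct isc_mem {
        unsigned int debugging; /* captured once, never changed */
} isc_mem_t;

static void
mem_create(isc_mem_t **mctxp) {
        isc_mem_t *mctx = malloc(sizeof(*mctx));
        /* snapshot the global at creation; later changes to the global
         * no longer affect allocations within this context */
        mctx->debugging = isc_mem_debugging;
        *mctxp = mctx;
}
```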
It's a style violation for REQUIRE or INSIST to contain code that
must run for the server to work. This was being done with some
atomic_compare_exchange calls; these have been cleaned up. Uses
of atomic_compare_exchange in assertions have been replaced with
a new macro, atomic_compare_exchange_enforced, which uses RUNTIME_CHECK
to ensure that the exchange was successful.
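A minimal sketch of such a macro, assuming RUNTIME_CHECK() aborts the
process when its condition is false (the actual definition may differ):

```c
#include <stdatomic.h>

/* the exchange is always evaluated, so it still runs in builds
 * where plain assertions would be compiled out */
#define atomic_compare_exchange_enforced(obj, expected, desired) \
        RUNTIME_CHECK(atomic_compare_exchange_strong(obj, expected, desired))
```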
Historically, the inline keyword was a strong suggestion to the compiler
that it should inline the function marked inline. As compilers became
better at optimising, this functionality has receded, and using inline
as a suggestion to inline a function is obsolete. The compiler will
happily ignore it and inline something else entirely if it finds that's
a better optimisation.
Therefore, remove all occurrences of the inline keyword from static
functions inside a single compilation unit and leave the decision
whether to inline a function entirely to the compiler.
NOTE: We keep using the inline keyword where the purpose is to change
the linkage behaviour.
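For illustration, the change is of this shape (the function is a
made-up example):

```c
/* before: the hint is redundant for a file-local function
 *
 *   static inline int add_one(int x) { return (x + 1); }
 *
 * after: the compiler decides on its own */
static int
add_one(int x) {
        return (x + 1);
}
```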
Previously, the unreachable code paths would have to be tagged with:
INSIST(0);
ISC_UNREACHABLE();
There were also older parts of the code that used a comment annotation:
/* NOTREACHED */
Unify the handling of unreachable code paths to just use:
UNREACHABLE();
The UNREACHABLE() macro now asserts when reached and also uses
__builtin_unreachable() when that builtin is available in the compiler.
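A sketch of what such a macro could look like; the feature-test macro
and the helper name here are illustrative:

```c
#ifdef HAVE_BUILTIN_UNREACHABLE
#define BUILTIN_UNREACHABLE() __builtin_unreachable()
#else
#define BUILTIN_UNREACHABLE()
#endif

#define UNREACHABLE()                          \
        do {                                   \
                INSIST(0);                     \
                BUILTIN_UNREACHABLE();         \
        } while (0)
```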
The C17 standard deprecated the ATOMIC_VAR_INIT() macro (see [1]).
Follow suit and remove the ATOMIC_VAR_INIT() usage in favor of simple
assignment of the value, as this is what all supported stdatomic.h
implementations do anyway:
* MacOSX.platform: #define ATOMIC_VAR_INIT(__v) {__v}
* Gcc stdatomic.h: #define ATOMIC_VAR_INIT(VALUE) (VALUE)
1. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1138r0.pdf
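For example, the change amounts to:

```c
#include <stdatomic.h>

/* before (deprecated in C17):
 *
 *   static atomic_uint_fast32_t counter = ATOMIC_VAR_INIT(0);
 */

/* after: a plain initializer, which is what the supported
 * stdatomic.h implementations expand to anyway */
static atomic_uint_fast32_t counter = 0;
```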
The IBM POWER architecture has an L1 cache line size of 128 bytes. Take
advantage of that on that architecture and do not force the more common
value of 64. When it is possible to detect a higher value, use that
value instead. Keep the default at 64.
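A hedged sketch of picking such a compile-time default (the macro name
is illustrative):

```c
#if defined(__powerpc__) || defined(__PPC__)
#define CACHELINE_SIZE 128 /* POWER uses 128-byte L1 cache lines */
#else
#define CACHELINE_SIZE 64 /* common default elsewhere */
#endif
```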
This commit converts the license handling to adhere to the REUSE
specification. It specifically:
1. Adds the used licenses to the LICENSES/ directory
2. Add "isc" template for adding the copyright boilerplate
3. Changes all source files to include copyright and SPDX license
headers; this includes all the C sources, documentation, zone files,
and configuration files. There are notes in the doc/dev/copyrights
file on how to add correct headers to new files.
4. Handles the rest that can't be modified via the .reuse/dep5 file.
The binary (or otherwise unmodifiable) files could have licenses placed
next to them in <foo>.license files, but this would lead to a cluttered
repository, and most of the files handled in the .reuse/dep5 file are
system test files.
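For example, a source file header following the REUSE specification
might look like this (BIND 9 is licensed under the MPL 2.0):

```c
/*
 * Copyright (C) Internet Systems Consortium, Inc. ("ISC")
 *
 * SPDX-License-Identifier: MPL-2.0
 */
```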
There are some situations where having aligned allocations would be
useful, so we don't have to play tricks with padding the data to the
cacheline sizes.
Add isc_mem_{get,put,reget,putanddetach}_aligned() functions that take
the alignment and size as the last arguments, mimicking the POSIX
posix_memalign() function. On systems with jemalloc they are backed by
MALLOCX_ALIGN() (see its documentation for more details); on systems
without jemalloc, those functions are the same as the non-aligned
variants.
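A hedged usage sketch; the argument order is assumed from the
description above, not verified:

```c
static void
example(isc_mem_t *mctx) {
        /* allocate 1024 bytes aligned to a 64-byte boundary */
        void *ptr = isc_mem_get_aligned(mctx, 1024, 64);

        /* the matching put must pass the same size and alignment */
        isc_mem_put_aligned(mctx, ptr, 1024, 64);
}
```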
Previously, the whole isc_mempool_get() and isc_mempool_put() functions
would be replaced by simpler versions when run with AddressSanitizer.
Change the code to limit the fillcount to 1 and the freemax to 0. This
change makes isc_mempool_get() always allocate and use a single new
item, and isc_mempool_put() always return the item to the allocator.
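A minimal sketch of the tuning described above, using the mempool
setters and a common ASAN detection macro (the wrapper function is
illustrative):

```c
static void
mempool_asan_tune(isc_mempool_t *mp) {
#if defined(__SANITIZE_ADDRESS__)
        /* every get allocates a fresh item and every put frees it,
         * so AddressSanitizer can track each item individually */
        isc_mempool_setfillcount(mp, 1);
        isc_mempool_setfreemax(mp, 0);
#else
        (void)mp;
#endif
}
```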
The __builtin_expect() builtin can be used to provide the compiler with
branch prediction information. The GCC manual says[1] on the subject:

    In general, you should prefer to use actual profile feedback for
    this (-fprofile-arcs), as programmers are notoriously bad at
    predicting how their programs actually perform.

Stop using __builtin_expect() and the ISC_LIKELY() and ISC_UNLIKELY()
macros to provide branch prediction information, as performance testing
shows that named performs better when __builtin_expect() is not being
used.
1. https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html#index-_005f_005fbuiltin_005fexpect
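The change is mechanical; for illustration:

```c
/* before */
if (ISC_UNLIKELY(ptr == NULL)) {
        return (ISC_R_NOMEMORY);
}

/* after */
if (ptr == NULL) {
        return (ISC_R_NOMEMORY);
}
```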
It was discovered that named could crash due to a segmentation fault
when jemalloc was in use and memory allocation failed. This was not
intended to happen, as jemalloc's "xmalloc" option was set to "true" in
the "malloc_conf" configuration variable. However, that variable was
only set after jemalloc was already done parsing it, which effectively
caused setting that variable to have no effect.
While investigating this issue, it was also discovered that enabling the
"xmalloc" option makes jemalloc use a slow processing path, decreasing
its performance by about 25%. [1]
Additionally, further testing (carried out after fixing the way
"malloc_conf" was set) revealed that the non-default configuration
options do not have any measurable effect on either authoritative or
recursive DNS server performance.
Replace code setting various jemalloc options to non-default values with
assertion checks of mallocx()/rallocx() return values.
[1] https://github.com/jemalloc/jemalloc/pull/523
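A minimal sketch of the replacement pattern, using the jemalloc
non-standard API and BIND's INSIST() assertion:

```c
#include <jemalloc/jemalloc.h>

static void *
checked_alloc(size_t size) {
        void *ptr = mallocx(size, 0);
        /* assert explicitly instead of relying on the xmalloc option */
        INSIST(ptr != NULL);
        return (ptr);
}
```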
Previously, zero-sized allocations would return a NULL pointer and the
caller had to make sure not to dereference such a pointer. The C
standard defines zero-sized calls to malloc() as implementation
specific, and jemalloc's mallocx() with zero size would be undefined
behaviour. This complicated the code, as it had to handle such cases in
a special manner in all allocator and deallocator functions.
For realloc(), the situation is even more complicated. In the C
standards up to C11, the behavior was implementation defined, and some
implementations would free the original pointer while others would not.
C17 (via DR 400) deprecated such usage, and since C23 the behaviour is
undefined.
This commit changes the helper mem_get(), mem_put() and mem_realloc()
functions to grow zero-sized allocations from 0 to sizeof(void *).
This way we get predictable behaviour: all allocations will always
return a valid pointer.
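A minimal sketch of the adjustment, assuming a helper shaped like the
mem_get() mentioned above (the body is illustrative):

```c
#include <jemalloc/jemalloc.h>

static void *
mem_get(isc_mem_t *ctx, size_t size) {
        UNUSED(ctx); /* the real helper uses the context */
        if (size == 0) {
                /* grow zero-sized requests so the underlying allocator
                 * always returns a valid, unique pointer */
                size = sizeof(void *);
        }
        return (mallocx(size, 0));
}
```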
The isc_mem_get() and isc_mem_put() functions leave the memory
allocation size tracking to the users of the API, while
isc_mem_allocate() and isc_mem_free() track the sizes internally.
This allowed isc_mem_reallocate() to manipulate the memory allocations
made by the latter set, but not the former set of functions.
This commit introduces an isc_mem_reget(ctx, old_ptr, old_size,
new_size) function that operates on memory allocations with external
size tracking, completing the API.
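A usage sketch following the signature given above:

```c
static void
example(isc_mem_t *mctx) {
        size_t oldsize = 64, newsize = 128;
        char *buf = isc_mem_get(mctx, oldsize);

        /* the caller tracks the sizes, so both must be passed in */
        buf = isc_mem_reget(mctx, buf, oldsize, newsize);

        isc_mem_put(mctx, buf, newsize);
}
```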
On an isc_mem water change, the old water_t structure could be used
after free. Instead of introducing reference counting on the hot path,
we introduce an additional constraint on isc_mem_setwater(): once it is
set for the first time, any additional calls have to be made with the
same water and water_arg arguments.
This commit refactors the water mechanism in the isc_mem API to use a
single pointer to a water_t structure that can be swapped with an
atomic_exchange operation, instead of having four different
values (water, water_arg, hi_water, lo_water) in the flat namespace.
This reduces the need for locking and prevents a race where water and
water_arg could become desynchronized.
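A minimal sketch of the pointer-swap idea; the structure layout is
inferred from the four values listed above:

```c
#include <stdatomic.h>
#include <stdlib.h>

typedef struct water {
        isc_mem_water_t water; /* the callback */
        void *water_arg;
        size_t hi_water;
        size_t lo_water;
} water_t;

/* one atomic pointer replaces four separately guarded fields */
static _Atomic(water_t *) current_water;

static void
set_water(water_t *new_water) {
        water_t *old = atomic_exchange(&current_water, new_water);
        /* freeing is safe only under the same-arguments constraint
         * described in the previous commit message */
        free(old);
}
```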
Calls to the jemalloc extended API with size == 0 end up in undefined
behaviour. This commit makes isc_mem_get() and friends more
POSIX-aligned:

    If size is 0, either a null pointer or a unique pointer that can be
    successfully passed to free() shall be returned.

We picked the easier route (which had already been supported in the old
code) and return NULL on calls to the API where size == 0.
This commit adds support for systems where the jemalloc library is not
available as a package; here's a quick summary:
* On Linux - jemalloc is usually available as a package; if
  configured --without-jemalloc, a shim is used around
  malloc(), free(), realloc() and malloc_usable_size()
* On macOS - jemalloc is available from homebrew or macports; if
  configured --without-jemalloc, a shim is used around
  malloc(), free(), realloc() and malloc_size()
* On FreeBSD - jemalloc is *the* system allocator; we just need
  to check for the <malloc_np.h> header to get access to the
  non-standard API
* On NetBSD - jemalloc is *the* system allocator; we just need to
  check for the <jemalloc/jemalloc.h> header to get access to the
  non-standard API
* On a system hostile to users and developers (read: OpenBSD) - the
  jemalloc API is emulated using a ((size_t *)ptr)[-1] field to hold
  the size information. The OpenBSD developers care only for
  themselves, so why should we care about speed on OpenBSD?
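A hedged sketch of such a size-prefix emulation (function names are
illustrative; error handling and alignment caveats are ignored for
brevity):

```c
#include <stdlib.h>

/* allocate with a hidden size header in front of the user pointer */
static void *
shim_malloc(size_t size) {
        size_t *ptr = malloc(size + sizeof(size_t));
        ptr[0] = size;
        return (&ptr[1]);
}

/* emulate jemalloc's sallocx(): report the stored size */
static size_t
shim_sallocx(void *ptr) {
        return (((size_t *)ptr)[-1]);
}

static void
shim_free(void *ptr) {
        free(&((size_t *)ptr)[-1]);
}
```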
- isc_mempool_get() can no longer fail; when there are no more objects
  in the pool, more are always allocated. Checking for a NULL return is
  no longer necessary.
- the isc_mempool_setmaxalloc() and isc_mempool_getmaxalloc() functions
are no longer used and have been removed.
Current mempools are hybrid structures serving two purposes:
1. a mempool with a lock is basically a fixed-size allocator with
pre-allocated free items
2. a mempool without a lock is a doubly-linked list of preallocated
items
The first kind of usage can easily be replaced with jemalloc's
small-sized arena objects and thread-local caches.
The second kind cannot, and we need to keep it (in
lib/dns/message.c) for performance reasons.
Previously, we only had the capability to trace the mempool gets and
puts, but for debugging it's sometimes also important to keep track of
how many memory pools get created and destroyed, and where. This commit
adds such a tracking capability.
The jemalloc non-standard API fits nicely with our memory contexts, so
just rewrite the memory context internals to use the non-public API.
There's just one caveat: since we no longer track the size of the
allocation for the isc_mem_allocate()/isc_mem_free() combination, we
need to use sallocx() to get the real allocation size in both the
allocator and the deallocator, because otherwise the sizes would not
match.
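For illustration, the pattern with the jemalloc extended API (the
surrounding statistics accounting is omitted):

```c
#include <jemalloc/jemalloc.h>

static void
example(size_t len) {
        void *ptr = mallocx(len, 0);
        /* account the real usable size on both sides, so the
         * allocator's and the deallocator's numbers always match */
        size_t size = sallocx(ptr, 0);
        sdallocx(ptr, size, 0);
}
```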
The ISC_MEM_DEBUGSIZE and ISC_MEM_DEBUGCTX options did sanity checks on
the matching size and memory context for memory returned to the
allocator. Those are no longer needed when most of the allocator is
replaced with jemalloc.
There's a global variable called `malloc_conf` that can be used to
configure jemalloc behaviour at program startup. We use the following
configuration:
* xmalloc:true - abort-on-out-of-memory enabled.
* background_thread:true - Enable internal background worker threads
to handle purging asynchronously.
* metadata_thp:auto - Do not use transparent huge pages (THP) for
internal metadata initially, but jemalloc may begin to do so when
metadata usage reaches a certain level.
* dirty_decay_ms:30000 - Approximate time in milliseconds from the
creation of a set of unused dirty pages until an equivalent set of
unused dirty pages is purged and/or reused.
* muzzy_decay_ms:30000 - Approximate time in milliseconds from the
creation of a set of unused muzzy pages until an equivalent set of
unused muzzy pages is purged and/or reused.
More information about the specific meaning can be found in the jemalloc
manpage or online at http://jemalloc.net/jemalloc.3.html
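A sketch of how an application provides this setting; `malloc_conf` is
the real jemalloc hook, and the string mirrors the options listed
above:

```c
/* jemalloc reads this application-provided global during startup,
 * before main() runs, so it must be a static initializer */
const char *malloc_conf = "xmalloc:true,background_thread:true,"
                          "metadata_thp:auto,dirty_decay_ms:30000,"
                          "muzzy_decay_ms:30000";
```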
Previously, we only had the capability to trace the memory gets and
puts, but for debugging it's sometimes also important to keep track of
how many memory contexts get created and destroyed, and where. This
commit adds such a tracking capability.
Configuring with --enable-mutex-atomics flagged these incorrectly
initialised variables on systems where pthread_mutex_init() doesn't
just zero out the structure.
There is no possibility for mpctx->items to be NULL at the point where
the code was removed: since we enforce that fillcount > 0, if
mpctx->items == NULL when isc_mempool_get() is called, we will allocate
fillcount more items and add them to the mpctx->items list.
When AddressSanitizer is in use, disable the internal mempool
implementation and redirect isc_mempool_get() to isc_mem_get()
(and similarly for isc_mempool_put()). This is the method recommended
by the AddressSanitizer authors for tracking allocations and
deallocations, instead of custom poison/unpoison code (see
https://github.com/google/sanitizers/wiki/AddressSanitizerManualPoisoning).
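A hedged sketch of the redirect; the pool is assumed to remember its
memory context and item size, and the field names are illustrative:

```c
#if defined(__SANITIZE_ADDRESS__)
/* bypass the pool entirely so ASAN tracks each item individually */
#define isc_mempool_get(mp)      isc_mem_get((mp)->mctx, (mp)->size)
#define isc_mempool_put(mp, ptr) isc_mem_put((mp)->mctx, (ptr), (mp)->size)
#endif
```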
Instead of calling isc_tls_initialize()/isc_tls_destroy() explicitly,
use gcc/clang constructor/destructor attributes on POSIX and DllMain on
Windows to initialize and shut down the OpenSSL library.
This resolves the issue where isc_nm_create()/isc_nm_destroy() could be
called multiple times and would call the OpenSSL library destructors
from isc_nm_destroy().
At the same time, since we have now introduced the ctor/dtor for
libisc, this commit moves the isc_mem API initialization (the list of
the contexts) there and changes isc_mem_checkdestroyed() to schedule
the checking of memory contexts on library unload instead of executing
the code immediately.
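A minimal sketch of the POSIX side; the attribute syntax is the real
gcc/clang mechanism, while the function names are illustrative:

```c
static void __attribute__((constructor))
lib_init(void) {
        /* runs automatically when the library is loaded */
        isc_tls_initialize();
}

static void __attribute__((destructor))
lib_shutdown(void) {
        /* runs automatically when the library is unloaded */
        isc_tls_destroy();
}
```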
The ISC_MEM_CHECKOVERRUN option would add a canary byte at the end of
every allocation and check at free time whether the canary byte had
been changed. The AddressSanitizer and valgrind memory checks surpass
simple checks like this, so there's no need to keep the code inside the
allocator.
This is yet another step toward unlocking some parts of the memory
contexts. All the regularly updated variables have been turned into
atomic types, so we can later remove the locks when updating various
counters.
Also unlock as much code as possible without breaking anything.
On a 24-core machine, the tests would crash because we would run out of
hazard pointers. We now adjust the number of hazard pointers to be in
the [128, 256] interval based on the number of available cores.
Note: This is just a band-aid and needs a proper fix.
Previously, applications using libisc were able to override the
internal memory methods with their own implementations. This was no
longer possible, but the extra level of indirection had not been
removed. This commit removes the extra level of indirection for the
memory methods and the default_memalloc() and default_memfree()
functions.
The internal memory allocator had extra code to keep a list of blocks
for small-size allocations. This would help reduce the interactions
with the system malloc, as the memory would already be allocated from
the system, but there's an extra cost associated with that: all
allocations/deallocations must be locked, effectively eliminating any
optimizations in the system allocator targeted at multi-threaded
applications. While the isc_mem API still uses locks pretty heavily,
this is a first step toward reducing the memory
allocation/deallocation contention.
Created the isc_refcount_decrement_expect macro to conditionally test
the return value and ensure it is in the expected range. Converted
unchecked isc_refcount_decrement() calls to
isc_refcount_decrement_expect(). Converted
INSIST(isc_refcount_decrement()...) to isc_refcount_decrement_expect().
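A hedged sketch of how such a macro could look (the actual definition
may differ):

```c
/* decrement and assert that the previous value satisfies the
 * predicate, e.g.:
 *   isc_refcount_decrement_expect(&obj->references, > 1);
 */
#define isc_refcount_decrement_expect(rc, predicate)              \
        do {                                                      \
                uint_fast32_t _refs = isc_refcount_decrement(rc); \
                INSIST(_refs predicate);                          \
        } while (0)
```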
```
mem.c:add_trace_entry() -> isc_hash_function() -> isc_siphash24()

129     for (; in != end; in += 8) {

  6. byte_swapping: Performing a byte swapping operation on in
     implies that it came from an external source, and is therefore
     tainted.

130     uint64_t m = U8TO64_LE(in);
```
This corrects some style glitches such as:
```
long_function_call(arg, arg2, arg3, arg4, arg5, "str"
"ing");
```
...by adjusting the penalties for breaking strings and call
parameter lists.
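A hedged sketch of the kind of .clang-format knobs involved; the option
names are real clang-format options, but the values here are
illustrative:

```
# make breaking a string literal expensive relative to breaking
# before the first call parameter
PenaltyBreakString: 10000
PenaltyBreakBeforeFirstCallParameter: 100
```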