Add tracing probes to ISC's own isc_rwlock implementation to allow
fine-grained tracing. The pthread rwlock already has probes inside
glibc, and it is difficult to add probes to headers included from
other libraries.
This changes the internal isc_rwlock implementation to the algorithm
described in:
Irina Calciu, Dave Dice, Yossi Lev, Victor Luchangco, Virendra
J. Marathe, and Nir Shavit. 2013. NUMA-aware reader-writer locks.
SIGPLAN Not. 48, 8 (August 2013), 157–166.
DOI:https://doi.org/10.1145/2517327.24425
(The full article is available from:
http://mcg.cs.tau.ac.il/papers/ppopp2013-rwlocks.pdf)
The implementation is based on the Writer-Preference Lock (C-RW-WP)
variant (see section 3.4 of the paper for the rationale).
The implemented algorithm has been modified for simplicity and for usage
patterns in rbtdb.c.
The changes compared to the original algorithm:
* We haven't implemented the cohort locks because that would require
knowledge of NUMA nodes; instead, a simple atomic_bool is used as the
synchronization point for the writer lock.
* The per-thread reader counters are not used - this would require the
internal thread id (isc_tid_v) to always be initialized, even in the
utilities; the change has a slight performance penalty, so we might
revisit it in the future. However, it also saves a lot of memory,
because cache-line-aligned counters were used, so on a 32-core machine
the rwlock would be more than 4096 bytes in size.
* The readers use a writer_barrier that is raised after the read lock
cannot be acquired for a while, to prevent reader starvation.
* Separate ingress and egress reader counters are used to reduce both
inter- and intra-thread contention. (A simplified sketch of the overall
scheme follows below.)
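For orientation, here is a minimal, self-contained sketch of the
writer-preference idea in C11 atomics. It is not the actual isc_rwlock
code: the names (toy_rwlock_t, toy_read_lock(), etc.) are made up, the
barrier and ingress/egress refinements are omitted, and a single atomic
flag stands in for the cohort lock.
```
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
	atomic_bool writer;           /* a writer holds or wants the lock */
	atomic_uint_fast32_t readers; /* number of active readers */
} toy_rwlock_t;

static void
toy_read_lock(toy_rwlock_t *rwl) {
	for (;;) {
		/* Writer preference: back off while a writer is active or waiting. */
		while (atomic_load_explicit(&rwl->writer, memory_order_acquire)) {
			/* spin; a real implementation would pause/yield here */
		}
		atomic_fetch_add_explicit(&rwl->readers, 1, memory_order_acquire);
		if (!atomic_load_explicit(&rwl->writer, memory_order_acquire)) {
			break;
		}
		/* A writer arrived between the check and the increment: undo and retry. */
		atomic_fetch_sub_explicit(&rwl->readers, 1, memory_order_release);
	}
}

static void
toy_read_unlock(toy_rwlock_t *rwl) {
	atomic_fetch_sub_explicit(&rwl->readers, 1, memory_order_release);
}

static void
toy_write_lock(toy_rwlock_t *rwl) {
	bool expected = false;
	/* Claim the writer flag; this also bars new readers from entering. */
	while (!atomic_compare_exchange_strong_explicit(
		       &rwl->writer, &expected, true,
		       memory_order_acq_rel, memory_order_relaxed))
	{
		expected = false;
	}
	/* Wait for the readers already inside to drain. */
	while (atomic_load_explicit(&rwl->readers, memory_order_acquire) > 0) {
		/* spin */
	}
}

static void
toy_write_unlock(toy_rwlock_t *rwl) {
	atomic_store_explicit(&rwl->writer, false, memory_order_release);
}
```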
Unfortunately, C still lacks a standard function for the pause (x86,
sparc) or yield (arm) instructions, for use in spin-lock or CAS loops.
BIND has its own, based on vendor intrinsics or inline asm.
Previously, it was buried in the `isc_rwlock` implementation. This
commit renames `isc_rwlock_pause()` to `isc_pause()` and moves
it into <isc/pause.h>.
This commit also fixes the configure script so that it detects ARM
yield support on systems that identify as `aarch*` instead of `arm*`.
On 64-bit ARM systems we now use the ISB (instruction synchronization
barrier) instruction in preference to yield. The ISB instruction
pauses the CPU for longer (several nanoseconds), which is closer to the
behaviour of the x86 pause instruction. There are more details in a Rust
pull request,
which also refers to MySQL making the same change:
https://github.com/rust-lang/rust/pull/84725
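To illustrate the shape of such a helper, here is a hypothetical sketch
only; the macro name cpu_relax() and the exact preprocessor conditions
are not the actual <isc/pause.h> contents.
```
#if defined(__x86_64__) || defined(__i386__)
#include <immintrin.h>
#define cpu_relax() _mm_pause()
#elif defined(__aarch64__)
/* ISB stalls the pipeline for longer than yield, closer to x86 pause. */
#define cpu_relax() __asm__ __volatile__("isb" ::: "memory")
#elif defined(__arm__)
#define cpu_relax() __asm__ __volatile__("yield" ::: "memory")
#else
#define cpu_relax() /* no-op fallback */
#endif
```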
When using mutrace, the pthread_rwlock-based isc_rwlock implementation
would be tracked entirely inside the rwlock.c unit, losing all useful
information because every rwlock would be attributed to a single place.
Rewrite the pthread_rwlock-based implementation as header-only macros,
so we can use mutrace to properly track rwlock contention without
heavily patching mutrace to understand the libisc synchronization
primitives.
Instead of checking the PTHREAD_RUNTIME_CHECK from the header, move it
to the pthread_rwlock implementation functions. The internal isc_rwlock
actually cannot fail, so the checks in the header were useless anyway.
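A rough sketch of the header-only approach (the macro names and the
check helper below are illustrative, not the actual <isc/rwlock.h>
contents) might look like:
```
#include <pthread.h>
#include <stdlib.h>

typedef pthread_rwlock_t isc_rwlock_t;

/* Stand-in for the libisc runtime check; abort on a nonzero return. */
#define CHECK_ZERO(expr)           \
	do {                       \
		if ((expr) != 0) { \
			abort();   \
		}                  \
	} while (0)

/* The pthread call expands at the call site, so mutrace attributes
 * contention to the real caller instead of a single spot in rwlock.c. */
#define isc_rwlock_rdlock(rwl) CHECK_ZERO(pthread_rwlock_rdlock(rwl))
#define isc_rwlock_wrlock(rwl) CHECK_ZERO(pthread_rwlock_wrlock(rwl))
#define isc_rwlock_unlock(rwl) CHECK_ZERO(pthread_rwlock_unlock(rwl))
```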
Mostly generated automatically with the following semantic patch,
except where coccinelle was confused by an #ifdef in lib/isc/net.c:
@@ expression list args; @@
- UNEXPECTED_ERROR(__FILE__, __LINE__, args)
+ UNEXPECTED_ERROR(args)
@@ expression list args; @@
- FATAL_ERROR(__FILE__, __LINE__, args)
+ FATAL_ERROR(args)
Replace direct uses of implementation-specific rwlock functions in
lib/isc/include/isc/rwlock.h with preprocessor macros that use
ERRNO_CHECK(), in order to augment rwlock-related error messages with
file/line/caller information and the error string corresponding to
errno. Adjust the implementation-specific functions for pthreads-based
rwlocks so that they return any errors encountered to the caller instead
of aborting execution immediately using RUNTIME_CHECK().
To keep code modifications simple, make the non-pthreads-based
implementation-specific rwlock functions always return 0; these
functions continue to handle errors using less verbose run-time
assertions as they do not set errno anyway.
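A simplified sketch of the wrapping pattern described above (the real
ERRNO_CHECK() and the isc_rwlock_*() macro set differ in detail) is:
```
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Report the failing call with file/line/caller context and the errno
 * string, then abort. */
#define ERRNO_CHECK(call)                                             \
	do {                                                          \
		if ((call) != 0) {                                    \
			fprintf(stderr, "%s:%d: %s: %s failed: %s\n", \
				__FILE__, __LINE__, __func__, #call,  \
				strerror(errno));                     \
			abort();                                      \
		}                                                     \
	} while (0)

/* The public macro wraps the implementation-specific function. */
#define isc_rwlock_rdlock(rwl) ERRNO_CHECK(isc___rwlock_rdlock(rwl))
```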
Some POSIX threads implementations (e.g. FreeBSD's libthr) allocate
memory on the heap when pthread_rwlock_init() is called. Every call to
that function must be accompanied by a corresponding call to
pthread_rwlock_destroy() or else the memory allocated for the rwlock
will leak.
jemalloc can be used for detecting memory allocations which are not
released by a process when it exits. Unfortunately, since jemalloc is
also the system allocator on FreeBSD and a special (profiling-enabled)
build of jemalloc is required for memory leak detection, this method
cannot be used for detecting leaked memory allocated by libthr on a
stock FreeBSD installation.
However, libthr's behavior can be emulated on any platform by
implementing alternative versions of libisc functions for creating and
destroying rwlocks that allocate memory using malloc() and release it
using free(). This enables using jemalloc for detecting missing
pthread_rwlock_destroy() calls on any platform on which it works
reliably.
When the newly introduced ISC_TRACK_PTHREADS_OBJECTS preprocessor macro
is set (and --enable-pthread-rwlock is used), allocate isc_rwlock_t
structures on the heap in isc_rwlock_init() and free them in
isc_rwlock_destroy(). Reuse existing functions defined in
lib/isc/rwlock.c for other operations, but rename them first, so that
they contain triple underscores (to indicate that these functions are
implementation-specific, unlike their mutex and condition variable
counterparts, which always use the pthreads implementation). Define the
isc__rwlock_init() macro so that it is a logical counterpart of
isc__mutex_init() and isc__condition_init(); adjust isc___rwlock_init()
accordingly. Remove a redundant function prototype for
isc__rwlock_lock() and rename that (static) function to rwlock_lock() in
order to avoid having to use quadruple underscores.
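A stripped-down sketch of the tracking variant (the real code reuses the
renamed isc___rwlock_*() functions and proper error reporting; the
bodies below are simplified) is:
```
#include <pthread.h>
#include <stdlib.h>

/* With tracking enabled, the public handle is a pointer and the rwlock
 * itself lives on the heap, so a missing destroy call shows up in any
 * allocator-based leak report. */
typedef pthread_rwlock_t *isc_rwlock_t;

void
isc_rwlock_init(isc_rwlock_t *rwlp) {
	*rwlp = malloc(sizeof(**rwlp));
	if (*rwlp == NULL || pthread_rwlock_init(*rwlp, NULL) != 0) {
		abort();
	}
}

void
isc_rwlock_destroy(isc_rwlock_t *rwlp) {
	if (pthread_rwlock_destroy(*rwlp) != 0) {
		abort();
	}
	free(*rwlp);
	*rwlp = NULL;
}
```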
Instead of returning error values from isc_rwlock_*(), isc_mutex_*(),
and isc_condition_*() macros/functions and subsequently carrying out
runtime assertion checks on the return values in the calling code,
trigger assertion failures directly in those macros/functions whenever
any pthread function returns an error, as there is no point in
continuing execution in such a case anyway.
isc_rwlock_init() currently detects pthread_rwlock_init() failures using
a REQUIRE() assertion. Use the ERRNO_CHECK() macro for that purpose
instead, so that read-write lock initialization failures are handled
identically to condition variable (pthread_cond_init()) and mutex
(pthread_mutex_init()) initialization failures.
In a couple of places, we missed replacing INSIST(0) or
ISC_UNREACHABLE() with UNREACHABLE() on some branches. Replace all
ISC_UNREACHABLE() or INSIST(0) calls with UNREACHABLE().
Previously, the unreachable code paths would have to be tagged with:
INSIST(0);
ISC_UNREACHABLE();
There were also older parts of the code that used a comment annotation:
/* NOTREACHED */
Unify the handling of unreachable code paths to just use:
UNREACHABLE();
The UNREACHABLE() macro now asserts when reached and also uses
__builtin_unreachable() when the compiler provides that builtin.
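For illustration, a macro with the behaviour described above could be
shaped roughly like this (the real libisc definition reports the failure
through its own assertion machinery):
```
#include <assert.h>

#if defined(__GNUC__) || defined(__clang__)
/* Assert if ever reached, and tell the optimizer the path is impossible. */
#define UNREACHABLE()                                    \
	do {                                             \
		assert(!"unreachable code was reached"); \
		__builtin_unreachable();                 \
	} while (0)
#else
#define UNREACHABLE() assert(!"unreachable code was reached")
#endif
```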
This commit converts the license handling to adhere to the REUSE
specification. It specifically:
1. Adds the used licenses to the LICENSES/ directory
2. Adds an "isc" template for adding the copyright boilerplate
3. Changes all source files to include the copyright and SPDX license
header; this includes all the C sources, documentation, zone files, and
configuration files. There are notes in the doc/dev/copyrights file on
how to add correct headers to new files.
4. Handles the rest that can't be modified via the .reuse/dep5 file.
The binary (or otherwise unmodifiable) files could have licenses placed
next to them in <foo>.license files, but this would lead to a cluttered
repository, and most of the files handled in the .reuse/dep5 file are
system test files.
The isc/platform.h header was left empty, as its contents had already
been moved either to config.h or to the appropriate headers. This is
just the final cleanup commit.
The value returned by pthread_self(), thrd_current(), or
GetCurrentThreadId() could actually be a pointer, so we should convert
it to uintptr_t rather than unsigned long.
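For example (current_tid() is a hypothetical helper, not libisc code):
```
#include <pthread.h>
#include <stdint.h>

/* pthread_t may be a pointer type on some platforms, so convert through
 * uintptr_t rather than unsigned long when an integer id is wanted. */
static uintptr_t
current_tid(void) {
	return ((uintptr_t)pthread_self());
}
```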
While testing BIND 9 on an arm64 machine with 8+ cores, it was
discovered that the weak variants do in fact fail spuriously; we haven't
observed that on other architectures.
This commit replaces all non-loop usage of atomic_compare_exchange_weak
with atomic_compare_exchange_strong.
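The distinction matters exactly when the result is not retried; a
hypothetical one-shot use looks like this:
```
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int state;

/* Not inside a retry loop: a spurious failure of the weak variant would
 * be misreported as "already claimed", so the strong variant is needed. */
static bool
try_claim(void) {
	int expected = 0;
	return (atomic_compare_exchange_strong(&state, &expected, 1));
}
```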
The memory ordering in the rwlock was all wrong; I am copying excerpts
from https://en.cppreference.com/w/c/atomic/memory_order#Relaxed_ordering
for the convenience of the reader:
Relaxed ordering
Atomic operations tagged memory_order_relaxed are not synchronization
operations; they do not impose an order among concurrent memory
accesses. They only guarantee atomicity and modification order
consistency.
Release-Acquire ordering
If an atomic store in thread A is tagged memory_order_release and an
atomic load in thread B from the same variable is tagged
memory_order_acquire, all memory writes (non-atomic and relaxed atomic)
that happened-before the atomic store from the point of view of thread
A, become visible side-effects in thread B. That is, once the atomic
load is completed, thread B is guaranteed to see everything thread A
wrote to memory.
The synchronization is established only between the threads releasing
and acquiring the same atomic variable. Other threads can see different
order of memory accesses than either or both of the synchronized
threads.
This basically means that we had no or only weak synchronization between
threads using the same variables in the rwlock structure. There should
not be a significant performance drop, because the critical sections were
already protected by:
	while (1) {
		if (relaxed_atomic_operation) {
			break;
		}
		LOCK(lock);
		if (!relaxed_atomic_operation) {
			WAIT(sem, lock);
		}
		UNLOCK(lock);
	}
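To make the quoted Release-Acquire rule concrete, here is a minimal,
self-contained illustration (not code from the rwlock itself); with
memory_order_relaxed on both sides, the consumer could observe stale
data:
```
#include <stdatomic.h>
#include <stdbool.h>

static int data;
static atomic_bool ready;

static void
producer(void) {
	data = 42; /* plain, non-atomic write */
	/* Publish: everything written before this store... */
	atomic_store_explicit(&ready, true, memory_order_release);
}

static void
consumer(void) {
	/* ...is visible to whoever observes the store with acquire. */
	while (!atomic_load_explicit(&ready, memory_order_acquire)) {
		/* spin */
	}
	int value = data; /* guaranteed to read 42 */
	(void)value;
}
```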
I would add one more thing to "Don't do your own crypto, folks.":
- Also don't do your own locking, folks.
The change fixes the following build failure on sparc T3 and older CPUs:
```
sparc-unknown-linux-gnu-gcc ... -O2 -mcpu=niagara2 ... -c rwlock.c
{standard input}: Assembler messages:
{standard input}:398: Error: Architecture mismatch on "pause ".
{standard input}:398: (Requires v9e|v9v|v9m|m8; requested architecture is v9b.)
make[1]: *** [Makefile:280: rwlock.o] Error 1
```
The `pause` instruction exists only on `-mcpu=niagara4` (`T4`) and newer.
The change adds configure-time autodetection for `pause` and uses it if available.
config.h.in got a new `HAVE_SPARC_PAUSE` knob. The fallback is a fall-through no-op.
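A sketch of how the knob might be consumed (the helper name spin_hint()
is illustrative, not the actual code):
```
#if defined(__sparc__) && defined(HAVE_SPARC_PAUSE)
#define spin_hint() __asm__ __volatile__("pause" ::: "memory")
#else
#define spin_hint() /* fall-through no-op */
#endif
```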
Build-tested on:
- sparc-unknown-linux-gnu-gcc (no `pause`, build succeeds)
- sparc-unknown-linux-gnu-gcc -mcpu=niagara4 (`pause`, build succeeds)
Reported-by: Rolf Eike Beer
Bug: https://bugs.gentoo.org/691708
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
The ThreadSanitizer found several possible data races in our rwlock
implementation. This commit changes all the unprotected variables to
atomic and also changes the explicit memory ordering functions
(atomic_<foo>_explicit(..., <order>)) to use our convenience macros
(atomic_<foo>_<order>).
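The convenience macros are thin wrappers over the explicit C11 forms; a
hypothetical sketch (the actual libisc names may differ):
```
#include <stdatomic.h>

#define atomic_load_acquire(p) \
	atomic_load_explicit((p), memory_order_acquire)
#define atomic_store_release(p, v) \
	atomic_store_explicit((p), (v), memory_order_release)
#define atomic_fetch_sub_release(p, v) \
	atomic_fetch_sub_explicit((p), (v), memory_order_release)
```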
The stock toolchain available on CentOS 6 for i386 is unable to use the
_mm_pause() intrinsic. Fix by using the "rep; nop" assembly sequence on
that platform instead.
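For reference, "rep; nop" is the byte encoding of the x86 pause
instruction, so an inline-assembly stand-in (the name below is
illustrative) can replace the intrinsic:
```
#define cpu_pause() __asm__ __volatile__("rep; nop" ::: "memory")
```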