mirror of https://gitlab.isc.org/isc-projects/bind9 synced 2025-09-05 00:55:24 +00:00
Commit Graph

12911 Commits

Author SHA1 Message Date
Ondřej Surý
fd975a551d Split reusing the addr/port and load-balancing socket options
The SO_REUSEADDR, SO_REUSEPORT and SO_REUSEPORT_LB options have different
meanings on different platforms. In this commit, we split setting the
reuse of address/port and setting the load-balancing into separate
functions.

The libuv library already has multiplatform support for setting
SO_REUSEADDR and SO_REUSEPORT that allows binding to the same address
and port, but unfortunately, when it is used after the load-balancing
socket option has already been set, it overrides the previous setting,
so we need our own helper function to enable SO_REUSEADDR/SO_REUSEPORT
first and then enable the load-balancing socket option.
2020-10-05 15:18:28 +02:00
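A minimal sketch of the split described in the commit above, assuming a
POSIX platform; set_reuse() and set_reuse_lb() are hypothetical names that
only illustrate the ordering, not the actual isc__nm_socket_*()
implementations.

    #include <sys/socket.h>

    /* Illustrative only: enable address/port reuse, without load-balancing. */
    static int
    set_reuse(int fd) {
        int on = 1;

        if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) == -1) {
            return (-1);
        }
    #if defined(SO_REUSEPORT)
        if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)) == -1) {
            return (-1);
        }
    #endif
        return (0);
    }

    /* Illustrative only: enable load-balancing separately, *after* set_reuse(),
     * so that libuv's own reuse handling cannot clobber it. */
    static int
    set_reuse_lb(int fd) {
        int on = 1;

    #if defined(SO_REUSEPORT_LB)
        /* FreeBSD 12+: dedicated load-balancing option. */
        return (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT_LB, &on, sizeof(on)));
    #elif defined(SO_REUSEPORT)
        /* Linux: SO_REUSEPORT itself load-balances incoming packets. */
        return (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)));
    #else
        (void)fd;
        (void)on;
        return (0);
    #endif
    }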
Ondřej Surý
acb6ad9e3c Use uv_os_sock_t instead of uv_os_fd_t for sockets
On POSIX-based systems, uv_os_sock_t and uv_os_fd_t are both typedefs
for int.  That's not true on Windows, where uv_os_sock_t is SOCKET and
uv_os_fd_t is HANDLE, and they differ in level of indirection.
2020-10-05 15:18:28 +02:00
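For reference, the underlying libuv typedefs (paraphrased from the
uv/unix.h and uv/win.h headers) show why the two types are interchangeable
on POSIX but not on Windows:

    #ifdef _WIN32
    /* Windows (uv/win.h): a SOCKET is not a HANDLE. */
    typedef SOCKET uv_os_sock_t;
    typedef HANDLE uv_os_fd_t;
    #else
    /* POSIX (uv/unix.h): both are plain file descriptors. */
    typedef int uv_os_sock_t;
    typedef int uv_os_fd_t;
    #endif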
Ondřej Surý
9dc01a636b Refactor isc__nm_socket_freebind() to take fd and sa_family as args
isc__nm_socket_freebind() has been refactored to match the other
isc__nm_socket_...() helper functions and to take uv_os_fd_t and
sa_family_t as function arguments.
2020-10-05 15:18:24 +02:00
Ondřej Surý
d685bbc822 Add helper function to enable DF (don't fragment) flag on UDP sockets
This commit adds the isc__nm_socket_dontfrag() helper function.
2020-10-05 14:55:20 +02:00
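A hedged sketch of what a don't-fragment helper can look like; the option
names vary by platform and the exact set handled by
isc__nm_socket_dontfrag() is not spelled out in the commit message, so this
IPv4-only version is an assumption.

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Illustrative only: ask the kernel to set the DF bit on outgoing
     * UDP packets, using whichever option the platform provides. */
    static int
    set_dontfrag(int fd, int family) {
        if (family != AF_INET) {
            return (0);
        }
    #if defined(IP_DONTFRAG)
        /* FreeBSD / Solaris style. */
        int on = 1;
        return (setsockopt(fd, IPPROTO_IP, IP_DONTFRAG, &on, sizeof(on)));
    #elif defined(IP_MTU_DISCOVER) && defined(IP_PMTUDISC_DO)
        /* Linux style: forcing path-MTU discovery implies the DF bit. */
        int action = IP_PMTUDISC_DO;
        return (setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &action,
                           sizeof(action)));
    #else
        (void)fd;
        return (0);
    #endif
    }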
Ondřej Surý
5daaca7146 Add SO_REUSEPORT and SO_INCOMING_CPU helper functions
The setting of SO_REUSE**** and SO_INCOMING_CPU has been moved into
separate helper functions.
2020-10-05 14:54:24 +02:00
Matthijs Mekking
70d1ec432f Use explicit result codes for 'rndc dnssec' cmd
It is better to add new result codes than to overload existing codes.
2020-10-05 10:53:46 +02:00
Matthijs Mekking
edc53fc416 Various rndc dnssec -checkds fixes
While working on 'rndc dnssec -rollover' I noticed the following
(small) issues:

- The key files were updated with hints set to "-when", but that
  should always be "now".
- The kasp system test did not properly update the test number when
  calling 'rndc dnssec -checkds' (and ensuring that works).
- There was a missing ']' in the rndc.c help output.
2020-10-05 10:53:46 +02:00
Matthijs Mekking
fcd34abb9e Test rndc rollover inactive key
When users (accidentally) try to roll an inactive key, throw an error.
2020-10-05 10:53:46 +02:00
Matthijs Mekking
df8276aef0 Add manual key rollover logic
Add to the keymgr a function that will schedule a rollover. This
basically means setting the time when the key needs to retire,
updating the key lifetime, and then updating the state file. The next
time named runs the keymgr, the new lifetime will be taken into
account.
2020-10-05 10:52:19 +02:00
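A minimal sketch of the bookkeeping described above; the struct, field and
function names (keydata, schedule_rollover(), etc.) are hypothetical and
stand in for the real keymgr state handling.

    #include <stdint.h>
    #include <time.h>

    /* Hypothetical key record; the real keymgr keeps this in key state files. */
    struct keydata {
        time_t   active;   /* when the key became (or becomes) active */
        time_t   retire;   /* scheduled retire time */
        uint32_t lifetime; /* intended active period, in seconds */
    };

    /* Hypothetical illustration: a manual rollover sets the retire time and
     * derives the new lifetime from it; the next keymgr run then acts on the
     * rewritten state file. */
    static void
    schedule_rollover(struct keydata *key, time_t when) {
        key->retire = when;
        key->lifetime = (uint32_t)(when - key->active);
        /* ...write the updated metadata back to the key state file... */
    }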
Matthijs Mekking
5614454c3b Change condition for rndc dumpdb -expired
After backporting #1870 to 9.11-S I saw that the condition check there
is different from the one in the main branch. In 9.11-S "stale" can mean
stale and serve-stale, or not active (awaiting cleanup). In 9.16 and
later versions, "stale" is stale and serve-stale, and "ancient" means
not active (awaiting cleanup). An "ancient" RRset is one that is not
active (TTL expired) and is not eligible for serve-stale.

Update the condition for rndc dumpdb -expired to more closely match
what is in 9.11-S.
2020-10-05 10:44:50 +02:00
Matthijs Mekking
7c555254fe Fix kasp min key size bug
The minimum key size for RSASHA1 and RSASHA256 is 512 bits, but due to
a bad assignment it was set to 1024.
2020-10-02 09:20:40 +02:00
Matthijs Mekking
0e207392ec Fix Ed25519 and Ed448 in dnssec-policy keymgr
The kasp code had bad implicit size values for the cryptographic
algorithms Ed25519 and Ed448. When creating keys, they would never
match the dnssec-policy, leading to repeated attempts to create keys.

These algorithms had not previously been added to the system tests,
due to lack of availability on some systems.
2020-10-02 09:20:19 +02:00
Michał Kępień
dbcf683c1a Allow "order none" in "rrset-order" rules
named-checkconf treats the following configuration as valid:

    options {
        rrset-order {
            order none;
        };
    };

Yet, the above configuration causes named to crash on startup with:

    order.c:74: REQUIRE(mode == 0x00000800 || mode == 0x00000400 || mode == 0x00800000) failed, back trace

Add DNS_RDATASETATTR_NONE to the list of RRset ordering modes accepted
by dns_order_add() to allow "order none" to be used in "rrset-order"
rules.  This both prevents the aforementioned crashes and addresses the
discrepancy between named-checkconf and named.
2020-10-02 08:41:43 +02:00
Mark Andrews
6293682020 Add the ability to select individual tests to rdata_test 2020-10-01 08:21:42 +00:00
Mark Andrews
a9c3374717 Add the ability to print out the list of test names (-l) 2020-10-01 08:21:42 +00:00
Mark Andrews
76837484e7 Add the ability to select tests to run
task_test [-t <test_name>]
2020-10-01 08:21:42 +00:00
Mark Andrews
96febe6b38 Alphabetise tests 2020-10-01 08:21:42 +00:00
Mark Andrews
840cf7adb3 Add missing rwlock calls when accessing keynode.initial and keynode.managed
WARNING: ThreadSanitizer: data race
    Write of size 1 at 0x000000000001 by thread T1 (mutexes: write M1):
    #0 dns_keynode_trust lib/dns/keytable.c:836
    #1 keyfetch_done lib/dns/zone.c:10187
    #2 dispatch lib/isc/task.c:1152
    #3 run lib/isc/task.c:1344
    #4 <null> <null>

    Previous read of size 1 at 0x000000000001 by thread T2 (mutexes: read M2):
    #0 keynode_dslist_totext lib/dns/keytable.c:682
    #1 dns_keytable_totext lib/dns/keytable.c:732
    #2 named_server_dumpsecroots bin/named/server.c:11357
    #3 named_control_docommand bin/named/control.c:264
    #4 control_command bin/named/controlconf.c:390
    #5 dispatch lib/isc/task.c:1152
    #6 run lib/isc/task.c:1344
    #7 <null> <null>

    Location is heap block of size 241 at 0x000000000010 allocated by thread T3:
    #0 malloc <null>
    #1 default_memalloc lib/isc/mem.c:713
    #2 mem_get lib/isc/mem.c:622
    #3 mem_allocateunlocked lib/isc/mem.c:1268
    #4 isc___mem_allocate lib/isc/mem.c:1288
    #5 isc__mem_allocate lib/isc/mem.c:2453
    #6 isc___mem_get lib/isc/mem.c:1037
    #7 isc__mem_get lib/isc/mem.c:2432
    #8 new_keynode lib/dns/keytable.c:346
    #9 insert lib/dns/keytable.c:393
    #10 dns_keytable_add lib/dns/keytable.c:421
    #11 process_key bin/named/server.c:955
    #12 load_view_keys bin/named/server.c:983
    #13 configure_view_dnsseckeys bin/named/server.c:1140
    #14 configure_view bin/named/server.c:5371
    #15 load_configuration bin/named/server.c:9110
    #16 loadconfig bin/named/server.c:10310
    #17 named_server_reconfigcommand bin/named/server.c:10693
    #18 named_control_docommand bin/named/control.c:250
    #19 control_command bin/named/controlconf.c:390
    #20 dispatch lib/isc/task.c:1152
    #21 run lib/isc/task.c:1344
    #22 <null> <null>

    Mutex M1 is already destroyed.

    Mutex M2 is already destroyed.

    Thread T1 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create pthreads/thread.c:73
    #2 isc_taskmgr_create lib/isc/task.c:1434
    #3 create_managers bin/named/main.c:915
    #4 setup bin/named/main.c:1223
    #5 main bin/named/main.c:1523

    Thread T2 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create pthreads/thread.c:73
    #2 isc_taskmgr_create lib/isc/task.c:1434
    #3 create_managers bin/named/main.c:915
    #4 setup bin/named/main.c:1223
    #5 main bin/named/main.c:1523

    Thread T3 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create pthreads/thread.c:73
    #2 isc_taskmgr_create lib/isc/task.c:1434
    #3 create_managers bin/named/main.c:915
    #4 setup bin/named/main.c:1223
    #5 main bin/named/main.c:1523

    SUMMARY: ThreadSanitizer: data race lib/dns/keytable.c:836 in dns_keynode_trust
2020-10-01 17:26:09 +10:00
Mark Andrews
519b070618 Add ISO time stamps to the microsecond 2020-09-30 23:56:18 +10:00
Mark Andrews
5b5f1ba0b2 Check that sig0 name is the root. 2020-09-30 13:24:29 +00:00
Mark Andrews
450fab92b1 Always clean sig0name in msgresetsigs() and dns_message_renderreset()
The fuzzing harness operates on dns_message_t in non-standard ways
and if 'sig0name' is non-NULL when msgresetsigs() and
dns_message_renderreset() are called it should be cleaned up.
2020-09-30 13:24:29 +00:00
Ondřej Surý
33eefe9f85 dns_message_create() cannot fail, change the return to void
The dns_message_create() function cannot soft fail (as all memory
allocations either succeed or cause an abort), so we change the function
to return void and clean up the calls.
2020-09-29 08:22:08 +02:00
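The practical effect on callers, shown as a hedged before/after fragment;
an existing isc_mem_t *mctx and a cleanup label are assumed, and the intent
flag and three-argument calling convention are as commonly used in the
tree. The two variants are of course mutually exclusive in real code.

    /* Before: the result had to be checked even though it could not fail. */
    isc_result_t result;
    dns_message_t *msg = NULL;

    result = dns_message_create(mctx, DNS_MESSAGE_INTENTPARSE, &msg);
    if (result != ISC_R_SUCCESS) {
        goto cleanup;        /* dead error path in practice */
    }

    /* After: the call returns void, so the check and the dead path go away. */
    dns_message_create(mctx, DNS_MESSAGE_INTENTPARSE, &msg);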
Diego Fronza
cde6227a68 Properly handling dns_message_t shared references
This commit fixes the problems that arose when moving the dns_message_t
object from fetchctx_t to the query structure.

Since the lifetime of query objects differs from that of a fetchctx, and
the dns_message_t object held by the query may still be in use by an
external module (e.g. the validator) even after the query has been
destroyed, proper handling of the references to the message was added in
this commit to avoid accessing an already-destroyed object.

Specifically, in rctx_done(), a reference to the message is attached at
the beginning of the function and detached at the end, since a possible call
to fctx_cancelquery() would release the dns_message_t object, and in the next
lines of code a call to rctx_nextserver() or rctx_chaseds() would require
a valid pointer to the same object.

In valcreate() a new reference is attached to the message object; this
ensures that if the corresponding query object is destroyed before the
validator attempts to access it, no invalid pointer access occurs.

In validated() we have to attach a new reference to the message, since
we destroy the validator object at the beginning of the function,
and we need access to the message in the next lines of the same function.

The rctx_nextserver() and rctx_chaseds() functions were adapted to receive
a new parameter of type dns_message_t*, so that they are handed a valid
reference to a dns_message_t; using the response context respctx_t to
access the message through rctx->query->rmessage could yield an already
released reference if the query had been canceled.
2020-09-29 08:22:08 +02:00
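A hedged sketch of the attach-before / detach-after shape described for
rctx_done(); the resolver types and helper prototypes below are simplified
stand-ins, not the real declarations.

    /* Simplified stand-ins for the types and calls involved; only the
     * reference-counting shape matters here. */
    typedef struct dns_message dns_message_t;
    void dns_message_attach(dns_message_t *source, dns_message_t **targetp);
    void dns_message_detach(dns_message_t **messagep);

    typedef struct resquery { dns_message_t *rmessage; } resquery_t;
    typedef struct respctx { resquery_t *query; } respctx_t;

    void fctx_cancelquery(resquery_t **queryp);     /* may release *queryp */
    void rctx_nextserver(respctx_t *rctx, dns_message_t *message);

    /* Take our own reference to the response message up front, because
     * cancelling the query may drop the query-owned reference before
     * rctx_nextserver() (or rctx_chaseds()) gets to use the message. */
    static void
    rctx_done_sketch(respctx_t *rctx) {
        dns_message_t *message = NULL;

        dns_message_attach(rctx->query->rmessage, &message);
        fctx_cancelquery(&rctx->query);
        rctx_nextserver(rctx, message);
        dns_message_detach(&message);
    }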
Diego Fronza
02f9e125c1 Fix invalid dns message state in resolver's logic
The assertion failure REQUIRE(msg->state == DNS_SECTION_ANY),
caused by calling dns_message_setclass() within resquery_response()
in resolver.c, was happening due to incorrect management of the
dns_message_t objects used to process responses to the queries issued
by the resolver.

Before the fix, a resolver's fetch context (fetchctx_t) would hold
a pointer to the message; this same reference would then be used across
all the attempts to resolve the query (trying the next server, etc.).
For this to work, the message object would have its state reset between
iterations, marking it as ready for new processing.

The problem arose in a scenario with many different forwarders
configured: management of the dns_message_t object's state lacked
proper synchronization, which led to an invalid dns_message_t state in
resquery_response().

Instead of adding unnecessarily complex code to synchronize the object,
the dns_message_t object was moved from the fetchctx_t structure to the
query structure, where it better belongs, since each query will produce
a response; this way, whenever a new query is created, an associated
dns_message_t is also created.

This commit deals mainly with moving the dns_message_t object from fetchctx_t
to the query structure.
2020-09-29 08:22:08 +02:00
Diego Fronza
12d6d13100 Refactored dns_message_t to use attach/detach semantics
This commit will be used as a base for the next code updates, in order
to have better control over the lifetime of dns_message_t objects.
2020-09-29 08:22:08 +02:00
Mark Andrews
6727e23a47 Update comments to have binary notation 2020-09-29 10:36:07 +10:00
Ondřej Surý
e5ab137ba3 Refactor the pausing/unpausing and finishing the nm_thread
isc_nm_pause(), isc_nm_resume() and the finishing of nm_thread() from
nm_destroy() have been refactored so that they all use netievents instead
of directly touching the worker structure members.  This allows us to
remove most of the locking, as the .paused and .finished members are
always accessed from the matching nm_thread.

When shutting down the nm_thread(), instead of issuing uv_stop(), we
just shut down the .async handle, so all uv_loop_t events are properly
finished first and uv_run() ends gracefully with no outstanding active
handles in the loop.
2020-09-28 11:17:11 +02:00
Michał Kępień
b60d7345ed Fix function overrides in unit tests on macOS
Since Mac OS X 10.1, Mach-O object files are by default built with a
so-called two-level namespace, which prevents the symbol overriding used
by BIND unit tests to replace the implementations of certain library
functions from working as intended.  This feature can be disabled by
passing the "-flat_namespace" flag to the linker.  Fix unit tests
affected by this issue on macOS by adding "-flat_namespace" to LDFLAGS
used for building all object files on that operating system (it is not
enough to only set that flag for the unit test executables).
2020-09-28 09:09:21 +02:00
Michał Kępień
8bdba2edeb Drop function wrapping as it is redundant for now
As currently used in the BIND source tree, the --wrap linker option is
redundant because:

  - static builds are no longer supported,

  - there is no need to wrap around existing functions - what is
    actually required (at least for now) is to replace them altogether
    in unit tests,

  - only functions exposed by shared libraries linked into unit test
    binaries are currently being replaced.

Given the above, providing the alternative implementations of functions
to be overridden in lib/ns/tests/nstest.c is a much simpler alternative
to using the --wrap linker option.  Drop the code detecting support for
the latter from configure.ac, simplify the relevant Makefile.am, and
remove lib/ns/tests/wrap.c, updating lib/ns/tests/nstest.c accordingly
(it is harmless for unit tests which are not calling the overridden
functions).
2020-09-28 09:09:21 +02:00
Evan Hunt
86eddebc83 Purge memory pool upon plugin destruction
The typical sequence of events for AAAA queries which trigger recursion
for an A RRset at the same name is as follows:

 1. Original query context is created.
 2. An AAAA RRset is found in cache.
 3. Client-specific data is allocated from the filter-aaaa memory pool.
 4. Recursion is triggered for an A RRset.
 5. Original query context is torn down.

 6. Recursion for an A RRset completes.
 7. A second query context is created.
 8. Client-specific data is retrieved from the filter-aaaa memory pool.
 9. The response to be sent is processed according to configuration.
10. The response is sent.
11. Client-specific data is returned to the filter-aaaa memory pool.
12. The second query context is torn down.

However, steps 6-12 are not executed if recursion for an A RRset is
canceled.  Thus, if named is in the process of recursing for A RRsets
when a shutdown is requested, the filter-aaaa memory pool will have
outstanding allocations which will never get released.  This in turn
leads to a crash, since a memory pool must not have any outstanding
allocations by the time isc_mempool_destroy() is called.

Fix by creating a stub query context whenever fetch_callback() is called,
including cancellation events. When the qctx is destroyed, it will ensure
the client is detached and the plugin memory is freed.
2020-09-25 13:32:34 -07:00
Matthijs Mekking
d14c2d0d73 rndc dumpdb -expired: print when RRsets expired
When calling 'rndc dumpdb -expired', also print when the RRset expired.
2020-09-23 16:09:26 +02:00
Matthijs Mekking
388cc666e5 Handle ancient rrsets in bind_rdataset
An ancient RRset is one still in the cache but expired, and awaiting
cleanup.
2020-09-23 16:08:29 +02:00
Matthijs Mekking
17d5bd4493 Include expired rdatasets in iteration functions
By changing the check in 'rdatasetiter_first' and 'rdatasetiter_next'
from "now > header->rdh_ttl" to "now - RBTDB_VIRTUAL > header->rdh_ttl",
we include expired rdataset entries so that they can be used for
"rndc dumpdb -expired".
2020-09-23 16:08:29 +02:00
Matthijs Mekking
8beda7d2ea Add -expired flag to rndc dumpdb command
This flag is the same as -cache, but uses a different dump style that
also prints expired entries (awaiting cleanup) from the cache.
2020-09-23 16:08:29 +02:00
Mark Andrews
c37b251eb9 It appears that you can't change what you are polling for while connecting.
WARNING: ThreadSanitizer: data race
    Read of size 8 at 0x000000000001 by thread T1 (mutexes: write M1):
    #0 epoll_ctl <null>
    #1 watch_fd lib/isc/unix/socket.c:704:8
    #2 wakeup_socket lib/isc/unix/socket.c:897:11
    #3 process_ctlfd lib/isc/unix/socket.c:3362:3
    #4 process_fds lib/isc/unix/socket.c:3275:10
    #5 netthread lib/isc/unix/socket.c:3516:10

    Previous write of size 8 at 0x000000000001 by thread T2 (mutexes: write M2):
    #0 connect <null>
    #1 isc_socket_connect lib/isc/unix/socket.c:4737:7
    #2 resquery_send lib/dns/resolver.c:2892:13
    #3 fctx_query lib/dns/resolver.c:2202:12
    #4 fctx_try lib/dns/resolver.c:4300:11
    #5 resquery_connected lib/dns/resolver.c:3130:4
    #6 dispatch lib/isc/task.c:1152:7
    #7 run lib/isc/task.c:1344:2

    Location is file descriptor 513 created by thread T2 at:
    #0 connect <null>
    #1 isc_socket_connect lib/isc/unix/socket.c:4737:7
    #2 resquery_send lib/dns/resolver.c:2892:13
    #3 fctx_query lib/dns/resolver.c:2202:12
    #4 fctx_try lib/dns/resolver.c:4300:11
    #5 resquery_connected lib/dns/resolver.c:3130:4
    #6 dispatch lib/isc/task.c:1152:7
    #7 run lib/isc/task.c:1344:2

    Mutex M1 (0x000000000016) created at:
    #0 pthread_mutex_init <null>
    #1 isc__mutex_init lib/isc/pthreads/mutex.c:288:8
    #2 setup_thread lib/isc/unix/socket.c:3584:3
    #3 isc_socketmgr_create2 lib/isc/unix/socket.c:3825:3
    #4 create_managers bin/named/main.c:932:11
    #5 setup bin/named/main.c:1223:11
    #6 main bin/named/main.c:1523:2

    Mutex M2 is already destroyed.

    Thread T1 'isc-socket-1' (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_socketmgr_create2 lib/isc/unix/socket.c:3826:3
    #3 create_managers bin/named/main.c:932:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    Thread T2 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    SUMMARY: ThreadSanitizer: data race in epoll_ctl
2020-09-23 13:54:06 +10:00
Mark Andrews
a669c919c8 Address lock order inversions.
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock)
    Cycle in lock order graph: M1 (0x000000000000) => M2 (0x000000000000) => M1

    Mutex M2 acquired here while holding mutex M1 in thread T1:
    #0 pthread_mutex_lock <null>
    #1 dns_view_findzonecut lib/dns/view.c:1310:2
    #2 fctx_create lib/dns/resolver.c:5070:13
    #3 dns_resolver_createfetch lib/dns/resolver.c:10813:12
    #4 dns_resolver_prime lib/dns/resolver.c:10442:12
    #5 dns_view_find lib/dns/view.c:1176:4
    #6 dbfind_name lib/dns/adb.c:3833:11
    #7 dns_adb_createfind lib/dns/adb.c:3155:12
    #8 findname lib/dns/resolver.c:3497:11
    #9 fctx_getaddresses lib/dns/resolver.c:3808:3
    #10 fctx_try lib/dns/resolver.c:4197:12
    #11 fctx_start lib/dns/resolver.c:4824:4
    #12 dispatch lib/isc/task.c:1152:7
    #13 run lib/isc/task.c:1344:2

    Mutex M1 previously acquired by the same thread here:
    #0 pthread_mutex_lock <null>
    #1 dns_resolver_createfetch lib/dns/resolver.c:10767:2
    #2 dns_resolver_prime lib/dns/resolver.c:10442:12
    #3 dns_view_find lib/dns/view.c:1176:4
    #4 dbfind_name lib/dns/adb.c:3833:11
    #5 dns_adb_createfind lib/dns/adb.c:3155:12
    #6 findname lib/dns/resolver.c:3497:11
    #7 fctx_getaddresses lib/dns/resolver.c:3808:3
    #8 fctx_try lib/dns/resolver.c:4197:12
    #9 fctx_start lib/dns/resolver.c:4824:4
    #10 dispatch lib/isc/task.c:1152:7
    #11 run lib/isc/task.c:1344:2

    Mutex M1 acquired here while holding mutex M2 in thread T1:
    #0 pthread_mutex_lock <null>
    #1 dns_resolver_shutdown lib/dns/resolver.c:10530:4
    #2 view_flushanddetach lib/dns/view.c:632:4
    #3 dns_view_detach lib/dns/view.c:689:2
    #4 qctx_destroy lib/ns/query.c:5152:2
    #5 fetch_callback lib/ns/query.c:5749:3
    #6 dispatch lib/isc/task.c:1152:7
    #7 run lib/isc/task.c:1344:2

    Mutex M2 previously acquired by the same thread here:
    #0 pthread_mutex_lock <null>
    #1 view_flushanddetach lib/dns/view.c:630:3
    #2 dns_view_detach lib/dns/view.c:689:2
    #3 qctx_destroy lib/ns/query.c:5152:2
    #4 fetch_callback lib/ns/query.c:5749:3
    #5 dispatch lib/isc/task.c:1152:7
    #6 run lib/isc/task.c:1344:2

    Thread T1 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) in pthread_mutex_lock
2020-09-23 01:13:28 +00:00
Mark Andrews
f0d9bf7c30 Clone the saved / query message buffers
The message buffer passed to ns__client_request is only valid for
the life of the ns__client_request call.  Save a copy of it
when we recurse or process an update, as ns__client_request will
return before those operations complete.
2020-09-23 10:37:42 +10:00
Mark Andrews
1090876693 Address lock-order-inversion
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock)
    Cycle in lock order graph: M1 (0x000000000001) => M2 (0x000000000002) => M1

    Mutex M2 acquired here while holding mutex M1 in thread T1:
    #0 pthread_rwlock_wrlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:52:4
    #2 zone_postload lib/dns/zone.c:5101:2
    #3 receive_secure_db lib/dns/zone.c:16206:11
    #4 dispatch lib/isc/task.c:1152:7
    #5 run lib/isc/task.c:1344:2

    Mutex M1 previously acquired by the same thread here:
    #0 pthread_mutex_lock <null>
    #1 receive_secure_db lib/dns/zone.c:16204:2
    #2 dispatch lib/isc/task.c:1152:7
    #3 run lib/isc/task.c:1344:2

    Mutex M1 acquired here while holding mutex M2 in thread T1:
    #0 pthread_mutex_lock <null>
    #1 get_raw_serial lib/dns/zone.c:2518:2
    #2 zone_gotwritehandle lib/dns/zone.c:2559:4
    #3 dispatch lib/isc/task.c:1152:7
    #4 run lib/isc/task.c:1344:2

    Mutex M2 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 zone_gotwritehandle lib/dns/zone.c:2552:2
    #3 dispatch lib/isc/task.c:1152:7
    #4 run lib/isc/task.c:1344:2

    Thread T1 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) in pthread_rwlock_wrlock
2020-09-22 11:43:44 +00:00
Ondřej Surý
d4976e0ebe Add separate prefetch nmhandle to ns_client_t
As query_prefetch() or query_rpzfetch() could be called during a
"regular" fetch, we need to introduce separate storage for attaching
the nmhandle while prefetching the records.  query_prefetch() and
query_rpzfetch() are guarded against re-entrance by the .query.prefetch
member of ns_client_t, so we can reuse the same .prefetchhandle for
both.
2020-09-22 09:56:26 +02:00
Mark Andrews
48d54368d5 Remove the memmove call on the dns_rbtnode_t structure that contains atomics
Calling plain memmove on a structure that contains atomic members
triggers the following TSAN warning (even when we don't actually use the
atomic members in the code):

    WARNING: ThreadSanitizer: data race
      Read of size 8 at 0x000000000001 by thread T1 (mutexes: write M1, write M2):
	#0 memmove <null>
	#1 memmove /usr/include/x86_64-linux-gnu/bits/string_fortified.h:40:10
	#2 deletefromlevel lib/dns/rbt.c:2675:3
	#3 dns_rbt_deletenode lib/dns/rbt.c:2143:2
	#4 delete_node lib/dns/rbtdb.c
	#5 decrement_reference lib/dns/rbtdb.c:2202:4
	#6 prune_tree lib/dns/rbtdb.c:2259:3
	#7 dispatch lib/isc/task.c:1152:7
	#8 run lib/isc/task.c:1344:2

      Previous atomic write of size 8 at 0x000000000001 by thread T2 (mutexes: read M3):
	#0 __tsan_atomic64_fetch_sub <null>
	#1 decrement_reference lib/dns/rbtdb.c:2103:7
	#2 detachnode lib/dns/rbtdb.c:5440:6
	#3 dns_db_detachnode lib/dns/db.c:588:2
	#4 qctx_clean lib/ns/query.c:5104:3
	#5 ns_query_done lib/ns/query.c:10868:2
	#6 query_sign_nodata lib/ns/query.c
	#7 query_nodata lib/ns/query.c:8438:11
	#8 query_gotanswer lib/ns/query.c
	#9 query_lookup lib/ns/query.c:5624:10
	#10 ns__query_start lib/ns/query.c:5500:10
	#11 query_setup lib/ns/query.c:5224:11
	#12 ns_query_start lib/ns/query.c:11357:8
	#13 ns__client_request lib/ns/client.c:2166:3
	#14 udp_recv_cb lib/isc/netmgr/udp.c:414:2
	#15 uv__udp_recvmsg /home/ondrej/Projects/tsan/libuv/src/unix/udp.c
	#16 uv__udp_io /home/ondrej/Projects/tsan/libuv/src/unix/udp.c:180:5
	#17 uv__io_poll /home/ondrej/Projects/tsan/libuv/src/unix/linux-core.c:461:11
	#18 uv_run /home/ondrej/Projects/tsan/libuv/src/unix/core.c:385:5
	#19 nm_thread lib/isc/netmgr/netmgr.c:500:11

      Location is heap block of size 132 at 0x000000000030 allocated by thread T3:
	#0 malloc <null>
	#1 default_memalloc lib/isc/mem.c:713:8
	#2 mem_get lib/isc/mem.c:622:8
	#3 mem_allocateunlocked lib/isc/mem.c:1268:8
	#4 isc___mem_allocate lib/isc/mem.c:1288:7
	#5 isc__mem_allocate lib/isc/mem.c:2453:10
	#6 isc___mem_get lib/isc/mem.c:1037:11
	#7 isc__mem_get lib/isc/mem.c:2432:10
	#8 create_node lib/dns/rbt.c:2239:9
	#9 dns_rbt_addnode lib/dns/rbt.c:1435:12
	#10 findnodeintree lib/dns/rbtdb.c:2895:12
	#11 findnode lib/dns/rbtdb.c:2941:10
	#12 dns_db_findnode lib/dns/db.c:439:11
	#13 diff_apply lib/dns/diff.c:306:5
	#14 dns_diff_apply lib/dns/diff.c:459:10
	#15 do_one_tuple lib/ns/update.c:444:11
	#16 update_one_rr lib/ns/update.c:495:10
	#17 update_action lib/ns/update.c:3123:6
	#18 dispatch lib/isc/task.c:1152:7
	#19 run lib/isc/task.c:1344:2

      Mutex M1 is already destroyed.

      Mutex M2 is already destroyed.

      Mutex M3 is already destroyed.

      Thread T1 (running) created by main thread at:
	#0 pthread_create <null>
	#1 isc_thread_create lib/isc/pthreads/thread.c:73:8
	#2 isc_taskmgr_create lib/isc/task.c:1434:3
	#3 create_managers bin/named/main.c:915:11
	#4 setup bin/named/main.c:1223:11
	#5 main bin/named/main.c:1523:2

      Thread T2 (running) created by main thread at:
	#0 pthread_create <null>
	#1 isc_thread_create lib/isc/pthreads/thread.c:73:8
	#2 isc_nm_start lib/isc/netmgr/netmgr.c:223:3
	#3 create_managers bin/named/main.c:909:15
	#4 setup bin/named/main.c:1223:11
	#5 main bin/named/main.c:1523:2

      Thread T3 (running) created by main thread at:
	#0 pthread_create <null>
	#1 isc_thread_create lib/isc/pthreads/thread.c:73:8
	#2 isc_taskmgr_create lib/isc/task.c:1434:3
	#3 create_managers bin/named/main.c:915:11
	#4 setup bin/named/main.c:1223:11
	#5 main bin/named/main.c:1523:2

    SUMMARY: ThreadSanitizer: data race in memmove
2020-09-21 08:58:20 +00:00
Ondřej Surý
79ca724d46 Handle errors from the sysconf() call in isc_meminfo_totalphys()
isc_meminfo_totalphys() would return an invalid memory size when the
sysconf() call failed, because ((size_t)-1 * -1) is a very large number.
2020-09-21 10:55:00 +02:00
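A minimal sketch of the guarded computation, assuming the usual
sysconf()-based implementation (both _SC_PHYS_PAGES and _SC_PAGE_SIZE
return -1 on failure):

    #include <stddef.h>
    #include <unistd.h>

    /* Illustrative only: return 0 instead of a bogus huge value when either
     * sysconf() call fails. */
    static size_t
    totalphys_sketch(void) {
        long pages = sysconf(_SC_PHYS_PAGES);
        long pagesize = sysconf(_SC_PAGE_SIZE);

        if (pages == -1 || pagesize == -1) {
            return (0);
        }
        return ((size_t)pages * (size_t)pagesize);
    }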
Michał Kępień
dc8a7791bd Fix updating summary RPZ DB for mixed-case RPZs
Each dns_rpz_zone_t structure keeps a hash table of the names this RPZ
database contains.  Here is what happens when an RPZ is updated:

  - a new hash table is prepared for the new version of the RPZ by
    iterating over it; each name found is added to the summary RPZ
    database,

  - every name added to the new hash table is searched for in the old
    hash table; if found, it is removed from the old hash table,

  - the old hash table is iterated over; all names found in it are
    removed from the summary RPZ database (because at that point the old
    hash table should only contain names which are not present in the
    new version of the RPZ),

  - the new hash table replaces the old hash table.

When the new version of the RPZ is iterated over, if a given name is
spelled using a different letter case than in the old version of the
RPZ, the new variant will hash to a different value than the old
variant, which means it will not be removed from the old hash table.
When the old hash table is subsequently iterated over to remove
seemingly deleted names, the old variant of the name will still be
there, causing the name to be deleted from the summary RPZ database
(which effectively causes a given rule to be ignored).

The issue can be triggered not just by altering the case of existing
names in an RPZ, but also by adding sibling names spelled with a
different letter case.  This is because RBT code preserves case when
node splitting occurs.  The end result is that when the RPZ is iterated
over, a given name may be using a different case than in the zone file
(or XFR contents).

Fix by downcasing all names found in the RPZ database before adding them
to the summary RPZ database.
2020-09-21 09:28:36 +02:00
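A hedged, string-based illustration of the core idea in the fix above:
canonicalize the case of each name before it is hashed or inserted, so that
differently-cased spellings of the same rule land on the same entry.  The
real fix operates on libdns name objects (dns_name_downcase()) rather than
C strings.

    #include <ctype.h>
    #include <stddef.h>

    /* Illustrative only: lower-case a lookup key in place so that
     * "Example.COM" and "example.com" hash to the same table entry. */
    static void
    downcase_key(char *key) {
        for (size_t i = 0; key[i] != '\0'; i++) {
            key[i] = (char)tolower((unsigned char)key[i]);
        }
    }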
Ondřej Surý
0110d1ab17 Exclude isc_mem_isovermem from ThreadSanitizer
The .is_overmem member of the isc_mem_t structure is intentionally
accessed unlocked, as 100% accuracy isn't necessary here.

Without the attribute, the following TSAN warning would show up:

    WARNING: ThreadSanitizer: data race
      Write of size 1 at 0x000000000001 by thread T1 (mutexes: write M1, write M2):
	#0 isc___mem_put lib/isc/mem.c:1119:19
	#1 isc__mem_put lib/isc/mem.c:2439:2
	#2 dns_rdataslab_fromrdataset lib/dns/rdataslab.c:327:2
	#3 addrdataset lib/dns/rbtdb.c:6761:11
	#4 dns_db_addrdataset lib/dns/db.c:719:10
	#5 cache_name lib/dns/resolver.c:6538:13
	#6 cache_message lib/dns/resolver.c:6628:14
	#7 resquery_response lib/dns/resolver.c:7883:13
	#8 dispatch lib/isc/task.c:1152:7
	#9 run lib/isc/task.c:1344:2

      Previous read of size 1 at 0x000000000001 by thread T2 (mutexes: write M3):
	#0 isc_mem_isovermem lib/isc/mem.c:1553:15
	#1 addrdataset lib/dns/rbtdb.c:6866:25
	#2 dns_db_addrdataset lib/dns/db.c:719:10
	#3 addoptout lib/dns/ncache.c:281:10
	#4 dns_ncache_add lib/dns/ncache.c:101:10
	#5 ncache_adderesult lib/dns/resolver.c:6668:12
	#6 ncache_message lib/dns/resolver.c:6845:11
	#7 rctx_ncache lib/dns/resolver.c:9174:11
	#8 resquery_response lib/dns/resolver.c:7894:2
	#9 dispatch lib/isc/task.c:1152:7
	#10 run lib/isc/task.c:1344:2

      Location is heap block of size 328 at 0x000000000020 allocated by thread T3:
	#0 malloc <null>
	#1 default_memalloc lib/isc/mem.c:713:8
	#2 mem_create lib/isc/mem.c:763:8
	#3 isc_mem_create lib/isc/mem.c:2425:2
	#4 configure_view bin/named/server.c:4494:4
	#5 load_configuration bin/named/server.c:9062:3
	#6 run_server bin/named/server.c:9771:2
	#7 dispatch lib/isc/task.c:1152:7
	#8 run lib/isc/task.c:1344:2

    [...]

    SUMMARY: ThreadSanitizer: data race lib/isc/mem.c:1119:19 in isc___mem_put
2020-09-17 13:51:50 +00:00
Mark Andrews
9e584a4511 Pause dbiterator earlier to prevent lock-order-inversion
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock)
    Cycle in lock order graph: M1 (0x000000000000) => M2 (0x000000000000) => M1

    Mutex M2 acquired here while holding mutex M1 in thread T1:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 findnodeintree lib/dns/rbtdb.c:2877:2
    #3 findnode lib/dns/rbtdb.c:2941:10
    #4 dns_db_findnode lib/dns/db.c:439:11
    #5 resume_addnsec3chain lib/dns/zone.c:3776:11
    #6 rss_post lib/dns/zone.c:20659:3
    #7 setnsec3param lib/dns/zone.c:20471:3
    #8 dispatch lib/isc/task.c:1152:7
    #9 run lib/isc/task.c:1344:2

    Mutex M1 previously acquired by the same thread here:
    #0 pthread_mutex_lock <null>
    #1 rss_post lib/dns/zone.c:20658:3
    #2 setnsec3param lib/dns/zone.c:20471:3
    #3 dispatch lib/isc/task.c:1152:7
    #4 run lib/isc/task.c:1344:2

    Mutex M1 acquired here while holding mutex M2 in thread T2:
    #0 pthread_mutex_lock <null>
    #1 zone_nsec3chain lib/dns/zone.c:8666:5
    #2 zone_maintenance lib/dns/zone.c:11063:4
    #3 zone_timer lib/dns/zone.c:14098:2
    #4 dispatch lib/isc/task.c:1152:7
    #5 run lib/isc/task.c:1344:2

    Mutex M2 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 resume_iteration lib/dns/rbtdb.c:9357:2
    #3 dbiterator_next lib/dns/rbtdb.c:9647:3
    #4 dns_dbiterator_next lib/dns/dbiterator.c:87:10
    #5 zone_nsec3chain lib/dns/zone.c:8656:13
    #6 zone_maintenance lib/dns/zone.c:11063:4
    #7 zone_timer lib/dns/zone.c:14098:2
    #8 dispatch lib/isc/task.c:1152:7
    #9 run lib/isc/task.c:1344:2
2020-09-17 07:03:56 +00:00
Mark Andrews
2e63de94aa Pause the database iterator to release rwlock 2020-09-17 07:03:56 +00:00
Mark Andrews
fbed962204 Pause dbiterator to release rwlock to prevent lock-order-inversion.
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock)
    Cycle in lock order graph: M1 (0x000000000000) => M2 (0x000000000001) => M1

    Mutex M2 acquired here while holding mutex M1 in thread T1:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 getsigningtime lib/dns/rbtdb.c:8198:2
    #3 dns_db_getsigningtime lib/dns/db.c:979:11
    #4 set_resigntime lib/dns/zone.c:3887:11
    #5 dns_zone_markdirty lib/dns/zone.c:11119:4
    #6 update_action lib/ns/update.c:3376:3
    #7 dispatch lib/isc/task.c:1152:7
    #8 run lib/isc/task.c:1344:2

    Mutex M1 previously acquired by the same thread here:
    #0 pthread_mutex_lock <null>
    #1 dns_zone_markdirty lib/dns/zone.c:11089:2
    #2 update_action lib/ns/update.c:3376:3
    #3 dispatch lib/isc/task.c:1152:7
    #4 run lib/isc/task.c:1344:2

    Mutex M1 acquired here while holding mutex M2 in thread T1:
    #0 pthread_mutex_lock <null>
    #1 zone_nsec3chain lib/dns/zone.c:8502:3
    #2 zone_maintenance lib/dns/zone.c:11056:4
    #3 zone_timer lib/dns/zone.c:14091:2
    #4 dispatch lib/isc/task.c:1152:7
    #5 run lib/isc/task.c:1344:2

    Mutex M2 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 resume_iteration lib/dns/rbtdb.c:9357:2
    #3 dbiterator_current lib/dns/rbtdb.c:9695:3
    #4 dns_dbiterator_current lib/dns/dbiterator.c:101:10
    #5 zone_nsec3chain lib/dns/zone.c:8539:3
    #6 zone_maintenance lib/dns/zone.c:11056:4
    #7 zone_timer lib/dns/zone.c:14091:2
    #8 dispatch lib/isc/task.c:1152:7
    #9 run lib/isc/task.c:1344:2

    Thread T1 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) in pthread_rwlock_rdlock
2020-09-17 07:03:56 +00:00
Mark Andrews
c9dbad97b2 Pause dbiterator to release rwlock to prevent lock-order-inversion.
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock)
    Cycle in lock order graph: M1 (0x000000000001) => M2 (0x000000000000) => M1

    Mutex M2 acquired here while holding mutex M1 in thread T1:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 zone_sign lib/dns/zone.c:9247:3
    #3 zone_maintenance lib/dns/zone.c:11047:4
    #4 zone_timer lib/dns/zone.c:14090:2
    #5 dispatch lib/isc/task.c:1152:7
    #6 run lib/isc/task.c:1344:2

    Mutex M1 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 resume_iteration lib/dns/rbtdb.c:9357:2
    #3 dbiterator_next lib/dns/rbtdb.c:9647:3
    #4 dns_dbiterator_next lib/dns/dbiterator.c:87:10
    #5 zone_sign lib/dns/zone.c:9488:13
    #6 zone_maintenance lib/dns/zone.c:11047:4
    #7 zone_timer lib/dns/zone.c:14090:2
    #8 dispatch lib/isc/task.c:1152:7
    #9 run lib/isc/task.c:1344:2

    Mutex M1 acquired here while holding mutex M2 in thread T2:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 findnodeintree lib/dns/rbtdb.c:2877:2
    #3 findnode lib/dns/rbtdb.c:2941:10
    #4 dns_db_findnode lib/dns/db.c:439:11
    #5 dns_db_getsoaserial lib/dns/db.c:780:11
    #6 dump_done lib/dns/zone.c:11428:15
    #7 dump_quantum lib/dns/masterdump.c:1487:2
    #8 dispatch lib/isc/task.c:1152:7
    #9 run lib/isc/task.c:1344:2

    Mutex M2 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 dump_done lib/dns/zone.c:11426:4
    #3 dump_quantum lib/dns/masterdump.c:1487:2
    #4 dispatch lib/isc/task.c:1152:7
    #5 run lib/isc/task.c:1344:2

    Thread T1 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    Thread T2 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) in pthread_rwlock_rdlock
2020-09-17 07:03:56 +00:00
Mark Andrews
98025e15d0 Pause dbiterator to release rwlock to prevent lock-order-inversion.
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock)
    Cycle in lock order graph: M1 (0x000000000000) => M2 (0x000000000000) => M1

    Mutex M2 acquired here while holding mutex M1 in thread T1:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 getsigningtime lib/dns/rbtdb.c:8198:2
    #3 dns_db_getsigningtime lib/dns/db.c:979:11
    #4 set_resigntime lib/dns/zone.c:3887:11
    #5 dns_zone_markdirty lib/dns/zone.c:11115:4
    #6 update_action lib/ns/update.c:3376:3
    #7 dispatch lib/isc/task.c:1152:7
    #8 run lib/isc/task.c:1344:2

    Mutex M1 previously acquired by the same thread here:
    #0 pthread_mutex_lock <null>
    #1 dns_zone_markdirty lib/dns/zone.c:11085:2
    #2 update_action lib/ns/update.c:3376:3
    #3 dispatch lib/isc/task.c:1152:7
    #4 run lib/isc/task.c:1344:2

    Mutex M1 acquired here while holding mutex M2 in thread T2:
    #0 pthread_mutex_lock <null>
    #1 zone_nsec3chain lib/dns/zone.c:8274:3
    #2 zone_maintenance lib/dns/zone.c:11052:4
    #3 zone_timer lib/dns/zone.c:14087:2
    #4 dispatch lib/isc/task.c:1152:7
    #5 run lib/isc/task.c:1344:2

    Mutex M2 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 resume_iteration lib/dns/rbtdb.c:9357:2
    #3 dbiterator_next lib/dns/rbtdb.c:9647:3
    #4 dns_dbiterator_next lib/dns/dbiterator.c:87:10
    #5 zone_nsec3chain lib/dns/zone.c:8412:13
    #6 zone_maintenance lib/dns/zone.c:11052:4
    #7 zone_timer lib/dns/zone.c:14087:2
    #8 dispatch lib/isc/task.c:1152:7
    #9 run lib/isc/task.c:1344:2

    Thread T1 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    Thread T2 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) in pthread_rwlock_rdlock
2020-09-17 07:03:56 +00:00
Mark Andrews
e185e37137 Pause dbiterator to release rwlock to prevent lock-order-inversion.
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock)
    Cycle in lock order graph: M1 (0x000000000001) => M2 (0x000000000002) => M3 (0x000000000000) => M1

    Mutex M2 acquired here while holding mutex M1 in thread T1:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 findnodeintree lib/dns/rbtdb.c:2877:2
    #3 findnode lib/dns/rbtdb.c:2941:10
    #4 dns_db_findnode lib/dns/db.c:439:11
    #5 copy_non_dnssec_records lib/dns/zone.c:16031:11
    #6 receive_secure_db lib/dns/zone.c:16163:12
    #7 dispatch lib/isc/task.c:1152:7
    #8 run lib/isc/task.c:1344:2

    Mutex M1 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 resume_iteration lib/dns/rbtdb.c:9357:2
    #3 dbiterator_first lib/dns/rbtdb.c:9407:3
    #4 dns_dbiterator_first lib/dns/dbiterator.c:43:10
    #5 receive_secure_db lib/dns/zone.c:16160:16
    #6 dispatch lib/isc/task.c:1152:7
    #7 run lib/isc/task.c:1344:2

    Mutex M3 acquired here while holding mutex M2 in thread T2:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 zone_sign lib/dns/zone.c:9244:3
    #3 zone_maintenance lib/dns/zone.c:11044:4
    #4 zone_timer lib/dns/zone.c:14087:2
    #5 dispatch lib/isc/task.c:1152:7
    #6 run lib/isc/task.c:1344:2

    Mutex M2 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 resume_iteration lib/dns/rbtdb.c:9357:2
    #3 dbiterator_next lib/dns/rbtdb.c:9647:3
    #4 dns_dbiterator_next lib/dns/dbiterator.c:87:10
    #5 zone_sign lib/dns/zone.c:9485:13
    #6 zone_maintenance lib/dns/zone.c:11044:4
    #7 zone_timer lib/dns/zone.c:14087:2
    #8 dispatch lib/isc/task.c:1152:7
    #9 run lib/isc/task.c:1344:2

    Mutex M1 acquired here while holding mutex M3 in thread T3:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 findnodeintree lib/dns/rbtdb.c:2877:2
    #3 findnode lib/dns/rbtdb.c:2941:10
    #4 dns_db_findnode lib/dns/db.c:439:11
    #5 zone_get_from_db lib/dns/zone.c:5602:11
    #6 get_raw_serial lib/dns/zone.c:2520:12
    #7 zone_gotwritehandle lib/dns/zone.c:2559:4
    #8 dispatch lib/isc/task.c:1152:7
    #9 run lib/isc/task.c:1344:2

    Mutex M3 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 zone_gotwritehandle lib/dns/zone.c:2552:2
    #3 dispatch lib/isc/task.c:1152:7
    #4 run lib/isc/task.c:1344:2

    Thread T1 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    Thread T2 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    Thread T3 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) in pthread_rwlock_rdlock
2020-09-17 07:03:56 +00:00
Mark Andrews
9e5f83c499 Address lock-order-inversion between the keytable and the db locks.
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock)
    Cycle in lock order graph: M1 (0x000000000000) => M2 (0x000000000000) => M1

    Mutex M2 acquired here while holding mutex M1 in thread T1:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 dns_keytable_find lib/dns/keytable.c:522:2
    #3 sync_keyzone lib/dns/zone.c:4560:12
    #4 dns_zone_synckeyzone lib/dns/zone.c:4635:11
    #5 mkey_refresh bin/named/server.c:15423:2
    #6 named_server_mkeys bin/named/server.c:15727:4
    #7 named_control_docommand bin/named/control.c:236:12
    #8 control_command bin/named/controlconf.c:365:17
    #9 dispatch lib/isc/task.c:1152:7
    #10 run lib/isc/task.c:1344:2

    Mutex M1 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 resume_iteration lib/dns/rbtdb.c:9357:2
    #3 dbiterator_first lib/dns/rbtdb.c:9407:3
    #4 dns_dbiterator_first lib/dns/dbiterator.c:43:10
    #5 dns_rriterator_first lib/dns/rriterator.c:71:15
    #6 sync_keyzone lib/dns/zone.c:4543:16
    #7 dns_zone_synckeyzone lib/dns/zone.c:4635:11
    #8 mkey_refresh bin/named/server.c:15423:2
    #9 named_server_mkeys bin/named/server.c:15727:4
    #10 named_control_docommand bin/named/control.c:236:12
    #11 control_command bin/named/controlconf.c:365:17
    #12 dispatch lib/isc/task.c:1152:7
    #13 run lib/isc/task.c:1344:2

    Mutex M1 acquired here while holding mutex M2 in thread T1:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 zone_find lib/dns/rbtdb.c:4029:2
    #3 dns_db_find lib/dns/db.c:500:11
    #4 addifmissing lib/dns/zone.c:4481:11
    #5 dns_keytable_forall lib/dns/keytable.c:786:4
    #6 sync_keyzone lib/dns/zone.c:4586:2
    #7 dns_zone_synckeyzone lib/dns/zone.c:4635:11
    #8 mkey_refresh bin/named/server.c:15423:2
    #9 named_server_mkeys bin/named/server.c:15727:4
    #10 named_control_docommand bin/named/control.c:236:12
    #11 control_command bin/named/controlconf.c:365:17
    #12 dispatch lib/isc/task.c:1152:7
    #13 run lib/isc/task.c:1344:2

    Mutex M2 previously acquired by the same thread here:
    #0 pthread_rwlock_rdlock <null>
    #1 isc_rwlock_lock lib/isc/rwlock.c:48:3
    #2 dns_keytable_forall lib/dns/keytable.c:770:2
    #3 sync_keyzone lib/dns/zone.c:4586:2
    #4 dns_zone_synckeyzone lib/dns/zone.c:4635:11
    #5 mkey_refresh bin/named/server.c:15423:2
    #6 named_server_mkeys bin/named/server.c:15727:4
    #7 named_control_docommand bin/named/control.c:236:12
    #8 control_command bin/named/controlconf.c:365:17
    #9 dispatch lib/isc/task.c:1152:7
    #10 run lib/isc/task.c:1344:2

    Thread T1 (running) created by main thread at:
    #0 pthread_create <null>
    #1 isc_thread_create lib/isc/pthreads/thread.c:73:8
    #2 isc_taskmgr_create lib/isc/task.c:1434:3
    #3 create_managers bin/named/main.c:915:11
    #4 setup bin/named/main.c:1223:11
    #5 main bin/named/main.c:1523:2

    SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) in pthread_rwlock_rdlock
2020-09-17 07:03:56 +00:00