/*
 * Copyright (C) Internet Systems Consortium, Inc. ("ISC")
 *
 * SPDX-License-Identifier: MPL-2.0
 *
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, you can obtain one at https://mozilla.org/MPL/2.0/.
 *
 * See the COPYRIGHT file distributed with this work for additional
 * information regarding copyright ownership.
 */

#if HAVE_CMOCKA

#include <inttypes.h>
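/*
 * cmocka's UNIT_TESTING mode replaces malloc(), calloc(), realloc() and
 * free(), so the real prototypes must be seen before <cmocka.h>.  musl
 * libc also declares calloc() and free() in <sched.h> (pulled in via
 * <pthread.h>), hence <sched.h> is included here first.
 */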
#include <sched.h> /* IWYU pragma: keep */
#include <setjmp.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define UNIT_TESTING

#include <cmocka.h>

#include <isc/atomic.h>
#include <isc/cmocka.h>
#include <isc/commandline.h>
#include <isc/condition.h>
#include <isc/managers.h>
#include <isc/mem.h>
#include <isc/print.h>
#include <isc/task.h>
#include <isc/time.h>
#include <isc/timer.h>
#include <isc/util.h>

#include "isctest.h"

/* Set to true (or use -v option) for verbose output */
static bool verbose = false;

static isc_mutex_t lock;
static isc_condition_t cv;
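
/*
 * Shared test state: "counter" stamps and counts events, "active[]"
 * tracks per-task activity for the exclusive-mode test, and "done"
 * signals completion.
 */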
atomic_int_fast32_t counter;
static int active[10];
static atomic_bool done;

static int
_setup(void **state) {
        isc_result_t result;

        UNUSED(state);

        isc_mutex_init(&lock);

        isc_condition_init(&cv);

        result = isc_test_begin(NULL, true, 0);
        assert_int_equal(result, ISC_R_SUCCESS);

        return (0);
}

static int
_setup2(void **state) {
        isc_result_t result;

        UNUSED(state);

        isc_mutex_init(&lock);

        isc_condition_init(&cv);

        /* Two worker threads */
        result = isc_test_begin(NULL, true, 2);
        assert_int_equal(result, ISC_R_SUCCESS);

        return (0);
}

static int
_setup4(void **state) {
        isc_result_t result;

        UNUSED(state);

        isc_mutex_init(&lock);

        isc_condition_init(&cv);

        /* Four worker threads */
        result = isc_test_begin(NULL, true, 4);
        assert_int_equal(result, ISC_R_SUCCESS);

        return (0);
}

static int
_teardown(void **state) {
        UNUSED(state);

        isc_test_end();
        isc_condition_destroy(&cv);

        return (0);
}

static void
set(isc_task_t *task, isc_event_t *event) {
        atomic_int_fast32_t *value = (atomic_int_fast32_t *)event->ev_arg;

        UNUSED(task);

        isc_event_free(&event);
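        /*
         * Stamp the caller's variable with the pre-increment value of
         * the global counter, giving each callback a distinct, ordered
         * sequence number.
         */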
        atomic_store(value, atomic_fetch_add(&counter, 1));
}
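
/*
 * Tasks now run on the netmgr worker threads (see the taskmgr-on-netmgr
 * refactoring), hence the thread API.
 */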
#include <isc/thread.h>

/* Create a task */
static void
create_task(void **state) {
        isc_result_t result;
        isc_task_t *task = NULL;

        UNUSED(state);

        result = isc_task_create(taskmgr, 0, &task);
        assert_int_equal(result, ISC_R_SUCCESS);

        isc_task_destroy(&task);
        assert_null(task);
}

/* Process events */
static void
all_events(void **state) {
        isc_result_t result;
        isc_task_t *task = NULL;
        isc_event_t *event = NULL;
        atomic_int_fast32_t a, b;
        int i = 0;

        UNUSED(state);

        atomic_init(&counter, 1);
        atomic_init(&a, 0);
        atomic_init(&b, 0);

        result = isc_task_create(taskmgr, 0, &task);
        assert_int_equal(result, ISC_R_SUCCESS);

        /* First event */
        event = isc_event_allocate(test_mctx, task, ISC_TASKEVENT_TEST, set,
                                   &a, sizeof(isc_event_t));
        assert_non_null(event);

        assert_int_equal(atomic_load(&a), 0);
        isc_task_send(task, &event);

        event = isc_event_allocate(test_mctx, task, ISC_TASKEVENT_TEST, set,
                                   &b, sizeof(isc_event_t));
        assert_non_null(event);

        assert_int_equal(atomic_load(&b), 0);
        isc_task_send(task, &event);

        while ((atomic_load(&a) == 0 || atomic_load(&b) == 0) && i++ < 5000) {
                isc_test_nap(1000);
        }

        assert_int_not_equal(atomic_load(&a), 0);
        assert_int_not_equal(atomic_load(&b), 0);

        isc_task_destroy(&task);
        assert_null(task);
}

/*
 * Basic task functions:
 */
static void
basic_cb(isc_task_t *task, isc_event_t *event) {
        int i, j;

        UNUSED(task);

        j = 0;
        for (i = 0; i < 1000000; i++) {
                j += 100;
        }

        UNUSED(j);

        if (verbose) {
                print_message("# task %s\n", (char *)event->ev_arg);
        }

        isc_event_free(&event);
}

static void
basic_shutdown(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        if (verbose) {
                print_message("# shutdown %s\n", (char *)event->ev_arg);
        }

        isc_event_free(&event);
}

static void
basic_tick(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        if (verbose) {
                print_message("# %s\n", (char *)event->ev_arg);
        }

        isc_event_free(&event);
}

static char one[] = "1";
static char two[] = "2";
static char three[] = "3";
static char four[] = "4";
static char tick[] = "tick";
static char tock[] = "tock";

static void
basic(void **state) {
        isc_result_t result;
        isc_task_t *task1 = NULL;
        isc_task_t *task2 = NULL;
        isc_task_t *task3 = NULL;
        isc_task_t *task4 = NULL;
        isc_event_t *event = NULL;
        isc_timer_t *ti1 = NULL;
        isc_timer_t *ti2 = NULL;
        isc_interval_t interval;
        char *testarray[] = { one, one, one, one, one, one, one, one,
                              one, two, three, four, two, three, four, NULL };
        int i;

        UNUSED(state);

        result = isc_task_create(taskmgr, 0, &task1);
        assert_int_equal(result, ISC_R_SUCCESS);
        result = isc_task_create(taskmgr, 0, &task2);
        assert_int_equal(result, ISC_R_SUCCESS);
        result = isc_task_create(taskmgr, 0, &task3);
        assert_int_equal(result, ISC_R_SUCCESS);
        result = isc_task_create(taskmgr, 0, &task4);
        assert_int_equal(result, ISC_R_SUCCESS);

        result = isc_task_onshutdown(task1, basic_shutdown, one);
        assert_int_equal(result, ISC_R_SUCCESS);
        result = isc_task_onshutdown(task2, basic_shutdown, two);
        assert_int_equal(result, ISC_R_SUCCESS);
        result = isc_task_onshutdown(task3, basic_shutdown, three);
        assert_int_equal(result, ISC_R_SUCCESS);
        result = isc_task_onshutdown(task4, basic_shutdown, four);
        assert_int_equal(result, ISC_R_SUCCESS);

        isc_interval_set(&interval, 1, 0);
        isc_timer_create(timermgr, task1, basic_tick, tick, &ti1);
        result = isc_timer_reset(ti1, isc_timertype_ticker, &interval, false);
        assert_int_equal(result, ISC_R_SUCCESS);

        ti2 = NULL;
        isc_interval_set(&interval, 1, 0);
        isc_timer_create(timermgr, task2, basic_tick, tock, &ti2);
        result = isc_timer_reset(ti2, isc_timertype_ticker, &interval, false);
        assert_int_equal(result, ISC_R_SUCCESS);

        sleep(2);

        for (i = 0; testarray[i] != NULL; i++) {
                /*
                 * Note: (void *)1 is used as a sender here, since some
                 * compilers don't like casting a function pointer to a
                 * (void *).
                 *
                 * In a real use, it is more likely the sender would be a
                 * structure (socket, timer, task, etc) but this is just a
                 * test program.
                 */
                event = isc_event_allocate(test_mctx, (void *)1, 1, basic_cb,
                                           testarray[i], sizeof(*event));
                assert_non_null(event);
                isc_task_send(task1, &event);
        }

        isc_task_detach(&task1);
        isc_task_detach(&task2);
        isc_task_detach(&task3);
        isc_task_detach(&task4);

        sleep(10);
        isc_timer_detach(&ti1);
        isc_timer_detach(&ti2);
}

/*
 * Exclusive mode test:
 * When one task enters exclusive mode, all other active
 * tasks complete first.
 */
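
/* Busy-work to keep a task visibly active for a while. */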
static int
spin(int n) {
        int i;
        int r = 0;
        for (i = 0; i < n; i++) {
                r += i;
                if (r > 1000000) {
                        r = 0;
                }
        }
        return (r);
}

static void
exclusive_cb(isc_task_t *task, isc_event_t *event) {
        int taskno = *(int *)(event->ev_arg);

        if (verbose) {
                print_message("# task enter %d\n", taskno);
        }

        /* task chosen from the middle of the range */
        if (taskno == 6) {
                isc_result_t result;
                int i;

                result = isc_task_beginexclusive(task);
                assert_int_equal(result, ISC_R_SUCCESS);

                for (i = 0; i < 10; i++) {
                        assert_int_equal(active[i], 0);
                }

                isc_task_endexclusive(task);
                atomic_store(&done, true);
        } else {
                active[taskno]++;
                (void)spin(10000000);
                active[taskno]--;
        }

        if (verbose) {
                print_message("# task exit %d\n", taskno);
        }
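
        /*
         * Until the exclusive task has finished, each event re-sends
         * itself to keep its task busy; once "done" is set, the event
         * frees its argument and decrements the outstanding counter.
         */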
        if (atomic_load(&done)) {
                isc_mem_put(event->ev_destroy_arg, event->ev_arg, sizeof(int));
                isc_event_free(&event);
                atomic_fetch_sub(&counter, 1);
        } else {
                isc_task_send(task, &event);
        }
}

static void
task_exclusive(void **state) {
        isc_task_t *tasks[10];
        isc_result_t result;
        int i;

        UNUSED(state);

        atomic_init(&counter, 0);

        for (i = 0; i < 10; i++) {
                isc_event_t *event = NULL;
                int *v;

                tasks[i] = NULL;

                if (i == 6) {
                        /* task chosen from the middle of the range */
                        result = isc_task_create_bound(taskmgr, 0, &tasks[i],
                                                       0);
                        assert_int_equal(result, ISC_R_SUCCESS);

                        isc_taskmgr_setexcltask(taskmgr, tasks[6]);
                } else {
                        result = isc_task_create(taskmgr, 0, &tasks[i]);
                        assert_int_equal(result, ISC_R_SUCCESS);
                }

                v = isc_mem_get(test_mctx, sizeof(*v));
                assert_non_null(v);

                *v = i;

                event = isc_event_allocate(test_mctx, NULL, 1, exclusive_cb,
                                           v, sizeof(*event));
                assert_non_null(event);

                isc_task_send(tasks[i], &event);
                atomic_fetch_add(&counter, 1);
        }

        for (i = 0; i < 10; i++) {
                isc_task_detach(&tasks[i]);
        }

        while (atomic_load(&counter) > 0) {
                isc_test_nap(1000);
        }
}

/*
 * Max tasks test:
 * The task system can create and execute many tasks. Tests with 10000.
 */
static void
maxtask_shutdown(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        if (event->ev_arg != NULL) {
                isc_task_destroy((isc_task_t **)&event->ev_arg);
        } else {
                LOCK(&lock);
                atomic_store(&done, true);
                SIGNAL(&cv);
                UNLOCK(&lock);
        }

        isc_event_free(&event);
}
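
/*
 * Each invocation decrements the count in ev_arg, creates a new task,
 * and forwards the event to it; the new task's shutdown handler
 * destroys the task that forwarded the event.
 */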
static void
maxtask_cb(isc_task_t *task, isc_event_t *event) {
        isc_result_t result;

        if (event->ev_arg != NULL) {
                isc_task_t *newtask = NULL;

                event->ev_arg = (void *)(((uintptr_t)event->ev_arg) - 1);

                /*
                 * Create a new task and forward the message.
                 */
                result = isc_task_create(taskmgr, 0, &newtask);
                assert_int_equal(result, ISC_R_SUCCESS);

                result = isc_task_onshutdown(newtask, maxtask_shutdown,
                                             (void *)task);
                assert_int_equal(result, ISC_R_SUCCESS);

                isc_task_send(newtask, &event);
        } else if (task != NULL) {
                isc_task_destroy(&task);
                isc_event_free(&event);
        }
}

static void
manytasks(void **state) {
        isc_mem_t *mctx = NULL;
        isc_event_t *event = NULL;
        uintptr_t ntasks = 10000;

        UNUSED(state);

        if (verbose) {
                print_message("# Testing with %lu tasks\n",
                              (unsigned long)ntasks);
        }

        isc_mutex_init(&lock);
        isc_condition_init(&cv);

        isc_mem_debugging = ISC_MEM_DEBUGRECORD;
        isc_mem_create(&mctx);

        isc_managers_create(mctx, 4, 0, &netmgr, &taskmgr, NULL);

        atomic_init(&done, false);

        event = isc_event_allocate(mctx, (void *)1, 1, maxtask_cb,
                                   (void *)ntasks, sizeof(*event));
        assert_non_null(event);

        LOCK(&lock);
        maxtask_cb(NULL, event);
        while (!atomic_load(&done)) {
                WAIT(&cv, &lock);
        }
        UNLOCK(&lock);

        isc_managers_destroy(&netmgr, &taskmgr, NULL);

        isc_mem_destroy(&mctx);
        isc_condition_destroy(&cv);
        isc_mutex_destroy(&lock);
}

/*
 * Shutdown test:
 * When isc_task_shutdown() is called, shutdown events are posted
 * in LIFO order.
 */

static int nevents = 0;
static int nsdevents = 0;
static int senders[4];
atomic_bool ready, all_done;
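
/*
 * sd_sde1 is registered first and sd_sde2 second, so LIFO delivery
 * means sd_sde2 runs first (seeing nsdevents == 0) and sd_sde1 runs
 * second (seeing nsdevents == 1).
 */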
static void
sd_sde1(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        assert_int_equal(nevents, 256);
        assert_int_equal(nsdevents, 1);
        ++nsdevents;

        if (verbose) {
                print_message("# shutdown 1\n");
        }

        isc_event_free(&event);

        atomic_store(&all_done, true);
}

static void
sd_sde2(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        assert_int_equal(nevents, 256);
        assert_int_equal(nsdevents, 0);
        ++nsdevents;

        if (verbose) {
                print_message("# shutdown 2\n");
        }

        isc_event_free(&event);
}

static void
sd_event1(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        LOCK(&lock);
        while (!atomic_load(&ready)) {
                WAIT(&cv, &lock);
        }
        UNLOCK(&lock);

        if (verbose) {
                print_message("# event 1\n");
        }

        isc_event_free(&event);
}

static void
sd_event2(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        ++nevents;

        if (verbose) {
                print_message("# event 2\n");
        }

        isc_event_free(&event);
}

static void
task_shutdown(void **state) {
        isc_result_t result;
        isc_eventtype_t event_type;
        isc_event_t *event = NULL;
        isc_task_t *task = NULL;
        int i;

        UNUSED(state);

        nevents = nsdevents = 0;
        event_type = 3;
        atomic_init(&ready, false);
        atomic_init(&all_done, false);

        LOCK(&lock);

        result = isc_task_create(taskmgr, 0, &task);
        assert_int_equal(result, ISC_R_SUCCESS);

        /*
         * This event causes the task to wait on cv.
         */
        event = isc_event_allocate(test_mctx, &senders[1], event_type,
                                   sd_event1, NULL, sizeof(*event));
        assert_non_null(event);
        isc_task_send(task, &event);

        /*
         * Now we fill up the task's event queue with some events.
         */
        for (i = 0; i < 256; ++i) {
                event = isc_event_allocate(test_mctx, &senders[1], event_type,
                                           sd_event2, NULL, sizeof(*event));
                assert_non_null(event);
                isc_task_send(task, &event);
        }

        /*
         * Now we register two shutdown events.
         */
        result = isc_task_onshutdown(task, sd_sde1, NULL);
        assert_int_equal(result, ISC_R_SUCCESS);

        result = isc_task_onshutdown(task, sd_sde2, NULL);
        assert_int_equal(result, ISC_R_SUCCESS);

        isc_task_shutdown(task);
        isc_task_detach(&task);

        /*
         * Now we free the task by signaling cv.
         */
        atomic_store(&ready, true);
        SIGNAL(&cv);
        UNLOCK(&lock);

        while (!atomic_load(&all_done)) {
                isc_test_nap(1000);
        }

        assert_int_equal(nsdevents, 2);
}

/*
 * Post-shutdown test:
 * After isc_task_shutdown() has been called, any call to
 * isc_task_onshutdown() will return ISC_R_SHUTTINGDOWN.
 */
static void
psd_event1(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        LOCK(&lock);

        while (!atomic_load(&done)) {
                WAIT(&cv, &lock);
        }

        UNLOCK(&lock);

        isc_event_free(&event);
}

static void
psd_sde(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        isc_event_free(&event);
}

static void
post_shutdown(void **state) {
        isc_result_t result;
        isc_eventtype_t event_type;
        isc_event_t *event;
        isc_task_t *task;

        UNUSED(state);

        atomic_init(&done, false);
        event_type = 4;

        isc_condition_init(&cv);

        LOCK(&lock);

        task = NULL;
        result = isc_task_create(taskmgr, 0, &task);
        assert_int_equal(result, ISC_R_SUCCESS);

        /*
         * This event causes the task to wait on cv.
         */
        event = isc_event_allocate(test_mctx, &senders[1], event_type,
                                   psd_event1, NULL, sizeof(*event));
        assert_non_null(event);
        isc_task_send(task, &event);

        isc_task_shutdown(task);

        result = isc_task_onshutdown(task, psd_sde, NULL);
        assert_int_equal(result, ISC_R_SHUTTINGDOWN);

        /*
         * Release the task.
         */
        atomic_store(&done, true);

        SIGNAL(&cv);
        UNLOCK(&lock);

        isc_task_detach(&task);
}

/*
 * Helper for the purge tests below:
 */

#define SENDERCNT 3
#define TYPECNT 4
#define TAGCNT 5
#define NEVENTS (SENDERCNT * TYPECNT * TAGCNT)

static int eventcnt;

atomic_bool started;

/*
 * Helpers for purge event tests
 */
static void
pge_event1(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        LOCK(&lock);
        while (!atomic_load(&started)) {
                WAIT(&cv, &lock);
        }
        UNLOCK(&lock);

        isc_event_free(&event);
}

static void
pge_event2(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        ++eventcnt;
        isc_event_free(&event);
}

static void
pge_sde(isc_task_t *task, isc_event_t *event) {
        UNUSED(task);

        LOCK(&lock);
        atomic_store(&done, true);
        SIGNAL(&cv);
        UNLOCK(&lock);

        isc_event_free(&event);
}

static void
try_purgeevent(void) {
        isc_result_t result;
        isc_task_t *task = NULL;
        bool purged;
        isc_event_t *event1 = NULL;
        isc_event_t *event2 = NULL;
        isc_event_t *event2_clone = NULL;
        isc_time_t now;
        isc_interval_t interval;

        atomic_init(&started, false);
        atomic_init(&done, false);
        eventcnt = 0;

        isc_condition_init(&cv);

        result = isc_task_create(taskmgr, 0, &task);
        assert_int_equal(result, ISC_R_SUCCESS);

        result = isc_task_onshutdown(task, pge_sde, NULL);
        assert_int_equal(result, ISC_R_SUCCESS);

        /*
         * Block the task on cv.
         */
        event1 = isc_event_allocate(test_mctx, (void *)1, (isc_eventtype_t)1,
                                    pge_event1, NULL, sizeof(*event1));
        assert_non_null(event1);
        isc_task_send(task, &event1);

        event2 = isc_event_allocate(test_mctx, (void *)1, (isc_eventtype_t)1,
                                    pge_event2, NULL, sizeof(*event2));
        assert_non_null(event2);
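
        /*
         * isc_task_send() clears the caller's pointer, so keep a copy
         * of the raw event pointer to identify the event when purging.
         */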
        event2_clone = event2;

        isc_task_send(task, &event2);

        purged = isc_task_purgeevent(task, event2_clone);

        assert_true(purged);

        /*
         * Unblock the task, allowing event processing.
         */
        LOCK(&lock);
        atomic_store(&started, true);
        SIGNAL(&cv);

        isc_task_shutdown(task);

        isc_interval_set(&interval, 5, 0);

        /*
         * Wait for shutdown processing to complete.
         */
        while (!atomic_load(&done)) {
                result = isc_time_nowplusinterval(&now, &interval);
                assert_int_equal(result, ISC_R_SUCCESS);

                WAITUNTIL(&cv, &lock, &now);
        }

        UNLOCK(&lock);

        isc_task_detach(&task);
}

/*
 * Purge event test:
 * When the event is marked as purgeable, a call to
 * isc_task_purgeevent(task, event) purges the event 'event' from the
 * task's queue and returns true.
 */
static void
purgeevent(void **state) {
        UNUSED(state);

        try_purgeevent();
}

int
main(int argc, char **argv) {
        const struct CMUnitTest tests[] = {
                cmocka_unit_test(manytasks),
                cmocka_unit_test_setup_teardown(all_events, _setup, _teardown),
                cmocka_unit_test_setup_teardown(basic, _setup2, _teardown),
                cmocka_unit_test_setup_teardown(create_task, _setup,
                                                _teardown),
                cmocka_unit_test_setup_teardown(post_shutdown, _setup2,
                                                _teardown),
                cmocka_unit_test_setup_teardown(purgeevent, _setup2,
                                                _teardown),
                cmocka_unit_test_setup_teardown(task_shutdown, _setup4,
                                                _teardown),
                cmocka_unit_test_setup_teardown(task_exclusive, _setup4,
                                                _teardown),
        };
        struct CMUnitTest selected[sizeof(tests) / sizeof(tests[0])];
        size_t i;
        int c;

        memset(selected, 0, sizeof(selected));
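
        /*
         * Options: -l lists test names, -t <name> runs the named test
         * only, -v enables verbose output.
         */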
        while ((c = isc_commandline_parse(argc, argv, "lt:v")) != -1) {
                switch (c) {
                case 'l':
                        for (i = 0; i < (sizeof(tests) / sizeof(tests[0]));
                             i++)
                        {
                                if (tests[i].name != NULL) {
                                        fprintf(stdout, "%s\n",
                                                tests[i].name);
                                }
                        }
                        return (0);
                case 't':
                        if (!cmocka_add_test_byname(
                                    tests, isc_commandline_argument, selected))
                        {
                                fprintf(stderr, "unknown test '%s'\n",
                                        isc_commandline_argument);
                                exit(1);
                        }
                        break;
                case 'v':
                        verbose = true;
                        break;
                default:
                        break;
                }
        }

        if (selected[0].name != NULL) {
                return (cmocka_run_group_tests(selected, NULL, NULL));
        } else {
                return (cmocka_run_group_tests(tests, NULL, NULL));
        }
}

#else /* HAVE_CMOCKA */

#include <stdio.h>

int
main(void) {
        printf("1..0 # Skipped: cmocka not available\n");
        return (SKIPPED_TEST_EXIT_CODE);
}

#endif /* if HAVE_CMOCKA */