mirror of https://gitlab.isc.org/isc-projects/bind9 synced 2025-08-22 10:10:06 +00:00

Refactor netmgr and add more unit tests

This is part of the work that intends to make the netmgr stable,
testable, maintainable, and tested.  It contains numerous changes to
the netmgr code; unfortunately, it was not possible to split this
into smaller chunks, as the work needs to be committed as a complete
unit.

NOTE: There is quite a lot of duplicated code between udp.c, tcp.c and
tcpdns.c, and it should be a subject of refactoring in the future.

The changes that are included in this commit are listed here
(extensively, but not exclusively):

* The netmgr_test unit test was split into individual tests (udp_test,
  tcp_test, tcpdns_test and newly added tcp_quota_test)

* The udp_test and tcp_test have been extended to allow programmatic
  failures from the libuv API.  Unfortunately, we can't use cmocka
  mock() and will_return(), so we emulate the behaviour with #defines
  and by including the netmgr/{udp,tcp}.c source files directly.

* The netievents that we put on the nm queue have a variable number of
  members.  Of these, the isc_nmsocket_t and isc_nmhandle_t always need
  to be attached before enqueueing the netievent_<foo> and detached
  after we have called isc_nm_async_<foo>, to ensure that the socket
  (or handle) doesn't disappear between scheduling the event and
  actually executing it.

* Cancelling an in-flight TCP connection using libuv requires calling
  uv_close() on the original uv_tcp_t handle, which breaks too many
  assumptions we have in the netmgr code.  Instead of using a uv_timer
  for TCP connection timeouts, we use a platform-specific socket
  option.

* Fix the synchronization between {nm,async}_{listentcp,tcpconnect}

  When isc_nm_listentcp() or isc_nm_tcpconnect() was called, it waited
  on a condition variable and mutex for the socket to either end up in
  an error state (that path was fine) or to become listening or
  connected.

  Several things could happen:

    0. everything is ok

    1. The waiting thread would miss the SIGNAL() because the enqueued
       event could be processed faster than we could start WAIT()ing.
       If the operation ended up in an error, this was fine, as the
       error variable would be unchanged.

    2. The waiting thread would miss sock->{connected,listening} being
       set to `true`, because it would be reset to `false` in
       tcp_{listen,connect}close_cb() when the connection was so
       short-lived that the socket was closed before we could even
       start WAIT()ing.

* The tcpdns protocol has been converted to use libuv directly.
  Previously, tcpdns was layered on top of the netmgr tcp protocol,
  which proved very complicated to understand, fix, and change.  The
  new tcpdns protocol is modeled similarly to the tcp netmgr protocol.
  Closes: #2194, #2283, #2318, #2266, #2034, #1920

* The tcp and tcpdns protocols no longer use isc_uv_import/isc_uv_export
  to pass accepted TCP sockets between netthreads; instead (similarly to
  UDP) they use a per-netthread uv_loop listener.  This greatly reduces
  the complexity, as the socket always runs in the associated nm and uv
  loops, and we are also no longer touching the libuv internals.

  There's an unfortunate side effect, though: the new code requires
  operating-system support for load-balanced sockets for both UDP and
  TCP (see #2137).  If the operating system doesn't support
  load-balanced sockets (SO_REUSEPORT on Linux or SO_REUSEPORT_LB on
  FreeBSD 12+), the number of netthreads is limited to 1.

* The netmgr now has two debugging #ifdefs:

  1. The already-existing NETMGR_TRACE prints any dangling nmsockets
     and nmhandles before triggering an assertion failure.  This option
     reduces performance when enabled, but in theory it could be
     enabled on low-performance systems.

  2. A new NETMGR_TRACE_VERBOSE option has been added that enables
     extensive netmgr logging, allowing the software engineer to
     precisely track any attach/detach operations on the nmsockets and
     nmhandles.  This is not suitable for any kind of production
     machine, only for debugging.

* The tlsdns netmgr protocol has been split from tcpdns; it still uses
  the old method of stacking the netmgr boxes on top of each other.  We
  will have to refactor the tlsdns netmgr protocol to use the same
  approach: build the stack using only libuv and OpenSSL.

* Limit, but do not assert, the tcp buffer size in tcp_alloc_cb
  Closes: #2061

This commit is contained in:
Ondřej Surý 2020-11-12 10:32:18 +01:00
parent 3a36662207
commit 634bdfb16d
30 changed files with 8057 additions and 3056 deletions

@ -106,6 +106,9 @@
(list
"--enable=all"
"--suppress=missingIncludeSystem"
"--suppress=nullPointerRedundantCheck"
(concat "--suppressions-list=" (expand-file-name
(concat directory-of-current-dir-locals-file "util/suppressions.txt")))
(concat "-include=" (expand-file-name
(concat directory-of-current-dir-locals-file "config.h")))
)

@ -3232,7 +3232,10 @@ tcp_connected(isc_nmhandle_t *handle, isc_result_t eresult, void *arg) {
REQUIRE(DIG_VALID_QUERY(query));
REQUIRE(query->handle == NULL);
REQUIRE(!free_now);
INSIST(!free_now);
debug("tcp_connected(%p, %s, %p)", handle, isc_result_totext(eresult),
query);
LOCK_LOOKUP;
lookup_attach(query->lookup, &l);
@ -3303,7 +3306,10 @@ tcp_connected(isc_nmhandle_t *handle, isc_result_t eresult, void *arg) {
launch_next_query(query);
query_detach(&query);
isc_nmhandle_detach(&handle);
if (l->tls_mode) {
/* FIXME: This is an accounting bug in TLSDNS */
isc_nmhandle_detach(&handle);
}
lookup_detach(&l);
UNLOCK_LOOKUP;
}

@ -8621,8 +8621,7 @@ load_configuration(const char *filename, named_server_t *server,
advertised = MAX_TCP_TIMEOUT;
}
isc_nm_tcp_settimeouts(named_g_nm, initial, idle, keepalive,
advertised);
isc_nm_settimeouts(named_g_nm, initial, idle, keepalive, advertised);
/*
* Configure sets of UDP query source ports.
@ -15950,8 +15949,8 @@ named_server_tcptimeouts(isc_lex_t *lex, isc_buffer_t **text) {
return (ISC_R_UNEXPECTEDEND);
}
isc_nm_tcp_gettimeouts(named_g_nm, &initial, &idle, &keepalive,
&advertised);
isc_nm_gettimeouts(named_g_nm, &initial, &idle, &keepalive,
&advertised);
/* Look for optional arguments. */
ptr = next_token(lex, NULL);
@ -16000,8 +15999,8 @@ named_server_tcptimeouts(isc_lex_t *lex, isc_buffer_t **text) {
result = isc_task_beginexclusive(named_g_server->task);
RUNTIME_CHECK(result == ISC_R_SUCCESS);
isc_nm_tcp_settimeouts(named_g_nm, initial, idle, keepalive,
advertised);
isc_nm_settimeouts(named_g_nm, initial, idle, keepalive,
advertised);
isc_task_endexclusive(named_g_server->task);
}

@ -961,8 +961,6 @@ xfrin_connect_done(isc_nmhandle_t *handle, isc_result_t result, void *cbarg) {
CHECK(xfrin_send_request(xfr));
failure:
isc_nmhandle_detach(&handle);
if (result != ISC_R_SUCCESS && result != ISC_R_SHUTTINGDOWN) {
xfrin_fail(xfr, result, "failed to connect");
}

@ -127,6 +127,7 @@ libisc_la_SOURCES = \
netmgr/netmgr.c \
netmgr/tcp.c \
netmgr/tcpdns.c \
netmgr/tlsdns.c \
netmgr/tls.c \
netmgr/udp.c \
netmgr/uv-compat.c \

@ -18,6 +18,7 @@
#endif /* HAVE_UCHAR_H */
#include <isc/mutex.h>
#include <isc/util.h>
#if !defined(__has_feature)
#define __has_feature(x) 0

@ -111,10 +111,22 @@ isc_nmsocket_close(isc_nmsocket_t **sockp);
* sockets with active handles, the socket will be closed.
*/
#ifdef NETMGR_TRACE
#define isc_nmhandle_attach(handle, dest) \
isc__nmhandle_attach(handle, dest, __FILE__, __LINE__, __func__)
#define isc_nmhandle_detach(handlep) \
isc__nmhandle_detach(handlep, __FILE__, __LINE__, __func__)
#define FLARG , const char *file, unsigned int line, const char *func
#else
#define isc_nmhandle_attach(handle, dest) isc__nmhandle_attach(handle, dest)
#define isc_nmhandle_detach(handlep) isc__nmhandle_detach(handlep)
#define FLARG
#endif
void
isc_nmhandle_attach(isc_nmhandle_t *handle, isc_nmhandle_t **dest);
isc__nmhandle_attach(isc_nmhandle_t *handle, isc_nmhandle_t **dest FLARG);
void
isc_nmhandle_detach(isc_nmhandle_t **handlep);
isc__nmhandle_detach(isc_nmhandle_t **handlep FLARG);
/*%<
* Increment/decrement the reference counter in a netmgr handle,
* but (unlike the attach/detach functions) do not change the pointer
@ -127,6 +139,7 @@ isc_nmhandle_detach(isc_nmhandle_t **handlep);
* otherwise know that the handle was in use and might free it, along
* with the client.)
*/
#undef FLARG
void *
isc_nmhandle_getdata(isc_nmhandle_t *handle);
@ -302,9 +315,6 @@ isc_nm_listentcp(isc_nm_t *mgr, isc_nmiface_t *iface,
* If 'quota' is not NULL, then the socket is attached to the specified
* quota. This allows us to enforce TCP client quota limits.
*
* NOTE: This is currently only called inside isc_nm_listentcpdns(), which
* creates a 'wrapper' socket that sends and receives DNS messages
* prepended with a two-byte length field, and handles buffering.
*/
isc_result_t
@ -326,10 +336,11 @@ isc_nm_tcpconnect(isc_nm_t *mgr, isc_nmiface_t *local, isc_nmiface_t *peer,
*/
isc_result_t
isc_nm_listentcpdns(isc_nm_t *mgr, isc_nmiface_t *iface, isc_nm_recv_cb_t cb,
void *cbarg, isc_nm_accept_cb_t accept_cb,
void *accept_cbarg, size_t extrahandlesize, int backlog,
isc_quota_t *quota, isc_nmsocket_t **sockp);
isc_nm_listentcpdns(isc_nm_t *mgr, isc_nmiface_t *iface,
isc_nm_recv_cb_t recv_cb, void *recv_cbarg,
isc_nm_accept_cb_t accept_cb, void *accept_cbarg,
size_t extrahandlesize, int backlog, isc_quota_t *quota,
isc_nmsocket_t **sockp);
/*%<
* Start listening for DNS messages over the TCP interface 'iface', using
* net manager 'mgr'.
@ -391,8 +402,35 @@ isc_nm_tcpdns_keepalive(isc_nmhandle_t *handle, bool value);
*/
void
isc_nm_tcp_settimeouts(isc_nm_t *mgr, uint32_t init, uint32_t idle,
uint32_t keepalive, uint32_t advertised);
isc_nm_tlsdns_sequential(isc_nmhandle_t *handle);
/*%<
* Disable pipelining on this connection. Each DNS packet will be only
* processed after the previous completes.
*
* The socket must be unpaused after the query is processed.  This is
* done when the response is sent, or, if we're dropping the query, when
* the handle is fully dereferenced by calling the socket's
* closehandle_cb callback.
*
* Note: This can only be run while a message is being processed; if it is
* run before any messages are read, no messages will be read.
*
* Also note: once this has been set, it cannot be reversed for a given
* connection.
*/
void
isc_nm_tlsdns_keepalive(isc_nmhandle_t *handle, bool value);
/*%<
* Enable/disable keepalive on this connection by setting it to 'value'.
*
* When keepalive is active, we switch to using the keepalive timeout
* to determine when to close a connection, rather than the idle timeout.
*/
void
isc_nm_settimeouts(isc_nm_t *mgr, uint32_t init, uint32_t idle,
uint32_t keepalive, uint32_t advertised);
/*%<
* Sets the initial, idle, and keepalive timeout values to use for
* TCP connections, and the timeout value to advertise in responses using
@ -404,8 +442,8 @@ isc_nm_tcp_settimeouts(isc_nm_t *mgr, uint32_t init, uint32_t idle,
*/
void
isc_nm_tcp_gettimeouts(isc_nm_t *mgr, uint32_t *initial, uint32_t *idle,
uint32_t *keepalive, uint32_t *advertised);
isc_nm_gettimeouts(isc_nm_t *mgr, uint32_t *initial, uint32_t *idle,
uint32_t *keepalive, uint32_t *advertised);
/*%<
* Gets the initial, idle, keepalive, or advertised timeout values,
* in tenths of seconds.

@ -31,6 +31,7 @@
#include <isc/atomic.h>
#include <isc/lang.h>
#include <isc/magic.h>
#include <isc/mutex.h>
#include <isc/types.h>
@ -44,6 +45,7 @@ ISC_LANG_BEGINDECLS
typedef struct isc_quota_cb isc_quota_cb_t;
typedef void (*isc_quota_cb_func_t)(isc_quota_t *quota, void *data);
struct isc_quota_cb {
int magic;
isc_quota_cb_func_t cb_func;
void * data;
ISC_LINK(isc_quota_cb_t) link;
@ -51,6 +53,7 @@ struct isc_quota_cb {
/*% isc_quota structure */
struct isc_quota {
int magic;
atomic_uint_fast32_t max;
atomic_uint_fast32_t used;
atomic_uint_fast32_t soft;

@ -64,6 +64,76 @@
void
isc__nm_dump_active(isc_nm_t *nm);
#if defined(__linux__)
#include <syscall.h>
#define gettid() (uint32_t) syscall(SYS_gettid)
#elif defined(_WIN32)
#define gettid() (uint32_t) GetCurrentThreadId()
#else
#define gettid() (uint32_t) pthread_self()
#endif
#ifdef NETMGR_TRACE_VERBOSE
#define NETMGR_TRACE_LOG(format, ...) \
fprintf(stderr, "%" PRIu32 ":%d:%s:%u:%s:" format, gettid(), \
isc_nm_tid(), file, line, func, __VA_ARGS__)
#else
#define NETMGR_TRACE_LOG(format, ...) \
(void)file; \
(void)line; \
(void)func;
#endif
#define FLARG_PASS , file, line, func
#define FLARG \
, const char *file __attribute__((unused)), \
unsigned int line __attribute__((unused)), \
const char *func __attribute__((unused))
#define FLARG_IEVENT(ievent) \
const char *file = ievent->file; \
unsigned int line = ievent->line; \
const char *func = ievent->func;
#define FLARG_IEVENT_PASS(ievent) \
ievent->file = file; \
ievent->line = line; \
ievent->func = func;
#define isc__nm_uvreq_get(req, sock) \
isc___nm_uvreq_get(req, sock, __FILE__, __LINE__, __func__)
#define isc__nm_uvreq_put(req, sock) \
isc___nm_uvreq_put(req, sock, __FILE__, __LINE__, __func__)
#define isc__nmsocket_init(sock, mgr, type, iface) \
isc___nmsocket_init(sock, mgr, type, iface, __FILE__, __LINE__, \
__func__)
#define isc__nmsocket_put(sockp) \
isc___nmsocket_put(sockp, __FILE__, __LINE__, __func__)
#define isc__nmsocket_attach(sock, target) \
isc___nmsocket_attach(sock, target, __FILE__, __LINE__, __func__)
#define isc__nmsocket_detach(socketp) \
isc___nmsocket_detach(socketp, __FILE__, __LINE__, __func__)
#define isc__nmsocket_close(socketp) \
isc___nmsocket_close(socketp, __FILE__, __LINE__, __func__)
#define isc__nmhandle_get(sock, peer, local) \
isc___nmhandle_get(sock, peer, local, __FILE__, __LINE__, __func__)
#define isc__nmsocket_prep_destroy(sock) \
isc___nmsocket_prep_destroy(sock, __FILE__, __LINE__, __func__)
#else
#define NETMGR_TRACE_LOG(format, ...)
#define FLARG_PASS
#define FLARG
#define FLARG_IEVENT(ievent)
#define FLARG_IEVENT_PASS(ievent)
#define isc__nm_uvreq_get(req, sock) isc___nm_uvreq_get(req, sock)
#define isc__nm_uvreq_put(req, sock) isc___nm_uvreq_put(req, sock)
#define isc__nmsocket_init(sock, mgr, type, iface) \
isc___nmsocket_init(sock, mgr, type, iface)
#define isc__nmsocket_put(sockp) isc___nmsocket_put(sockp)
#define isc__nmsocket_attach(sock, target) isc___nmsocket_attach(sock, target)
#define isc__nmsocket_detach(socketp) isc___nmsocket_detach(socketp)
#define isc__nmsocket_close(socketp) isc___nmsocket_close(socketp)
#define isc__nmhandle_get(sock, peer, local) \
isc___nmhandle_get(sock, peer, local)
#define isc__nmsocket_prep_destroy(sock) isc___nmsocket_prep_destroy(sock)
#endif
/*
@ -149,12 +219,13 @@ typedef enum isc__netievent_type {
netievent_tcpsend,
netievent_tcpstartread,
netievent_tcppauseread,
netievent_tcpchildaccept,
netievent_tcpaccept,
netievent_tcpstop,
netievent_tcpcancel,
netievent_tcpclose,
netievent_tcpdnsaccept,
netievent_tcpdnsconnect,
netievent_tcpdnssend,
netievent_tcpdnsread,
netievent_tcpdnscancel,
@ -167,13 +238,20 @@ typedef enum isc__netievent_type {
netievent_tlsconnect,
netievent_tlsdobio,
netievent_closecb,
netievent_tlsdnsaccept,
netievent_tlsdnsconnect,
netievent_tlsdnssend,
netievent_tlsdnsread,
netievent_tlsdnscancel,
netievent_tlsdnsclose,
netievent_tlsdnsstop,
netievent_close,
netievent_shutdown,
netievent_stop,
netievent_pause,
netievent_connectcb,
netievent_acceptcb,
netievent_readcb,
netievent_sendcb,
@ -184,6 +262,7 @@ typedef enum isc__netievent_type {
*/
netievent_udplisten,
netievent_tcplisten,
netievent_tcpdnslisten,
netievent_resume,
netievent_detach,
} isc__netievent_type;
@ -231,40 +310,107 @@ struct isc__nm_uvreq {
ISC_LINK(isc__nm_uvreq_t) link;
};
void *
isc__nm_get_netievent(isc_nm_t *mgr, isc__netievent_type type);
/*%<
* Allocate an ievent and set the type.
*/
void
isc__nm_put_netievent(isc_nm_t *mgr, void *ievent);
/*
* The macros here are used to simulate the "inheritance" in C, there's the base
* netievent structure that contains just its own type and socket, and there are
* extended netievent types that also have handles or requests or other data.
*
* The macros here ensure that:
*
* 1. every netievent type has matching definition, declaration and
* implementation
*
* 2. we handle all the netievent types of the same subclass the same
* way, e.g. if the extended netievent contains a handle, we always
* attach to the handle in the ctor and detach from it in the dtor.
*
* There are three macros here for each netievent subclass:
*
* 1. NETIEVENT_*_TYPE(type) creates the typedef for each type; used below in
* this header
*
* 2. NETIEVENT_*_DECL(type) generates the declaration of the get and put
* functions (isc__nm_get_netievent_* and isc__nm_put_netievent_*); used
* below in this header
*
* 3. NETIEVENT_*_DEF(type) generates the definition of the functions; used
* either in netmgr.c or matching protocol file (e.g. udp.c, tcp.c, etc.)
*/
#define NETIEVENT__SOCKET \
isc__netievent_type type; \
isc_nmsocket_t *sock; \
const char *file; \
unsigned int line; \
const char *func
typedef struct isc__netievent__socket {
isc__netievent_type type;
isc_nmsocket_t *sock;
NETIEVENT__SOCKET;
} isc__netievent__socket_t;
typedef isc__netievent__socket_t isc__netievent_udplisten_t;
typedef isc__netievent__socket_t isc__netievent_udpread_t;
typedef isc__netievent__socket_t isc__netievent_udpstop_t;
typedef isc__netievent__socket_t isc__netievent_udpclose_t;
typedef isc__netievent__socket_t isc__netievent_tcpstop_t;
#define NETIEVENT_SOCKET_TYPE(type) \
typedef isc__netievent__socket_t isc__netievent_##type##_t;
typedef isc__netievent__socket_t isc__netievent_tcpclose_t;
typedef isc__netievent__socket_t isc__netievent_startread_t;
typedef isc__netievent__socket_t isc__netievent_pauseread_t;
typedef isc__netievent__socket_t isc__netievent_closecb_t;
#define NETIEVENT_SOCKET_DECL(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm, isc_nmsocket_t *sock); \
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent);
typedef isc__netievent__socket_t isc__netievent_tcpdnsclose_t;
typedef isc__netievent__socket_t isc__netievent_tcpdnsread_t;
typedef isc__netievent__socket_t isc__netievent_tcpdnsstop_t;
typedef isc__netievent__socket_t isc__netievent_tlsclose_t;
typedef isc__netievent__socket_t isc__netievent_tlsdobio_t;
#define NETIEVENT_SOCKET_DEF(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm, isc_nmsocket_t *sock) { \
isc__netievent_##type##_t *ievent = \
isc__nm_get_netievent(nm, netievent_##type); \
isc__nmsocket_attach(sock, &ievent->sock); \
\
return (ievent); \
} \
\
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent) { \
isc__nmsocket_detach(&ievent->sock); \
isc__nm_put_netievent(nm, ievent); \
}
typedef struct isc__netievent__socket_req {
isc__netievent_type type;
isc_nmsocket_t *sock;
NETIEVENT__SOCKET;
isc__nm_uvreq_t *req;
} isc__netievent__socket_req_t;
typedef isc__netievent__socket_req_t isc__netievent_udpconnect_t;
typedef isc__netievent__socket_req_t isc__netievent_tcpconnect_t;
typedef isc__netievent__socket_req_t isc__netievent_tcplisten_t;
typedef isc__netievent__socket_req_t isc__netievent_tcpsend_t;
typedef isc__netievent__socket_req_t isc__netievent_tcpdnssend_t;
#define NETIEVENT_SOCKET_REQ_TYPE(type) \
typedef isc__netievent__socket_req_t isc__netievent_##type##_t;
#define NETIEVENT_SOCKET_REQ_DECL(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm, isc_nmsocket_t *sock, isc__nm_uvreq_t *req); \
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent);
#define NETIEVENT_SOCKET_REQ_DEF(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm, isc_nmsocket_t *sock, isc__nm_uvreq_t *req) { \
isc__netievent_##type##_t *ievent = \
isc__nm_get_netievent(nm, netievent_##type); \
isc__nmsocket_attach(sock, &ievent->sock); \
ievent->req = req; \
\
return (ievent); \
} \
\
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent) { \
isc__nmsocket_detach(&ievent->sock); \
isc__nm_put_netievent(nm, ievent); \
}
typedef struct isc__netievent__socket_req_result {
isc__netievent_type type;
@ -273,43 +419,100 @@ typedef struct isc__netievent__socket_req_result {
isc_result_t result;
} isc__netievent__socket_req_result_t;
typedef isc__netievent__socket_req_result_t isc__netievent_connectcb_t;
typedef isc__netievent__socket_req_result_t isc__netievent_acceptcb_t;
typedef isc__netievent__socket_req_result_t isc__netievent_readcb_t;
typedef isc__netievent__socket_req_result_t isc__netievent_sendcb_t;
#define NETIEVENT_SOCKET_REQ_RESULT_TYPE(type) \
typedef isc__netievent__socket_req_result_t isc__netievent_##type##_t;
typedef struct isc__netievent__socket_streaminfo_quota {
isc__netievent_type type;
isc_nmsocket_t *sock;
isc_uv_stream_info_t streaminfo;
isc_quota_t *quota;
} isc__netievent__socket_streaminfo_quota_t;
#define NETIEVENT_SOCKET_REQ_RESULT_DECL(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm, isc_nmsocket_t *sock, isc__nm_uvreq_t *req, \
isc_result_t result); \
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent);
typedef isc__netievent__socket_streaminfo_quota_t
isc__netievent_tcpchildaccept_t;
#define NETIEVENT_SOCKET_REQ_RESULT_DEF(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm, isc_nmsocket_t *sock, isc__nm_uvreq_t *req, \
isc_result_t result) { \
isc__netievent_##type##_t *ievent = \
isc__nm_get_netievent(nm, netievent_##type); \
isc__nmsocket_attach(sock, &ievent->sock); \
ievent->req = req; \
ievent->result = result; \
\
return (ievent); \
} \
\
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent) { \
isc__nmsocket_detach(&ievent->sock); \
isc__nm_put_netievent(nm, ievent); \
}
typedef struct isc__netievent__socket_handle {
isc__netievent_type type;
isc_nmsocket_t *sock;
NETIEVENT__SOCKET;
isc_nmhandle_t *handle;
} isc__netievent__socket_handle_t;
typedef isc__netievent__socket_handle_t isc__netievent_udpcancel_t;
typedef isc__netievent__socket_handle_t isc__netievent_tcpcancel_t;
typedef isc__netievent__socket_handle_t isc__netievent_tcpdnscancel_t;
typedef isc__netievent__socket_handle_t isc__netievent_detach_t;
#define NETIEVENT_SOCKET_HANDLE_TYPE(type) \
typedef isc__netievent__socket_handle_t isc__netievent_##type##_t;
#define NETIEVENT_SOCKET_HANDLE_DECL(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm, isc_nmsocket_t *sock, isc_nmhandle_t *handle); \
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent);
#define NETIEVENT_SOCKET_HANDLE_DEF(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm, isc_nmsocket_t *sock, isc_nmhandle_t *handle) { \
isc__netievent_##type##_t *ievent = \
isc__nm_get_netievent(nm, netievent_##type); \
isc__nmsocket_attach(sock, &ievent->sock); \
isc_nmhandle_attach(handle, &ievent->handle); \
\
return (ievent); \
} \
\
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent) { \
isc__nmsocket_detach(&ievent->sock); \
isc_nmhandle_detach(&ievent->handle); \
isc__nm_put_netievent(nm, ievent); \
}
typedef struct isc__netievent__socket_quota {
isc__netievent_type type;
isc_nmsocket_t *sock;
NETIEVENT__SOCKET;
isc_quota_t *quota;
} isc__netievent__socket_quota_t;
typedef isc__netievent__socket_quota_t isc__netievent_tcpaccept_t;
#define NETIEVENT_SOCKET_QUOTA_TYPE(type) \
typedef isc__netievent__socket_quota_t isc__netievent_##type##_t;
#define NETIEVENT_SOCKET_QUOTA_DECL(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm, isc_nmsocket_t *sock, isc_quota_t *quota); \
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent);
#define NETIEVENT_SOCKET_QUOTA_DEF(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm, isc_nmsocket_t *sock, isc_quota_t *quota) { \
isc__netievent_##type##_t *ievent = \
isc__nm_get_netievent(nm, netievent_##type); \
isc__nmsocket_attach(sock, &ievent->sock); \
ievent->quota = quota; \
\
return (ievent); \
} \
\
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent) { \
isc__nmsocket_detach(&ievent->sock); \
isc__nm_put_netievent(nm, ievent); \
}
typedef struct isc__netievent_udpsend {
isc__netievent_type type;
isc_nmsocket_t *sock;
NETIEVENT__SOCKET;
isc_sockaddr_t peer;
isc__nm_uvreq_t *req;
} isc__netievent_udpsend_t;
@ -326,8 +529,26 @@ typedef struct isc__netievent {
isc__netievent_type type;
} isc__netievent_t;
typedef isc__netievent_t isc__netievent_shutdown_t;
typedef isc__netievent_t isc__netievent_stop_t;
#define NETIEVENT_TYPE(type) typedef isc__netievent_t isc__netievent_##type##_t;
#define NETIEVENT_DECL(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type(isc_nm_t *nm); \
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent);
#define NETIEVENT_DEF(type) \
isc__netievent_##type##_t *isc__nm_get_netievent_##type( \
isc_nm_t *nm) { \
isc__netievent_##type##_t *ievent = \
isc__nm_get_netievent(nm, netievent_##type); \
\
return (ievent); \
} \
\
void isc__nm_put_netievent_##type(isc_nm_t *nm, \
isc__netievent_##type##_t *ievent) { \
isc__nm_put_netievent(nm, ievent); \
}
typedef union {
isc__netievent_t ni;
@ -335,7 +556,6 @@ typedef union {
isc__netievent__socket_req_t nisr;
isc__netievent_udpsend_t nius;
isc__netievent__socket_quota_t nisq;
isc__netievent__socket_streaminfo_quota_t nissq;
isc__netievent_tlsconnect_t nitc;
} isc__netievent_storage_t;
@ -405,7 +625,9 @@ typedef enum isc_nmsocket_type {
isc_nm_tcpdnslistener,
isc_nm_tcpdnssocket,
isc_nm_tlslistener,
isc_nm_tlssocket
isc_nm_tlssocket,
isc_nm_tlsdnslistener,
isc_nm_tlsdnssocket
} isc_nmsocket_type;
/*%
@ -440,7 +662,7 @@ struct isc_nmsocket {
isc_nmsocket_t *parent;
/*% Listener socket this connection was accepted on */
isc_nmsocket_t *listener;
/*% Self, for self-contained unreferenced sockets (tcpdns) */
/*% Self socket */
isc_nmsocket_t *self;
/*% TLS stuff */
@ -513,7 +735,7 @@ struct isc_nmsocket {
/* Atomic */
/*% Number of running (e.g. listening) child sockets */
atomic_int_fast32_t rchildren;
uint_fast32_t rchildren;
/*%
* Socket is active if it's listening, working, etc. If it's
@ -532,11 +754,10 @@ struct isc_nmsocket {
atomic_bool closing;
atomic_bool closed;
atomic_bool listening;
atomic_bool listen_error;
atomic_bool connecting;
atomic_bool connected;
atomic_bool connect_error;
bool accepting;
bool reading;
isc_refcount_t references;
/*%
@ -550,17 +771,10 @@ struct isc_nmsocket {
atomic_bool sequential;
/*%
* TCPDNS socket has exceeded the maximum number of
* simultaneous requests per connection, so will be temporarily
* restricted from pipelining.
* The socket is processing a read callback; this is a guard so we do
* not read more data before the readcb has returned.
*/
atomic_bool overlimit;
/*%
* TCPDNS socket in sequential mode is currently processing a packet,
* we need to wait until it finishes.
*/
atomic_bool processing;
bool processing;
/*%
* A TCP socket has had isc_nm_pauseread() called.
@ -584,14 +798,41 @@ struct isc_nmsocket {
* Used to wait for TCP listening events to complete, and
* for the number of running children to reach zero during
* shutdown.
*
* We use two condition variables to prevent the race where the netmgr
* threads would be able to finish and destroy the socket before it's
* unlocked by the isc_nm_listen<proto>() function. So, the flow is as
* follows:
*
* 1. parent thread creates all children sockets and passes them to
* netthreads, looks at the signaling variable and WAIT(cond)s until
* the children are done initializing
*
* 2. the events get picked by netthreads, calls the libuv API (and
* either succeeds or fails) and WAIT(scond) until all other
* children sockets in netthreads are initialized and the listening
* socket lock is unlocked
*
* 3. the control is given back to the parent thread, which now either
* returns success or shuts down the listener if an error has
* occurred in a children netthread
*
* NOTE: The other approach would be to do an extra attach to the parent
* listening socket and then detach it in the parent thread, but that
* breaks the promise that once the libuv socket is initialized on the
* nmsocket, the nmsocket must be handled only by the matching
* netthread; in fact that would add complexity, in that
* isc__nmsocket_detach would have to be converted to use an
* asynchronous netievent.
*/
isc_mutex_t lock;
isc_condition_t cond;
isc_condition_t scond;
/*%
* Used to pass a result back from listen or connect events.
*/
atomic_int_fast32_t result;
isc_result_t result;
/*%
* List of active handles.
@ -631,14 +872,18 @@ struct isc_nmsocket {
*/
isc_nm_opaquecb_t closehandle_cb;
isc_nmhandle_t *recv_handle;
isc_nm_recv_cb_t recv_cb;
void *recv_cbarg;
bool recv_read;
isc_nm_cb_t connect_cb;
void *connect_cbarg;
isc_nm_accept_cb_t accept_cb;
void *accept_cbarg;
atomic_int_fast32_t active_child_connections;
#ifdef NETMGR_TRACE
void *backtrace[TRACE_SIZE];
int backtrace_size;
@ -653,13 +898,12 @@ isc__nm_in_netthread(void);
* Returns 'true' if we're in the network thread.
*/
void *
isc__nm_get_ievent(isc_nm_t *mgr, isc__netievent_type type);
/*%<
* Allocate an ievent and set the type.
*/
void
isc__nm_put_ievent(isc_nm_t *mgr, void *ievent);
isc__nm_maybe_enqueue_ievent(isc__networker_t *worker, isc__netievent_t *event);
/*%<
* If the caller is already in the matching nmthread, process the netievent
* directly, if not enqueue using isc__nm_enqueue_ievent().
*/
void
isc__nm_enqueue_ievent(isc__networker_t *worker, isc__netievent_t *event);
@ -679,8 +923,8 @@ isc__nm_free_uvbuf(isc_nmsocket_t *sock, const uv_buf_t *buf);
*/
isc_nmhandle_t *
isc__nmhandle_get(isc_nmsocket_t *sock, isc_sockaddr_t *peer,
isc_sockaddr_t *local);
isc___nmhandle_get(isc_nmsocket_t *sock, isc_sockaddr_t *peer,
isc_sockaddr_t *local FLARG);
/*%<
* Get a handle for the socket 'sock', allocating a new one
* if there isn't one available in 'sock->inactivehandles'.
@ -696,14 +940,14 @@ isc__nmhandle_get(isc_nmsocket_t *sock, isc_sockaddr_t *peer,
*/
isc__nm_uvreq_t *
isc__nm_uvreq_get(isc_nm_t *mgr, isc_nmsocket_t *sock);
isc___nm_uvreq_get(isc_nm_t *mgr, isc_nmsocket_t *sock FLARG);
/*%<
* Get a UV request structure for the socket 'sock', allocating a
* new one if there isn't one available in 'sock->inactivereqs'.
*/
void
isc__nm_uvreq_put(isc__nm_uvreq_t **req, isc_nmsocket_t *sock);
isc___nm_uvreq_put(isc__nm_uvreq_t **req, isc_nmsocket_t *sock FLARG);
/*%<
* Completes the use of a UV request structure, setting '*req' to NULL.
*
@ -712,28 +956,28 @@ isc__nm_uvreq_put(isc__nm_uvreq_t **req, isc_nmsocket_t *sock);
*/
void
isc__nmsocket_init(isc_nmsocket_t *sock, isc_nm_t *mgr, isc_nmsocket_type type,
isc_nmiface_t *iface);
isc___nmsocket_init(isc_nmsocket_t *sock, isc_nm_t *mgr, isc_nmsocket_type type,
isc_nmiface_t *iface FLARG);
/*%<
* Initialize socket 'sock', attach it to 'mgr', and set it to type 'type'
* and its interface to 'iface'.
*/
void
isc__nmsocket_attach(isc_nmsocket_t *sock, isc_nmsocket_t **target);
isc___nmsocket_attach(isc_nmsocket_t *sock, isc_nmsocket_t **target FLARG);
/*%<
* Attach to a socket, increasing refcount
*/
void
isc__nmsocket_detach(isc_nmsocket_t **socketp);
isc___nmsocket_detach(isc_nmsocket_t **socketp FLARG);
/*%<
* Detach from socket, decreasing refcount and possibly destroying the
* socket if it's no longer referenced.
*/
void
isc__nmsocket_prep_destroy(isc_nmsocket_t *sock);
isc___nmsocket_prep_destroy(isc_nmsocket_t *sock FLARG);
/*%<
* Mark 'sock' as inactive, close it if necessary, and destroy it
* if there are no remaining references or active handles.
@ -771,17 +1015,14 @@ void
isc__nm_async_connectcb(isc__networker_t *worker, isc__netievent_t *ev0);
/*%<
* Issue a connect callback on the socket, used to call the callback
* on failed conditions when the event can't be scheduled on the uv loop.
*/
void
isc_result_t
isc__nm_acceptcb(isc_nmsocket_t *sock, isc__nm_uvreq_t *uvreq,
isc_result_t eresult);
void
isc__nm_async_acceptcb(isc__networker_t *worker, isc__netievent_t *ev0);
/*%<
* Issue an accept callback on the socket, used to call the callback
* on failed conditions when the event can't be scheduled on the uv loop.
* Issue a synchronous accept callback on the socket.
*/
void
@ -806,12 +1047,6 @@ isc__nm_async_sendcb(isc__networker_t *worker, isc__netievent_t *ev0);
* on failed conditions when the event can't be scheduled on the uv loop.
*/
void
isc__nm_async_closecb(isc__networker_t *worker, isc__netievent_t *ev0);
/*%<
* Issue a 'handle closed' callback on the socket.
*/
void
isc__nm_async_shutdown(isc__networker_t *worker, isc__netievent_t *ev0);
/*%<
@ -900,13 +1135,13 @@ isc__nm_tcp_close(isc_nmsocket_t *sock);
* Close a TCP socket.
*/
void
isc__nm_tcp_pauseread(isc_nmsocket_t *sock);
isc__nm_tcp_pauseread(isc_nmhandle_t *handle);
/*%<
* Pause reading on this socket, while still remembering the callback.
* Pause reading on this handle, while still remembering the callback.
*/
void
isc__nm_tcp_resumeread(isc_nmsocket_t *sock);
isc__nm_tcp_resumeread(isc_nmhandle_t *handle);
/*%<
* Resume reading from socket.
*
@ -931,6 +1166,12 @@ isc__nm_tcp_stoplistening(isc_nmsocket_t *sock);
* Stop listening on 'sock'.
*/
int_fast32_t
isc__nm_tcp_listener_nactive(isc_nmsocket_t *sock);
/*%<
* Returns the number of active connections for the TCP listener socket.
*/
void
isc__nm_tcp_settimeout(isc_nmhandle_t *handle, uint32_t timeout);
/*%<
@ -944,8 +1185,6 @@ isc__nm_async_tcplisten(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcpaccept(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcpchildaccept(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcpstop(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcpsend(isc__networker_t *worker, isc__netievent_t *ev0);
@ -954,9 +1193,9 @@ isc__nm_async_startread(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_pauseread(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcp_startread(isc__networker_t *worker, isc__netievent_t *ev0);
isc__nm_async_tcpstartread(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcp_pauseread(isc__networker_t *worker, isc__netievent_t *ev0);
isc__nm_async_tcppauseread(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcpcancel(isc__networker_t *worker, isc__netievent_t *ev0);
void
@ -976,15 +1215,21 @@ void
isc__nm_async_tlsconnect(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tls_startread(isc__networker_t *worker, isc__netievent_t *ev0);
isc__nm_async_tlsstartread(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tls_do_bio(isc__networker_t *worker, isc__netievent_t *ev0);
isc__nm_async_tlsdobio(isc__networker_t *worker, isc__netievent_t *ev0);
/*%<
* Callback handlers for asynchronous TLS events.
*/
void
isc__nm_async_tcpdnsaccept(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcpdnsconnect(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcpdnslisten(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_tcpdns_send(isc_nmhandle_t *handle, isc_region_t *region,
isc_nm_cb_t cb, void *cbarg);
@ -992,6 +1237,9 @@ isc__nm_tcpdns_send(isc_nmhandle_t *handle, isc_region_t *region,
* Back-end implementation of isc_nm_send() for TCPDNS handles.
*/
void
isc__nm_tcpdns_shutdown(isc_nmsocket_t *sock);
void
isc__nm_tcpdns_close(isc_nmsocket_t *sock);
/*%<
@ -1011,6 +1259,10 @@ isc__nm_tcpdns_settimeout(isc_nmhandle_t *handle, uint32_t timeout);
* associated with 'handle', and the TCP socket it wraps around.
*/
void
isc__nm_async_tcpdnslisten(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcpdnsaccept(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tcpdnscancel(isc__networker_t *worker, isc__netievent_t *ev0);
void
@ -1032,6 +1284,56 @@ isc__nm_tcpdns_cancelread(isc_nmhandle_t *handle);
* Stop reading on a connected TCPDNS handle.
*/
void
isc__nm_tlsdns_send(isc_nmhandle_t *handle, isc_region_t *region,
isc_nm_cb_t cb, void *cbarg);
/*%<
* Back-end implementation of isc_nm_send() for TLSDNS handles.
*/
void
isc__nm_tlsdns_shutdown(isc_nmsocket_t *sock);
void
isc__nm_tlsdns_close(isc_nmsocket_t *sock);
/*%<
* Close a TLSDNS socket.
*/
void
isc__nm_tlsdns_stoplistening(isc_nmsocket_t *sock);
/*%<
* Stop listening on 'sock'.
*/
void
isc__nm_tlsdns_settimeout(isc_nmhandle_t *handle, uint32_t timeout);
/*%<
* Set the read timeout and reset the timer for the TLSDNS socket
* associated with 'handle', and the TCP socket it wraps around.
*/
void
isc__nm_async_tlsdnscancel(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tlsdnsclose(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tlsdnssend(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tlsdnsstop(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_async_tlsdnsread(isc__networker_t *worker, isc__netievent_t *ev0);
void
isc__nm_tlsdns_read(isc_nmhandle_t *handle, isc_nm_recv_cb_t cb, void *cbarg);
void
isc__nm_tlsdns_cancelread(isc_nmhandle_t *handle);
/*%<
* Stop reading on a connected TLSDNS handle.
*/
void
isc__nm_tls_send(isc_nmhandle_t *handle, isc_region_t *region, isc_nm_cb_t cb,
void *cbarg);
@ -1046,15 +1348,15 @@ isc__nm_tls_close(isc_nmsocket_t *sock);
*/
void
isc__nm_tls_pauseread(isc_nmsocket_t *sock);
isc__nm_tls_pauseread(isc_nmhandle_t *handle);
/*%<
* Pause reading on this socket, while still remembering the callback.
* Pause reading on this handle, while still remembering the callback.
*/
void
isc__nm_tls_resumeread(isc_nmsocket_t *sock);
isc__nm_tls_resumeread(isc_nmhandle_t *handle);
/*%<
* Resume reading from socket.
* Resume reading from the handle.
*
*/
@ -1062,10 +1364,10 @@ void
isc__nm_tls_stoplistening(isc_nmsocket_t *sock);
#define isc__nm_uverr2result(x) \
isc___nm_uverr2result(x, true, __FILE__, __LINE__)
isc___nm_uverr2result(x, true, __FILE__, __LINE__, __func__)
isc_result_t
isc___nm_uverr2result(int uverr, bool dolog, const char *file,
unsigned int line);
unsigned int line, const char *func);
/*%<
* Convert a libuv error value into an isc_result_t. The
* list of supported error values is not complete; new users
@ -1109,6 +1411,12 @@ isc__nm_socket(int domain, int type, int protocol, uv_os_sock_t *sockp);
* Platform-independent socket() version
*/
void
isc__nm_closesocket(uv_os_sock_t sock);
/*%<
* Platform-independent closesocket() version
*/
isc_result_t
isc__nm_socket_freebind(uv_os_sock_t fd, sa_family_t sa_family);
/*%<
@ -1139,8 +1447,124 @@ isc__nm_socket_dontfrag(uv_os_sock_t fd, sa_family_t sa_family);
* Set the SO_IP_DONTFRAG (or equivalent) socket option of the fd if available
*/
isc_result_t
isc__nm_socket_connectiontimeout(uv_os_sock_t fd, int timeout_ms);
/*%<
* Set the connection timeout in milliseconds; on non-Linux platforms
* the minimum value must be at least 1000 (1 second).
*/
void
isc__nm_tls_initialize(void);
/*%<
* Initialize OpenSSL library, idempotent.
*/
/*
* typedef all the netievent types
*/
NETIEVENT_SOCKET_TYPE(close);
NETIEVENT_SOCKET_TYPE(tcpclose);
NETIEVENT_SOCKET_TYPE(tcplisten);
NETIEVENT_SOCKET_TYPE(tcppauseread);
NETIEVENT_SOCKET_TYPE(tcpstop);
NETIEVENT_SOCKET_TYPE(tlsclose);
/* NETIEVENT_SOCKET_TYPE(tlsconnect); */ /* unique type, defined independently */
NETIEVENT_SOCKET_TYPE(tlsdobio);
NETIEVENT_SOCKET_TYPE(tlsstartread);
NETIEVENT_SOCKET_TYPE(udpclose);
NETIEVENT_SOCKET_TYPE(udplisten);
NETIEVENT_SOCKET_TYPE(udpread);
/* NETIEVENT_SOCKET_TYPE(udpsend); */ /* unique type, defined independently */
NETIEVENT_SOCKET_TYPE(udpstop);
NETIEVENT_SOCKET_TYPE(tcpdnsclose);
NETIEVENT_SOCKET_TYPE(tcpdnsread);
NETIEVENT_SOCKET_TYPE(tcpdnsstop);
NETIEVENT_SOCKET_TYPE(tcpdnslisten);
NETIEVENT_SOCKET_REQ_TYPE(tcpdnsconnect);
NETIEVENT_SOCKET_REQ_TYPE(tcpdnssend);
NETIEVENT_SOCKET_HANDLE_TYPE(tcpdnscancel);
NETIEVENT_SOCKET_QUOTA_TYPE(tcpdnsaccept);
NETIEVENT_SOCKET_TYPE(tlsdnsclose);
NETIEVENT_SOCKET_TYPE(tlsdnsread);
NETIEVENT_SOCKET_TYPE(tlsdnsstop);
NETIEVENT_SOCKET_REQ_TYPE(tlsdnssend);
NETIEVENT_SOCKET_HANDLE_TYPE(tlsdnscancel);
NETIEVENT_SOCKET_REQ_TYPE(tcpconnect);
NETIEVENT_SOCKET_REQ_TYPE(tcpsend);
NETIEVENT_SOCKET_TYPE(tcpstartread);
NETIEVENT_SOCKET_REQ_TYPE(tlssend);
NETIEVENT_SOCKET_REQ_TYPE(udpconnect);
NETIEVENT_SOCKET_REQ_RESULT_TYPE(connectcb);
NETIEVENT_SOCKET_REQ_RESULT_TYPE(readcb);
NETIEVENT_SOCKET_REQ_RESULT_TYPE(sendcb);
NETIEVENT_SOCKET_HANDLE_TYPE(detach);
NETIEVENT_SOCKET_HANDLE_TYPE(tcpcancel);
NETIEVENT_SOCKET_HANDLE_TYPE(udpcancel);
NETIEVENT_SOCKET_QUOTA_TYPE(tcpaccept);
NETIEVENT_TYPE(pause);
NETIEVENT_TYPE(resume);
NETIEVENT_TYPE(shutdown);
NETIEVENT_TYPE(stop);
/* Now declare the helper functions */
NETIEVENT_SOCKET_DECL(close);
NETIEVENT_SOCKET_DECL(tcpclose);
NETIEVENT_SOCKET_DECL(tcplisten);
NETIEVENT_SOCKET_DECL(tcppauseread);
NETIEVENT_SOCKET_DECL(tcpstartread);
NETIEVENT_SOCKET_DECL(tcpstop);
NETIEVENT_SOCKET_DECL(tlsclose);
NETIEVENT_SOCKET_DECL(tlsconnect);
NETIEVENT_SOCKET_DECL(tlsdobio);
NETIEVENT_SOCKET_DECL(tlsstartread);
NETIEVENT_SOCKET_DECL(udpclose);
NETIEVENT_SOCKET_DECL(udplisten);
NETIEVENT_SOCKET_DECL(udpread);
NETIEVENT_SOCKET_DECL(udpsend);
NETIEVENT_SOCKET_DECL(udpstop);
NETIEVENT_SOCKET_DECL(tcpdnsclose);
NETIEVENT_SOCKET_DECL(tcpdnsread);
NETIEVENT_SOCKET_DECL(tcpdnsstop);
NETIEVENT_SOCKET_DECL(tcpdnslisten);
NETIEVENT_SOCKET_REQ_DECL(tcpdnsconnect);
NETIEVENT_SOCKET_REQ_DECL(tcpdnssend);
NETIEVENT_SOCKET_HANDLE_DECL(tcpdnscancel);
NETIEVENT_SOCKET_QUOTA_DECL(tcpdnsaccept);
NETIEVENT_SOCKET_DECL(tlsdnsclose);
NETIEVENT_SOCKET_DECL(tlsdnsread);
NETIEVENT_SOCKET_DECL(tlsdnsstop);
NETIEVENT_SOCKET_REQ_DECL(tlsdnssend);
NETIEVENT_SOCKET_HANDLE_DECL(tlsdnscancel);
NETIEVENT_SOCKET_REQ_DECL(tcpconnect);
NETIEVENT_SOCKET_REQ_DECL(tcpsend);
NETIEVENT_SOCKET_REQ_DECL(tlssend);
NETIEVENT_SOCKET_REQ_DECL(udpconnect);
NETIEVENT_SOCKET_REQ_RESULT_DECL(connectcb);
NETIEVENT_SOCKET_REQ_RESULT_DECL(readcb);
NETIEVENT_SOCKET_REQ_RESULT_DECL(sendcb);
NETIEVENT_SOCKET_HANDLE_DECL(udpcancel);
NETIEVENT_SOCKET_HANDLE_DECL(tcpcancel);
NETIEVENT_SOCKET_DECL(detach);
NETIEVENT_SOCKET_QUOTA_DECL(tcpaccept);
NETIEVENT_DECL(pause);
NETIEVENT_DECL(resume);
NETIEVENT_DECL(shutdown);
NETIEVENT_DECL(stop);

3 file diffs suppressed because they are too large
@ -91,8 +91,7 @@ tls_senddone(isc_nmhandle_t *handle, isc_result_t eresult, void *cbarg) {
static void
async_tls_do_bio(isc_nmsocket_t *sock) {
isc__netievent_tlsdobio_t *ievent =
isc__nm_get_ievent(sock->mgr, netievent_tlsdobio);
ievent->sock = sock;
isc__nm_get_netievent_tlsdobio(sock->mgr, sock);
isc__nm_enqueue_ievent(&sock->mgr->workers[sock->tid],
(isc__netievent_t *)ievent);
}
@ -314,10 +313,11 @@ initialize_tls(isc_nmsocket_t *sock, bool server) {
static isc_result_t
tlslisten_acceptcb(isc_nmhandle_t *handle, isc_result_t result, void *cbarg) {
REQUIRE(VALID_NMSOCK(cbarg));
isc_nmsocket_t *tlslistensock = (isc_nmsocket_t *)cbarg;
isc_nmsocket_t *tlssock = NULL;
int r;
REQUIRE(VALID_NMSOCK(tlslistensock));
REQUIRE(tlslistensock->type == isc_nm_tlslistener);
/* If accept() was unsuccessful we can't do anything */
@ -350,8 +350,10 @@ tlslisten_acceptcb(isc_nmhandle_t *handle, isc_result_t result, void *cbarg) {
return (ISC_R_TLSERROR);
}
uv_timer_init(&tlssock->mgr->workers[isc_nm_tid()].loop,
&tlssock->timer);
r = uv_timer_init(&tlssock->mgr->workers[isc_nm_tid()].loop,
&tlssock->timer);
RUNTIME_CHECK(r == 0);
tlssock->timer.data = tlssock;
tlssock->timer_initialized = true;
tlssock->tls.ctx = tlslistensock->tls.ctx;
@ -410,7 +412,8 @@ isc__nm_async_tlssend(isc__networker_t *worker, isc__netievent_t *ev0) {
isc__nm_uvreq_t *req = ievent->req;
ievent->req = NULL;
REQUIRE(VALID_UVREQ(req));
REQUIRE(worker->id == sock->tid);
REQUIRE(sock->tid == isc_nm_tid());
UNUSED(worker);
if (inactive(sock)) {
req->cb.send(req->handle, ISC_R_CANCELED, req->cbarg);
@ -449,7 +452,7 @@ isc__nm_async_tlssend(isc__networker_t *worker, isc__netievent_t *ev0) {
void
isc__nm_tls_send(isc_nmhandle_t *handle, isc_region_t *region, isc_nm_cb_t cb,
void *cbarg) {
isc__netievent_tcpsend_t *ievent = NULL;
isc__netievent_tlssend_t *ievent = NULL;
isc__nm_uvreq_t *uvreq = NULL;
isc_nmsocket_t *sock = NULL;
REQUIRE(VALID_NMHANDLE(handle));
@ -475,60 +478,61 @@ isc__nm_tls_send(isc_nmhandle_t *handle, isc_region_t *region, isc_nm_cb_t cb,
/*
* We need to create an event and pass it using async channel
*/
ievent = isc__nm_get_ievent(sock->mgr, netievent_tlssend);
ievent->sock = sock;
ievent->req = uvreq;
ievent = isc__nm_get_netievent_tlssend(sock->mgr, sock, uvreq);
isc__nm_enqueue_ievent(&sock->mgr->workers[sock->tid],
(isc__netievent_t *)ievent);
}
void
isc__nm_async_tls_startread(isc__networker_t *worker, isc__netievent_t *ev0) {
isc__netievent_startread_t *ievent = (isc__netievent_startread_t *)ev0;
isc__nm_async_tlsstartread(isc__networker_t *worker, isc__netievent_t *ev0) {
isc__netievent_tlsstartread_t *ievent =
(isc__netievent_tlsstartread_t *)ev0;
isc_nmsocket_t *sock = ievent->sock;
REQUIRE(worker->id == isc_nm_tid());
REQUIRE(sock->tid == isc_nm_tid());
UNUSED(worker);
tls_do_bio(sock);
}
void
isc__nm_tls_read(isc_nmhandle_t *handle, isc_nm_recv_cb_t cb, void *cbarg) {
isc_nmsocket_t *sock = NULL;
isc__netievent_startread_t *ievent = NULL;
REQUIRE(VALID_NMHANDLE(handle));
REQUIRE(VALID_NMSOCK(handle->sock));
REQUIRE(handle->sock->statichandle == handle);
REQUIRE(handle->sock->tid == isc_nm_tid());
sock = handle->sock;
REQUIRE(sock->statichandle == handle);
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(sock->recv_cb == NULL);
REQUIRE(sock->tid == isc_nm_tid());
isc__netievent_tlsstartread_t *ievent = NULL;
isc_nmsocket_t *sock = handle->sock;
if (inactive(sock)) {
cb(handle, ISC_R_NOTCONNECTED, NULL, cbarg);
return;
}
sock = handle->sock;
sock->recv_cb = cb;
sock->recv_cbarg = cbarg;
ievent = isc__nm_get_ievent(sock->mgr, netievent_tlsstartread);
ievent->sock = sock;
ievent = isc__nm_get_netievent_tlsstartread(sock->mgr, sock);
isc__nm_enqueue_ievent(&sock->mgr->workers[sock->tid],
(isc__netievent_t *)ievent);
}
void
isc__nm_tls_pauseread(isc_nmsocket_t *sock) {
isc__nm_tls_pauseread(isc_nmhandle_t *handle) {
REQUIRE(VALID_NMHANDLE(handle));
REQUIRE(VALID_NMSOCK(handle->sock));
isc_nmsocket_t *sock = handle->sock;
atomic_store(&sock->readpaused, true);
}
void
isc__nm_tls_resumeread(isc_nmsocket_t *sock) {
isc__nm_tls_resumeread(isc_nmhandle_t *handle) {
REQUIRE(VALID_NMHANDLE(handle));
REQUIRE(VALID_NMSOCK(handle->sock));
isc_nmsocket_t *sock = handle->sock;
atomic_store(&sock->readpaused, false);
async_tls_do_bio(sock);
}
@ -536,12 +540,12 @@ isc__nm_tls_resumeread(isc_nmsocket_t *sock) {
static void
timer_close_cb(uv_handle_t *handle) {
isc_nmsocket_t *sock = (isc_nmsocket_t *)uv_handle_get_data(handle);
INSIST(VALID_NMSOCK(sock));
tls_close_direct(sock);
}
static void
tls_close_direct(isc_nmsocket_t *sock) {
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(sock->tid == isc_nm_tid());
if (sock->timer_running) {
@ -602,9 +606,7 @@ isc__nm_tls_close(isc_nmsocket_t *sock) {
tls_close_direct(sock);
} else {
isc__netievent_tlsclose_t *ievent =
isc__nm_get_ievent(sock->mgr, netievent_tlsclose);
ievent->sock = sock;
isc__nm_get_netievent_tlsclose(sock->mgr, sock);
isc__nm_enqueue_ievent(&sock->mgr->workers[sock->tid],
(isc__netievent_t *)ievent);
}
@ -614,7 +616,8 @@ void
isc__nm_async_tlsclose(isc__networker_t *worker, isc__netievent_t *ev0) {
isc__netievent_tlsclose_t *ievent = (isc__netievent_tlsclose_t *)ev0;
REQUIRE(worker->id == ievent->sock->tid);
REQUIRE(ievent->sock->tid == isc_nm_tid());
UNUSED(worker);
tls_close_direct(ievent->sock);
}
@ -644,7 +647,7 @@ isc_result_t
isc_nm_tlsconnect(isc_nm_t *mgr, isc_nmiface_t *local, isc_nmiface_t *peer,
isc_nm_cb_t cb, void *cbarg, SSL_CTX *ctx,
unsigned int timeout, size_t extrahandlesize) {
isc_nmsocket_t *nsock = NULL, *tmp = NULL;
isc_nmsocket_t *nsock = NULL;
isc__netievent_tlsconnect_t *ievent = NULL;
isc_result_t result = ISC_R_SUCCESS;
@ -653,7 +656,7 @@ isc_nm_tlsconnect(isc_nm_t *mgr, isc_nmiface_t *local, isc_nmiface_t *peer,
nsock = isc_mem_get(mgr->mctx, sizeof(*nsock));
isc__nmsocket_init(nsock, mgr, isc_nm_tlssocket, local);
nsock->extrahandlesize = extrahandlesize;
atomic_init(&nsock->result, ISC_R_SUCCESS);
nsock->result = ISC_R_SUCCESS;
nsock->connect_cb = cb;
nsock->connect_cbarg = cbarg;
nsock->connect_timeout = timeout;
@ -667,31 +670,22 @@ isc_nm_tlsconnect(isc_nm_t *mgr, isc_nmiface_t *local, isc_nmiface_t *peer,
return (ISC_R_TLSERROR);
}
ievent = isc__nm_get_ievent(mgr, netievent_tlsconnect);
ievent->sock = nsock;
ievent = isc__nm_get_netievent_tlsconnect(mgr, nsock);
ievent->local = local->addr;
ievent->peer = peer->addr;
ievent->ctx = ctx;
/*
* Async callbacks can dereference the socket in the meantime,
* we need to hold an additional reference to it.
*/
isc__nmsocket_attach(nsock, &tmp);
if (isc__nm_in_netthread()) {
nsock->tid = isc_nm_tid();
isc__nm_async_tlsconnect(&mgr->workers[nsock->tid],
(isc__netievent_t *)ievent);
isc__nm_put_ievent(mgr, ievent);
isc__nm_put_netievent_tlsconnect(mgr, ievent);
} else {
nsock->tid = isc_random_uniform(mgr->nworkers);
isc__nm_enqueue_ievent(&mgr->workers[nsock->tid],
(isc__netievent_t *)ievent);
}
isc__nmsocket_detach(&tmp);
return (result);
}
@ -703,8 +697,9 @@ tls_connect_cb(isc_nmhandle_t *handle, isc_result_t result, void *cbarg) {
if (result != ISC_R_SUCCESS) {
tlssock->connect_cb(handle, result, tlssock->connect_cbarg);
atomic_store(&tlssock->result, result);
atomic_store(&tlssock->connect_error, true);
LOCK(&tlssock->parent->lock);
tlssock->parent->result = result;
UNLOCK(&tlssock->parent->lock);
tls_close_direct(tlssock);
return;
}
@ -716,8 +711,9 @@ tls_connect_cb(isc_nmhandle_t *handle, isc_result_t result, void *cbarg) {
result = initialize_tls(tlssock, false);
if (result != ISC_R_SUCCESS) {
tlssock->connect_cb(handle, result, tlssock->connect_cbarg);
atomic_store(&tlssock->result, result);
atomic_store(&tlssock->connect_error, true);
LOCK(&tlssock->parent->lock);
tlssock->parent->result = result;
UNLOCK(&tlssock->parent->lock);
tls_close_direct(tlssock);
return;
}
@ -728,12 +724,15 @@ isc__nm_async_tlsconnect(isc__networker_t *worker, isc__netievent_t *ev0) {
(isc__netievent_tlsconnect_t *)ev0;
isc_nmsocket_t *tlssock = ievent->sock;
isc_result_t result;
int r;
UNUSED(worker);
tlssock->tid = isc_nm_tid();
uv_timer_init(&tlssock->mgr->workers[isc_nm_tid()].loop,
&tlssock->timer);
r = uv_timer_init(&tlssock->mgr->workers[isc_nm_tid()].loop,
&tlssock->timer);
RUNTIME_CHECK(r == 0);
tlssock->timer.data = tlssock;
tlssock->timer_initialized = true;
tlssock->tls.state = TLS_INIT;
@ -745,15 +744,16 @@ isc__nm_async_tlsconnect(isc__networker_t *worker, isc__netievent_t *ev0) {
if (result != ISC_R_SUCCESS) {
/* FIXME: We need to pass valid handle */
tlssock->connect_cb(NULL, result, tlssock->connect_cbarg);
atomic_store(&tlssock->result, result);
atomic_store(&tlssock->connect_error, true);
LOCK(&tlssock->parent->lock);
tlssock->parent->result = result;
UNLOCK(&tlssock->parent->lock);
tls_close_direct(tlssock);
return;
}
}
void
isc__nm_async_tls_do_bio(isc__networker_t *worker, isc__netievent_t *ev0) {
isc__nm_async_tlsdobio(isc__networker_t *worker, isc__netievent_t *ev0) {
UNUSED(worker);
isc__netievent_tlsdobio_t *ievent = (isc__netievent_tlsdobio_t *)ev0;
tls_do_bio(ievent->sock);

lib/isc/netmgr/tlsdns.c (new file, 984 lines)

@ -0,0 +1,984 @@
/*
* Copyright (C) Internet Systems Consortium, Inc. ("ISC")
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, you can obtain one at https://mozilla.org/MPL/2.0/.
*
* See the COPYRIGHT file distributed with this work for additional
* information regarding copyright ownership.
*/
#include <unistd.h>
#include <uv.h>
#include <isc/atomic.h>
#include <isc/buffer.h>
#include <isc/condition.h>
#include <isc/magic.h>
#include <isc/mem.h>
#include <isc/netmgr.h>
#include <isc/random.h>
#include <isc/refcount.h>
#include <isc/region.h>
#include <isc/result.h>
#include <isc/sockaddr.h>
#include <isc/thread.h>
#include <isc/util.h>
#include "netmgr-int.h"
#include "uv-compat.h"
#define TLSDNS_CLIENTS_PER_CONN 23
/*%<
*
* Maximum number of simultaneous handles in flight supported for a single
* connected TLSDNS socket. This value was chosen arbitrarily, and may be
* changed in the future.
*/
static void
dnslisten_readcb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *arg);
static void
resume_processing(void *arg);
static void
tlsdns_close_direct(isc_nmsocket_t *sock);
static inline size_t
dnslen(unsigned char *base) {
return ((base[0] << 8) + (base[1]));
}
/*
* COMPAT CODE
*/
static void *
isc__nm_get_ievent(isc_nm_t *mgr, isc__netievent_type type) {
isc__netievent_storage_t *event = isc_mempool_get(mgr->evpool);
*event = (isc__netievent_storage_t){ .ni.type = type };
return (event);
}
/*
* Regular TCP buffer, should suffice in most cases.
*/
#define NM_REG_BUF 4096
/*
* Two full DNS packets with lengths.
* netmgr receives 64k at most so there's no risk
* of overrun.
*/
#define NM_BIG_BUF ((65535 + 2) * 2)
static inline void
alloc_dnsbuf(isc_nmsocket_t *sock, size_t len) {
REQUIRE(len <= NM_BIG_BUF);
if (sock->buf == NULL) {
/* We don't have the buffer at all */
size_t alloc_len = len < NM_REG_BUF ? NM_REG_BUF : NM_BIG_BUF;
sock->buf = isc_mem_allocate(sock->mgr->mctx, alloc_len);
sock->buf_size = alloc_len;
} else {
/* We have the buffer but it's too small */
sock->buf = isc_mem_reallocate(sock->mgr->mctx, sock->buf,
NM_BIG_BUF);
sock->buf_size = NM_BIG_BUF;
}
}
static void
timer_close_cb(uv_handle_t *handle) {
isc_nmsocket_t *sock = (isc_nmsocket_t *)uv_handle_get_data(handle);
REQUIRE(VALID_NMSOCK(sock));
atomic_store(&sock->closed, true);
tlsdns_close_direct(sock);
}
static void
dnstcp_readtimeout(uv_timer_t *timer) {
isc_nmsocket_t *sock =
(isc_nmsocket_t *)uv_handle_get_data((uv_handle_t *)timer);
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(sock->tid == isc_nm_tid());
/* Close the TCP connection; its closure should fire ours. */
if (sock->outerhandle != NULL) {
isc_nmhandle_detach(&sock->outerhandle);
}
}
/*
* Accept callback for TCP-DNS connection.
*/
static isc_result_t
dnslisten_acceptcb(isc_nmhandle_t *handle, isc_result_t result, void *cbarg) {
isc_nmsocket_t *dnslistensock = (isc_nmsocket_t *)cbarg;
isc_nmsocket_t *dnssock = NULL;
isc_nmhandle_t *readhandle = NULL;
isc_nm_accept_cb_t accept_cb;
void *accept_cbarg;
REQUIRE(VALID_NMSOCK(dnslistensock));
REQUIRE(dnslistensock->type == isc_nm_tlsdnslistener);
if (result != ISC_R_SUCCESS) {
return (result);
}
accept_cb = dnslistensock->accept_cb;
accept_cbarg = dnslistensock->accept_cbarg;
if (accept_cb != NULL) {
result = accept_cb(handle, ISC_R_SUCCESS, accept_cbarg);
if (result != ISC_R_SUCCESS) {
return (result);
}
}
/* We need to create a 'wrapper' dnssocket for this connection */
dnssock = isc_mem_get(handle->sock->mgr->mctx, sizeof(*dnssock));
isc__nmsocket_init(dnssock, handle->sock->mgr, isc_nm_tlsdnssocket,
handle->sock->iface);
dnssock->extrahandlesize = dnslistensock->extrahandlesize;
isc__nmsocket_attach(dnslistensock, &dnssock->listener);
isc__nmsocket_attach(dnssock, &dnssock->self);
isc_nmhandle_attach(handle, &dnssock->outerhandle);
dnssock->peer = handle->sock->peer;
dnssock->read_timeout = handle->sock->mgr->init;
dnssock->tid = isc_nm_tid();
dnssock->closehandle_cb = resume_processing;
uv_timer_init(&dnssock->mgr->workers[isc_nm_tid()].loop,
&dnssock->timer);
dnssock->timer.data = dnssock;
dnssock->timer_initialized = true;
uv_timer_start(&dnssock->timer, dnstcp_readtimeout,
dnssock->read_timeout, 0);
dnssock->timer_running = true;
/*
* Add a reference to handle to keep it from being freed by
* the caller. It will be detached in dnslisten_readcb() when
* the connection is closed or there is no more data to be read.
*/
isc_nmhandle_attach(handle, &readhandle);
isc_nm_read(readhandle, dnslisten_readcb, dnssock);
isc__nmsocket_detach(&dnssock);
return (ISC_R_SUCCESS);
}
/*
* Process a single packet from the incoming buffer.
*
* Return ISC_R_SUCCESS and attach 'handlep' to a handle if something
* was processed; return ISC_R_NOMORE if there isn't a full message
* to be processed.
*
* The caller will need to unreference the handle.
*/
static isc_result_t
processbuffer(isc_nmsocket_t *dnssock, isc_nmhandle_t **handlep) {
size_t len;
REQUIRE(VALID_NMSOCK(dnssock));
REQUIRE(dnssock->tid == isc_nm_tid());
REQUIRE(handlep != NULL && *handlep == NULL);
/*
* If we don't even have the length yet, we can't do
* anything.
*/
if (dnssock->buf_len < 2) {
return (ISC_R_NOMORE);
}
/*
* Process the first packet from the buffer, leaving
* the rest (if any) for later.
*/
len = dnslen(dnssock->buf);
if (len <= dnssock->buf_len - 2) {
isc_nmhandle_t *dnshandle = NULL;
isc_nmsocket_t *listener = NULL;
isc_nm_recv_cb_t cb = NULL;
void *cbarg = NULL;
if (atomic_load(&dnssock->client) &&
dnssock->statichandle != NULL) {
isc_nmhandle_attach(dnssock->statichandle, &dnshandle);
} else {
dnshandle = isc__nmhandle_get(dnssock, NULL, NULL);
}
listener = dnssock->listener;
if (listener != NULL) {
cb = listener->recv_cb;
cbarg = listener->recv_cbarg;
} else if (dnssock->recv_cb != NULL) {
cb = dnssock->recv_cb;
cbarg = dnssock->recv_cbarg;
/*
* We need to clear the read callback *before*
* calling it, because it might make another
* call to isc_nm_read() and set up a new callback.
*/
isc__nmsocket_clearcb(dnssock);
}
if (cb != NULL) {
cb(dnshandle, ISC_R_SUCCESS,
&(isc_region_t){ .base = dnssock->buf + 2,
.length = len },
cbarg);
}
len += 2;
dnssock->buf_len -= len;
if (dnssock->buf_len > 0) {
memmove(dnssock->buf, dnssock->buf + len,
dnssock->buf_len);
}
*handlep = dnshandle;
return (ISC_R_SUCCESS);
}
return (ISC_R_NOMORE);
}
/*
* We've got a read on our underlying socket. Check whether
* we have a complete DNS packet and, if so, call the callback.
*/
static void
dnslisten_readcb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *arg) {
isc_nmsocket_t *dnssock = (isc_nmsocket_t *)arg;
unsigned char *base = NULL;
bool done = false;
size_t len;
REQUIRE(VALID_NMSOCK(dnssock));
REQUIRE(dnssock->tid == isc_nm_tid());
REQUIRE(VALID_NMHANDLE(handle));
if (!isc__nmsocket_active(dnssock) || atomic_load(&dnssock->closing) ||
dnssock->outerhandle == NULL ||
(dnssock->listener != NULL &&
!isc__nmsocket_active(dnssock->listener)) ||
atomic_load(&dnssock->mgr->closing))
{
if (eresult == ISC_R_SUCCESS) {
eresult = ISC_R_CANCELED;
}
}
if (region == NULL || eresult != ISC_R_SUCCESS) {
isc_nm_recv_cb_t cb = dnssock->recv_cb;
void *cbarg = dnssock->recv_cbarg;
/* Connection closed */
dnssock->result = eresult;
isc__nmsocket_clearcb(dnssock);
if (atomic_load(&dnssock->client) && cb != NULL) {
cb(dnssock->statichandle, eresult, NULL, cbarg);
}
if (dnssock->self != NULL) {
isc__nmsocket_detach(&dnssock->self);
}
if (dnssock->outerhandle != NULL) {
isc__nmsocket_clearcb(dnssock->outerhandle->sock);
isc_nmhandle_detach(&dnssock->outerhandle);
}
if (dnssock->listener != NULL) {
isc__nmsocket_detach(&dnssock->listener);
}
/*
* Server connections will hold two handle references when
* shut down, but client (tlsdnsconnect) connections have
* only one.
*/
if (!atomic_load(&dnssock->client)) {
isc_nmhandle_detach(&handle);
}
return;
}
base = region->base;
len = region->length;
if (dnssock->buf_len + len > dnssock->buf_size) {
alloc_dnsbuf(dnssock, dnssock->buf_len + len);
}
memmove(dnssock->buf + dnssock->buf_len, base, len);
dnssock->buf_len += len;
dnssock->read_timeout = (atomic_load(&dnssock->keepalive)
? dnssock->mgr->keepalive
: dnssock->mgr->idle);
do {
isc_result_t result;
isc_nmhandle_t *dnshandle = NULL;
result = processbuffer(dnssock, &dnshandle);
if (result != ISC_R_SUCCESS) {
/*
* There wasn't anything in the buffer to process.
*/
return;
}
/*
* We have a packet: stop timeout timers
*/
if (dnssock->timer_initialized) {
uv_timer_stop(&dnssock->timer);
}
if (atomic_load(&dnssock->sequential) ||
dnssock->recv_cb == NULL) {
/*
* There are two reasons we might want to pause here:
* - We're in sequential mode and we've received
* a whole packet, so we're done until it's been
* processed; or
* - We no longer have a read callback.
*/
isc_nm_pauseread(dnssock->outerhandle);
done = true;
} else {
/*
* We're pipelining, so we now resume processing
* packets until the clients-per-connection limit
* is reached (as determined by the number of
* active handles on the socket). When the limit
* is reached, pause reading.
*/
if (atomic_load(&dnssock->ah) >=
TLSDNS_CLIENTS_PER_CONN) {
isc_nm_pauseread(dnssock->outerhandle);
done = true;
}
}
isc_nmhandle_detach(&dnshandle);
} while (!done);
}
/*
* isc_nm_listentlsdns() works exactly like isc_nm_listentcpdns(), but over a TLS socket.
*/
isc_result_t
isc_nm_listentlsdns(isc_nm_t *mgr, isc_nmiface_t *iface, isc_nm_recv_cb_t cb,
void *cbarg, isc_nm_accept_cb_t accept_cb,
void *accept_cbarg, size_t extrahandlesize, int backlog,
isc_quota_t *quota, SSL_CTX *sslctx,
isc_nmsocket_t **sockp) {
isc_nmsocket_t *dnslistensock = isc_mem_get(mgr->mctx,
sizeof(*dnslistensock));
isc_result_t result;
REQUIRE(VALID_NM(mgr));
REQUIRE(sslctx != NULL);
isc__nmsocket_init(dnslistensock, mgr, isc_nm_tlsdnslistener, iface);
dnslistensock->recv_cb = cb;
dnslistensock->recv_cbarg = cbarg;
dnslistensock->accept_cb = accept_cb;
dnslistensock->accept_cbarg = accept_cbarg;
dnslistensock->extrahandlesize = extrahandlesize;
result = isc_nm_listentls(mgr, iface, dnslisten_acceptcb, dnslistensock,
extrahandlesize, backlog, quota, sslctx,
&dnslistensock->outer);
if (result == ISC_R_SUCCESS) {
atomic_store(&dnslistensock->listening, true);
*sockp = dnslistensock;
return (ISC_R_SUCCESS);
} else {
atomic_store(&dnslistensock->closed, true);
isc__nmsocket_detach(&dnslistensock);
return (result);
}
}
void
isc__nm_async_tlsdnsstop(isc__networker_t *worker, isc__netievent_t *ev0) {
isc__netievent_tlsdnsstop_t *ievent =
(isc__netievent_tlsdnsstop_t *)ev0;
isc_nmsocket_t *sock = ievent->sock;
UNUSED(worker);
REQUIRE(isc__nm_in_netthread());
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(sock->type == isc_nm_tlsdnslistener);
REQUIRE(sock->tid == isc_nm_tid());
atomic_store(&sock->listening, false);
atomic_store(&sock->closed, true);
isc__nmsocket_clearcb(sock);
if (sock->outer != NULL) {
isc__nm_tls_stoplistening(sock->outer);
isc__nmsocket_detach(&sock->outer);
}
}
void
isc__nm_tlsdns_stoplistening(isc_nmsocket_t *sock) {
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(sock->type == isc_nm_tlsdnslistener);
isc__netievent_tlsdnsstop_t *ievent =
isc__nm_get_ievent(sock->mgr, netievent_tlsdnsstop);
isc__nmsocket_attach(sock, &ievent->sock);
isc__nm_enqueue_ievent(&sock->mgr->workers[sock->tid],
(isc__netievent_t *)ievent);
}
void
isc_nm_tlsdns_sequential(isc_nmhandle_t *handle) {
REQUIRE(VALID_NMHANDLE(handle));
if (handle->sock->type != isc_nm_tlsdnssocket ||
handle->sock->outerhandle == NULL)
{
return;
}
/*
* We don't want pipelining on this connection. That means
* that we need to pause after reading each request, and
* resume only after the request has been processed. This
* is done in resume_processing(), which is the socket's
* closehandle_cb callback, called whenever a handle
* is released.
*/
isc_nm_pauseread(handle->sock->outerhandle);
atomic_store(&handle->sock->sequential, true);
}
void
isc_nm_tlsdns_keepalive(isc_nmhandle_t *handle, bool value) {
REQUIRE(VALID_NMHANDLE(handle));
if (handle->sock->type != isc_nm_tlsdnssocket ||
handle->sock->outerhandle == NULL)
{
return;
}
atomic_store(&handle->sock->keepalive, value);
atomic_store(&handle->sock->outerhandle->sock->keepalive, value);
}
static void
resume_processing(void *arg) {
isc_nmsocket_t *sock = (isc_nmsocket_t *)arg;
isc_result_t result;
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(sock->tid == isc_nm_tid());
if (sock->type != isc_nm_tlsdnssocket || sock->outerhandle == NULL) {
return;
}
if (atomic_load(&sock->ah) == 0) {
/* Nothing is active; sockets can time out now */
if (sock->timer_initialized) {
uv_timer_start(&sock->timer, dnstcp_readtimeout,
sock->read_timeout, 0);
sock->timer_running = true;
}
}
/*
* For sequential sockets: Process what's in the buffer, or
* if there aren't any messages buffered, resume reading.
*/
if (atomic_load(&sock->sequential)) {
isc_nmhandle_t *handle = NULL;
result = processbuffer(sock, &handle);
if (result == ISC_R_SUCCESS) {
if (sock->timer_initialized) {
uv_timer_stop(&sock->timer);
}
isc_nmhandle_detach(&handle);
} else if (sock->outerhandle != NULL) {
isc_nm_resumeread(sock->outerhandle);
}
return;
}
/*
* For pipelined sockets: If we're under the clients-per-connection
* limit, resume processing until we reach the limit again.
*/
do {
isc_nmhandle_t *dnshandle = NULL;
result = processbuffer(sock, &dnshandle);
if (result != ISC_R_SUCCESS) {
/*
* Nothing in the buffer; resume reading.
*/
if (sock->outerhandle != NULL) {
isc_nm_resumeread(sock->outerhandle);
}
break;
}
if (sock->timer_initialized) {
uv_timer_stop(&sock->timer);
}
isc_nmhandle_detach(&dnshandle);
} while (atomic_load(&sock->ah) < TLSDNS_CLIENTS_PER_CONN);
}
static void
tlsdnssend_cb(isc_nmhandle_t *handle, isc_result_t result, void *cbarg) {
isc__nm_uvreq_t *req = (isc__nm_uvreq_t *)cbarg;
REQUIRE(VALID_UVREQ(req));
UNUSED(handle);
req->cb.send(req->handle, result, req->cbarg);
isc_mem_put(req->sock->mgr->mctx, req->uvbuf.base, req->uvbuf.len);
isc__nm_uvreq_put(&req, req->handle->sock);
isc_nmhandle_detach(&handle);
}
/*
* The socket is closing, outerhandle has been detached, listener is
* inactive, or the netmgr is closing: any operation on it should abort
* with ISC_R_CANCELED.
*/
static bool
inactive(isc_nmsocket_t *sock) {
return (!isc__nmsocket_active(sock) || atomic_load(&sock->closing) ||
sock->outerhandle == NULL ||
(sock->listener != NULL &&
!isc__nmsocket_active(sock->listener)) ||
atomic_load(&sock->mgr->closing));
}
void
isc__nm_async_tlsdnssend(isc__networker_t *worker, isc__netievent_t *ev0) {
isc__netievent_tlsdnssend_t *ievent =
(isc__netievent_tlsdnssend_t *)ev0;
isc__nm_uvreq_t *req = ievent->req;
isc_nmsocket_t *sock = ievent->sock;
isc_nmhandle_t *sendhandle = NULL;
isc_region_t r;
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(VALID_UVREQ(req));
REQUIRE(worker->id == sock->tid);
REQUIRE(sock->tid == isc_nm_tid());
REQUIRE(sock->type == isc_nm_tlsdnssocket);
if (inactive(sock)) {
req->cb.send(req->handle, ISC_R_CANCELED, req->cbarg);
isc_mem_put(sock->mgr->mctx, req->uvbuf.base, req->uvbuf.len);
isc__nm_uvreq_put(&req, req->handle->sock);
return;
}
r.base = (unsigned char *)req->uvbuf.base;
r.length = req->uvbuf.len;
isc_nmhandle_attach(sock->outerhandle, &sendhandle);
isc_nm_send(sendhandle, &r, tlsdnssend_cb, req);
}
/*
 * isc__nm_tlsdns_send() sends buf to a peer on a socket.
*/
void
isc__nm_tlsdns_send(isc_nmhandle_t *handle, isc_region_t *region,
isc_nm_cb_t cb, void *cbarg) {
isc__nm_uvreq_t *uvreq = NULL;
REQUIRE(VALID_NMHANDLE(handle));
isc_nmsocket_t *sock = handle->sock;
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(sock->type == isc_nm_tlsdnssocket);
if (inactive(sock)) {
cb(handle, ISC_R_CANCELED, cbarg);
return;
}
uvreq = isc__nm_uvreq_get(sock->mgr, sock);
isc_nmhandle_attach(handle, &uvreq->handle);
uvreq->cb.send = cb;
uvreq->cbarg = cbarg;
uvreq->uvbuf.base = isc_mem_get(sock->mgr->mctx, region->length + 2);
uvreq->uvbuf.len = region->length + 2;
*(uint16_t *)uvreq->uvbuf.base = htons(region->length);
memmove(uvreq->uvbuf.base + 2, region->base, region->length);
isc__netievent_tlsdnssend_t *ievent = NULL;
ievent = isc__nm_get_ievent(sock->mgr, netievent_tlsdnssend);
ievent->req = uvreq;
isc__nmsocket_attach(sock, &ievent->sock);
isc__nm_enqueue_ievent(&sock->mgr->workers[sock->tid],
(isc__netievent_t *)ievent);
}
static void
tlsdns_close_direct(isc_nmsocket_t *sock) {
REQUIRE(sock->tid == isc_nm_tid());
if (sock->timer_running) {
uv_timer_stop(&sock->timer);
sock->timer_running = false;
}
/* No atomics needed here; everything runs in a single network thread */
if (sock->self != NULL) {
isc__nmsocket_detach(&sock->self);
} else if (sock->timer_initialized) {
/*
* We need to fire the timer callback to clean it up,
* it will then call us again (via detach) so that we
* can finally close the socket.
*/
sock->timer_initialized = false;
uv_timer_stop(&sock->timer);
uv_close((uv_handle_t *)&sock->timer, timer_close_cb);
} else {
/*
* At this point we're certain that there are no external
* references, we can close everything.
*/
if (sock->outerhandle != NULL) {
isc__nmsocket_clearcb(sock->outerhandle->sock);
isc_nmhandle_detach(&sock->outerhandle);
}
if (sock->listener != NULL) {
isc__nmsocket_detach(&sock->listener);
}
atomic_store(&sock->closed, true);
isc__nmsocket_prep_destroy(sock);
}
}
void
isc__nm_tlsdns_close(isc_nmsocket_t *sock) {
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(sock->type == isc_nm_tlsdnssocket);
if (!atomic_compare_exchange_strong(&sock->closing, &(bool){ false },
true)) {
return;
}
if (sock->tid == isc_nm_tid()) {
tlsdns_close_direct(sock);
} else {
isc__netievent_tlsdnsclose_t *ievent =
isc__nm_get_ievent(sock->mgr, netievent_tlsdnsclose);
isc__nmsocket_attach(sock, &ievent->sock);
isc__nm_enqueue_ievent(&sock->mgr->workers[sock->tid],
(isc__netievent_t *)ievent);
}
}
void
isc__nm_async_tlsdnsclose(isc__networker_t *worker, isc__netievent_t *ev0) {
isc__netievent_tlsdnsclose_t *ievent =
(isc__netievent_tlsdnsclose_t *)ev0;
isc_nmsocket_t *sock = ievent->sock;
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(sock->tid == isc_nm_tid());
UNUSED(worker);
tlsdns_close_direct(sock);
}
typedef struct {
isc_mem_t *mctx;
isc_nm_cb_t cb;
void *cbarg;
size_t extrahandlesize;
} tcpconnect_t;
static void
tlsdnsconnect_cb(isc_nmhandle_t *handle, isc_result_t result, void *arg) {
tcpconnect_t *conn = (tcpconnect_t *)arg;
isc_nm_cb_t cb = conn->cb;
void *cbarg = conn->cbarg;
size_t extrahandlesize = conn->extrahandlesize;
isc_nmsocket_t *dnssock = NULL;
isc_nmhandle_t *readhandle = NULL;
REQUIRE(result != ISC_R_SUCCESS || VALID_NMHANDLE(handle));
isc_mem_putanddetach(&conn->mctx, conn, sizeof(*conn));
dnssock = isc_mem_get(handle->sock->mgr->mctx, sizeof(*dnssock));
isc__nmsocket_init(dnssock, handle->sock->mgr, isc_nm_tlsdnssocket,
handle->sock->iface);
dnssock->extrahandlesize = extrahandlesize;
isc_nmhandle_attach(handle, &dnssock->outerhandle);
dnssock->peer = handle->sock->peer;
dnssock->read_timeout = handle->sock->mgr->init;
dnssock->tid = isc_nm_tid();
atomic_init(&dnssock->client, true);
readhandle = isc__nmhandle_get(dnssock, NULL, NULL);
if (result != ISC_R_SUCCESS) {
cb(readhandle, result, cbarg);
isc__nmsocket_detach(&dnssock);
isc_nmhandle_detach(&readhandle);
return;
}
INSIST(dnssock->statichandle != NULL);
INSIST(dnssock->statichandle == readhandle);
INSIST(readhandle->sock == dnssock);
INSIST(dnssock->recv_cb == NULL);
uv_timer_init(&dnssock->mgr->workers[isc_nm_tid()].loop,
&dnssock->timer);
dnssock->timer.data = dnssock;
dnssock->timer_initialized = true;
uv_timer_start(&dnssock->timer, dnstcp_readtimeout,
dnssock->read_timeout, 0);
dnssock->timer_running = true;
/*
* The connection is now established; we start reading immediately,
* before we've been asked to. We'll read and buffer at most one
* packet.
*/
isc_nm_read(handle, dnslisten_readcb, dnssock);
cb(readhandle, ISC_R_SUCCESS, cbarg);
/*
* The sock is now attached to the handle.
*/
isc__nmsocket_detach(&dnssock);
}
isc_result_t
isc_nm_tlsdnsconnect(isc_nm_t *mgr, isc_nmiface_t *local, isc_nmiface_t *peer,
isc_nm_cb_t cb, void *cbarg, unsigned int timeout,
size_t extrahandlesize) {
isc_result_t result = ISC_R_SUCCESS;
tcpconnect_t *conn = isc_mem_get(mgr->mctx, sizeof(tcpconnect_t));
SSL_CTX *ctx = NULL;
*conn = (tcpconnect_t){ .cb = cb,
.cbarg = cbarg,
.extrahandlesize = extrahandlesize };
isc_mem_attach(mgr->mctx, &conn->mctx);
ctx = SSL_CTX_new(SSLv23_client_method());
result = isc_nm_tlsconnect(mgr, local, peer, tlsdnsconnect_cb, conn,
ctx, timeout, 0);
SSL_CTX_free(ctx);
if (result != ISC_R_SUCCESS) {
isc_mem_putanddetach(&conn->mctx, conn, sizeof(*conn));
}
return (result);
}
void
isc__nm_tlsdns_read(isc_nmhandle_t *handle, isc_nm_recv_cb_t cb, void *cbarg) {
isc_nmsocket_t *sock = NULL;
isc__netievent_tlsdnsread_t *ievent = NULL;
isc_nmhandle_t *eventhandle = NULL;
REQUIRE(VALID_NMHANDLE(handle));
sock = handle->sock;
REQUIRE(sock->statichandle == handle);
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(sock->recv_cb == NULL);
REQUIRE(sock->tid == isc_nm_tid());
REQUIRE(atomic_load(&sock->client));
if (inactive(sock)) {
cb(handle, ISC_R_NOTCONNECTED, NULL, cbarg);
return;
}
/*
* This MUST be done asynchronously, no matter which thread we're
* in. The callback function for isc_nm_read() often calls
* isc_nm_read() again; if we tried to do that synchronously
* we'd clash in processbuffer() and grow the stack indefinitely.
*/
ievent = isc__nm_get_ievent(sock->mgr, netievent_tlsdnsread);
isc__nmsocket_attach(sock, &ievent->sock);
sock->recv_cb = cb;
sock->recv_cbarg = cbarg;
sock->read_timeout = (atomic_load(&sock->keepalive)
? sock->mgr->keepalive
: sock->mgr->idle);
/*
* Add a reference to the handle to keep it from being freed by
 * the caller; it will be detached in isc__nm_async_tlsdnsread().
*/
isc_nmhandle_attach(handle, &eventhandle);
isc__nm_enqueue_ievent(&sock->mgr->workers[sock->tid],
(isc__netievent_t *)ievent);
}
void
isc__nm_async_tlsdnsread(isc__networker_t *worker, isc__netievent_t *ev0) {
isc_result_t result;
isc__netievent_tlsdnsread_t *ievent =
(isc__netievent_tlsdnsread_t *)ev0;
isc_nmsocket_t *sock = ievent->sock;
isc_nmhandle_t *handle = NULL, *newhandle = NULL;
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(worker->id == sock->tid);
REQUIRE(sock->tid == isc_nm_tid());
handle = sock->statichandle;
if (inactive(sock)) {
isc_nm_recv_cb_t cb = sock->recv_cb;
void *cbarg = sock->recv_cbarg;
isc__nmsocket_clearcb(sock);
if (cb != NULL) {
cb(handle, ISC_R_NOTCONNECTED, NULL, cbarg);
}
isc_nmhandle_detach(&handle);
return;
}
/*
* Maybe we have a packet already?
*/
result = processbuffer(sock, &newhandle);
if (result == ISC_R_SUCCESS) {
if (sock->timer_initialized) {
uv_timer_stop(&sock->timer);
}
isc_nmhandle_detach(&newhandle);
} else if (sock->outerhandle != NULL) {
/* Restart reading, wait for the callback */
if (sock->timer_initialized) {
uv_timer_start(&sock->timer, dnstcp_readtimeout,
sock->read_timeout, 0);
sock->timer_running = true;
}
isc_nm_resumeread(sock->outerhandle);
} else {
isc_nm_recv_cb_t cb = sock->recv_cb;
void *cbarg = sock->recv_cbarg;
isc__nmsocket_clearcb(sock);
cb(handle, ISC_R_NOTCONNECTED, NULL, cbarg);
}
isc_nmhandle_detach(&handle);
}
void
isc__nm_tlsdns_cancelread(isc_nmhandle_t *handle) {
isc_nmsocket_t *sock = NULL;
isc__netievent_tlsdnscancel_t *ievent = NULL;
REQUIRE(VALID_NMHANDLE(handle));
sock = handle->sock;
REQUIRE(sock->type == isc_nm_tlsdnssocket);
ievent = isc__nm_get_ievent(sock->mgr, netievent_tlsdnscancel);
isc__nmsocket_attach(sock, &ievent->sock);
isc_nmhandle_attach(handle, &ievent->handle);
isc__nm_enqueue_ievent(&sock->mgr->workers[sock->tid],
(isc__netievent_t *)ievent);
}
void
isc__nm_async_tlsdnscancel(isc__networker_t *worker, isc__netievent_t *ev0) {
isc__netievent_tlsdnscancel_t *ievent =
(isc__netievent_tlsdnscancel_t *)ev0;
isc_nmsocket_t *sock = ievent->sock;
isc_nmhandle_t *handle = ievent->handle;
REQUIRE(VALID_NMSOCK(sock));
REQUIRE(worker->id == sock->tid);
REQUIRE(sock->tid == isc_nm_tid());
if (atomic_load(&sock->client)) {
isc_nm_recv_cb_t cb;
void *cbarg = NULL;
cb = sock->recv_cb;
cbarg = sock->recv_cbarg;
isc__nmsocket_clearcb(sock);
if (cb != NULL) {
cb(handle, ISC_R_EOF, NULL, cbarg);
}
isc__nm_tcp_cancelread(sock->outerhandle);
}
}
void
isc__nm_tlsdns_settimeout(isc_nmhandle_t *handle, uint32_t timeout) {
isc_nmsocket_t *sock = NULL;
REQUIRE(VALID_NMHANDLE(handle));
sock = handle->sock;
if (sock->outerhandle != NULL) {
isc__nm_tcp_settimeout(sock->outerhandle, timeout);
}
sock->read_timeout = timeout;
if (sock->timer_running) {
uv_timer_start(&sock->timer, dnstcp_readtimeout,
sock->read_timeout, 0);
}
}

File diff suppressed because it is too large


@ -14,179 +14,7 @@
#include <isc/util.h>
#ifndef HAVE_UV_IMPORT
/*
* XXXWPK: This code goes into libuv internals and it's platform dependent.
 * It's ugly and we shouldn't do it, but the alternative, passing
 * sockets over IPC sockets, is even worse and causes all kinds of
 * different problems.  We should try to push these things upstream.
#ifdef WIN32
/* This code is adapted from libuv/src/win/internal.h */
typedef enum {
UV__IPC_SOCKET_XFER_NONE = 0,
UV__IPC_SOCKET_XFER_TCP_CONNECTION,
UV__IPC_SOCKET_XFER_TCP_SERVER
} uv__ipc_socket_xfer_type_t;
typedef struct {
WSAPROTOCOL_INFOW socket_info;
uint32_t delayed_error;
} uv__ipc_socket_xfer_info_t;
/*
* Needed to make sure that the internal structure that we pulled out of
* libuv hasn't changed.
*/
int
uv__tcp_xfer_import(uv_tcp_t *tcp, uv__ipc_socket_xfer_type_t xfer_type,
uv__ipc_socket_xfer_info_t *xfer_info);
int
uv__tcp_xfer_export(uv_tcp_t *handle, int target_pid,
uv__ipc_socket_xfer_type_t *xfer_type,
uv__ipc_socket_xfer_info_t *xfer_info);
int
isc_uv_export(uv_stream_t *stream, isc_uv_stream_info_t *info) {
uv__ipc_socket_xfer_info_t xfer_info;
uv__ipc_socket_xfer_type_t xfer_type = UV__IPC_SOCKET_XFER_NONE;
/*
* Needed to make sure that the internal structure that we pulled
* out of libuv hasn't changed.
*/
RUNTIME_CHECK(sizeof(uv__ipc_socket_xfer_info_t) == 632);
if (stream->type != UV_TCP) {
return (-1);
}
int r = uv__tcp_xfer_export((uv_tcp_t *)stream, GetCurrentProcessId(),
&xfer_type, &xfer_info);
if (r != 0) {
return (r);
}
if (xfer_info.delayed_error != 0) {
return (xfer_info.delayed_error);
}
INSIST(xfer_type == UV__IPC_SOCKET_XFER_TCP_CONNECTION);
info->type = UV_TCP;
info->socket_info = xfer_info.socket_info;
return (0);
}
int
isc_uv_import(uv_stream_t *stream, isc_uv_stream_info_t *info) {
if (stream->type != UV_TCP || info->type != UV_TCP) {
return (-1);
}
return (uv__tcp_xfer_import(
(uv_tcp_t *)stream, UV__IPC_SOCKET_XFER_TCP_CONNECTION,
&(uv__ipc_socket_xfer_info_t){
.socket_info = info->socket_info }));
}
#else /* WIN32 */
/* Adapted from libuv/src/unix/internal.h */
#include <fcntl.h>
#include <sys/ioctl.h>
static int
isc_uv__cloexec(int fd, int set) {
int r;
/*
* This #ifdef is taken directly from the libuv sources.
* We use FIOCLEX and FIONCLEX ioctl() calls when possible,
* but on some platforms are not implemented, or defined but
* not implemented correctly. On those, we use the FD_CLOEXEC
* fcntl() call, which adds extra system call overhead, but
* works.
*/
#if defined(_AIX) || defined(__APPLE__) || defined(__DragonFly__) || \
defined(__FreeBSD__) || defined(__FreeBSD_kernel__) || \
defined(__linux__) || defined(__OpenBSD__) || defined(__NetBSD__)
do {
r = ioctl(fd, set ? FIOCLEX : FIONCLEX);
} while (r == -1 && errno == EINTR);
#else /* FIOCLEX/FIONCLEX unsupported */
int flags;
do {
r = fcntl(fd, F_GETFD);
} while (r == -1 && errno == EINTR);
if (r == -1) {
return (-1);
}
if (!!(r & FD_CLOEXEC) == !!set) {
return (0);
}
if (set) {
flags = r | FD_CLOEXEC;
} else {
flags = r & ~FD_CLOEXEC;
}
do {
r = fcntl(fd, F_SETFD, flags);
} while (r == -1 && errno == EINTR);
#endif /* FIOCLEX/FIONCLEX unsupported */
if (r != 0) {
return (-1);
}
return (0);
}
int
isc_uv_export(uv_stream_t *stream, isc_uv_stream_info_t *info) {
int oldfd, fd;
int err;
if (stream->type != UV_TCP) {
return (-1);
}
err = uv_fileno((uv_handle_t *)stream, (uv_os_fd_t *)&oldfd);
if (err != 0) {
return (err);
}
fd = dup(oldfd);
if (fd == -1) {
return (-1);
}
err = isc_uv__cloexec(fd, 1);
if (err != 0) {
close(fd);
return (err);
}
info->type = stream->type;
info->fd = fd;
return (0);
}
int
isc_uv_import(uv_stream_t *stream, isc_uv_stream_info_t *info) {
if (info->type != UV_TCP) {
return (-1);
}
uv_tcp_t *tcp = (uv_tcp_t *)stream;
return (uv_tcp_open(tcp, info->fd));
}
#endif /* ifdef WIN32 */
#endif /* ifndef HAVE_UV_IMPORT */
#include "netmgr-int.h"
#ifndef HAVE_UV_UDP_CONNECT
int
@ -219,3 +47,82 @@ isc_uv_udp_connect(uv_udp_t *handle, const struct sockaddr *addr) {
return (0);
}
#endif /* ifndef HAVE_UV_UDP_CONNECT */
int
isc_uv_udp_freebind(uv_udp_t *handle, const struct sockaddr *addr,
unsigned int flags) {
int r;
int fd;
r = uv_fileno((const uv_handle_t *)handle, (uv_os_fd_t *)&fd);
if (r < 0) {
return (r);
}
r = uv_udp_bind(handle, addr, flags);
if (r == UV_EADDRNOTAVAIL &&
isc__nm_socket_freebind(fd, addr->sa_family) == ISC_R_SUCCESS)
{
/*
* Retry binding with IP_FREEBIND (or equivalent option) if the
* address is not available. This helps with IPv6 tentative
* addresses which are reported by the route socket, although
* named is not yet able to properly bind to them.
*/
r = uv_udp_bind(handle, addr, flags);
}
return (r);
}
static int
isc__uv_tcp_bind_now(uv_tcp_t *handle, const struct sockaddr *addr,
unsigned int flags) {
int r;
struct sockaddr_storage sname;
int snamelen = sizeof(sname);
r = uv_tcp_bind(handle, addr, flags);
if (r < 0) {
return (r);
}
/*
* uv_tcp_bind() uses a delayed error, initially returning
* success even if bind() fails. By calling uv_tcp_getsockname()
* here we can find out whether the bind() call was successful.
*/
r = uv_tcp_getsockname(handle, (struct sockaddr *)&sname, &snamelen);
if (r < 0) {
return (r);
}
return (0);
}
int
isc_uv_tcp_freebind(uv_tcp_t *handle, const struct sockaddr *addr,
unsigned int flags) {
int r;
int fd;
r = uv_fileno((const uv_handle_t *)handle, (uv_os_fd_t *)&fd);
if (r < 0) {
return (r);
}
r = isc__uv_tcp_bind_now(handle, addr, flags);
if (r == UV_EADDRNOTAVAIL &&
isc__nm_socket_freebind(fd, addr->sa_family) == ISC_R_SUCCESS)
{
/*
* Retry binding with IP_FREEBIND (or equivalent option) if the
* address is not available. This helps with IPv6 tentative
* addresses which are reported by the route socket, although
* named is not yet able to properly bind to them.
*/
r = isc__uv_tcp_bind_now(handle, addr, flags);
}
return (r);
}


@ -33,53 +33,6 @@ uv_handle_set_data(uv_handle_t *handle, void *data) {
}
#endif /* ifndef HAVE_UV_HANDLE_SET_DATA */
#ifdef HAVE_UV_IMPORT
#define isc_uv_stream_info_t uv_stream_info_t
#define isc_uv_export uv_export
#define isc_uv_import uv_import
#else
/*
 * These functions are not part of the public libuv API, but they rely
 * heavily on libuv internals.  We should try to get them merged upstream.
*/
/*
 * A sane way to pass a listening TCP socket to child threads: instead of
 * using IPC (as the libuv example shows), we use a version of the
 * uv_export() and uv_import() functions that were unfortunately removed
 * from libuv.
* This is based on the original libuv code.
*/
typedef struct isc_uv_stream_info_s isc_uv_stream_info_t;
struct isc_uv_stream_info_s {
uv_handle_type type;
#ifdef WIN32
WSAPROTOCOL_INFOW socket_info;
#else /* ifdef WIN32 */
int fd;
#endif /* ifdef WIN32 */
};
int
isc_uv_export(uv_stream_t *stream, isc_uv_stream_info_t *info);
/*%<
* Exports uv_stream_t as isc_uv_stream_info_t value, which could
* be used to initialize shared streams within the same process.
*/
int
isc_uv_import(uv_stream_t *stream, isc_uv_stream_info_t *info);
/*%<
* Imports uv_stream_info_t value into uv_stream_t to initialize a
* shared stream.
*/
#endif
#ifdef HAVE_UV_UDP_CONNECT
#define isc_uv_udp_connect uv_udp_connect
#else
@ -95,3 +48,11 @@ isc_uv_udp_connect(uv_udp_t *handle, const struct sockaddr *addr);
*/
#endif
int
isc_uv_udp_freebind(uv_udp_t *handle, const struct sockaddr *addr,
unsigned int flags);
int
isc_uv_tcp_freebind(uv_tcp_t *handle, const struct sockaddr *addr,
unsigned int flags);


@ -27,7 +27,7 @@
*/
isc_result_t
isc___nm_uverr2result(int uverr, bool dolog, const char *file,
unsigned int line) {
unsigned int line, const char *func) {
switch (uverr) {
case UV_ENOTDIR:
case UV_ELOOP:
@ -81,12 +81,15 @@ isc___nm_uverr2result(int uverr, bool dolog, const char *file,
return (ISC_R_CONNREFUSED);
case UV_ECANCELED:
return (ISC_R_CANCELED);
case UV_EOF:
return (ISC_R_EOF);
default:
if (dolog) {
UNEXPECTED_ERROR(file, line,
"unable to convert libuv "
"error code to isc_result: %d: %s",
uverr, uv_strerror(uverr));
UNEXPECTED_ERROR(
file, line,
"unable to convert libuv "
"error code in %s to isc_result: %d: %s",
func, uverr, uv_strerror(uverr));
}
return (ISC_R_UNEXPECTED);
}


@ -17,6 +17,12 @@
#include <isc/quota.h>
#include <isc/util.h>
#define QUOTA_MAGIC ISC_MAGIC('Q', 'U', 'O', 'T')
#define VALID_QUOTA(p) ISC_MAGIC_VALID(p, QUOTA_MAGIC)
#define QUOTA_CB_MAGIC ISC_MAGIC('Q', 'T', 'C', 'B')
#define VALID_QUOTA_CB(p) ISC_MAGIC_VALID(p, QUOTA_CB_MAGIC)
void
isc_quota_init(isc_quota_t *quota, unsigned int max) {
atomic_init(&quota->max, max);
@ -25,10 +31,14 @@ isc_quota_init(isc_quota_t *quota, unsigned int max) {
atomic_init(&quota->waiting, 0);
ISC_LIST_INIT(quota->cbs);
isc_mutex_init(&quota->cblock);
quota->magic = QUOTA_MAGIC;
}
void
isc_quota_destroy(isc_quota_t *quota) {
REQUIRE(VALID_QUOTA(quota));
quota->magic = 0;
INSIST(atomic_load(&quota->used) == 0);
INSIST(atomic_load(&quota->waiting) == 0);
INSIST(ISC_LIST_EMPTY(quota->cbs));
@ -40,26 +50,31 @@ isc_quota_destroy(isc_quota_t *quota) {
void
isc_quota_soft(isc_quota_t *quota, unsigned int soft) {
REQUIRE(VALID_QUOTA(quota));
atomic_store_release(&quota->soft, soft);
}
void
isc_quota_max(isc_quota_t *quota, unsigned int max) {
REQUIRE(VALID_QUOTA(quota));
atomic_store_release(&quota->max, max);
}
unsigned int
isc_quota_getmax(isc_quota_t *quota) {
REQUIRE(VALID_QUOTA(quota));
return (atomic_load_relaxed(&quota->max));
}
unsigned int
isc_quota_getsoft(isc_quota_t *quota) {
REQUIRE(VALID_QUOTA(quota));
return (atomic_load_relaxed(&quota->soft));
}
unsigned int
isc_quota_getused(isc_quota_t *quota) {
REQUIRE(VALID_QUOTA(quota));
return (atomic_load_relaxed(&quota->used));
}
@ -140,13 +155,21 @@ doattach(isc_quota_t *quota, isc_quota_t **p) {
}
isc_result_t
isc_quota_attach(isc_quota_t *quota, isc_quota_t **p) {
return (isc_quota_attach_cb(quota, p, NULL));
isc_quota_attach(isc_quota_t *quota, isc_quota_t **quotap) {
REQUIRE(VALID_QUOTA(quota));
REQUIRE(quotap != NULL && *quotap == NULL);
return (isc_quota_attach_cb(quota, quotap, NULL));
}
isc_result_t
isc_quota_attach_cb(isc_quota_t *quota, isc_quota_t **p, isc_quota_cb_t *cb) {
isc_result_t result = doattach(quota, p);
isc_quota_attach_cb(isc_quota_t *quota, isc_quota_t **quotap,
isc_quota_cb_t *cb) {
REQUIRE(VALID_QUOTA(quota));
REQUIRE(cb == NULL || VALID_QUOTA_CB(cb));
REQUIRE(quotap != NULL && *quotap == NULL);
isc_result_t result = doattach(quota, quotap);
if (result == ISC_R_QUOTA && cb != NULL) {
LOCK(&quota->cblock);
enqueue(quota, cb);
@ -160,11 +183,14 @@ isc_quota_cb_init(isc_quota_cb_t *cb, isc_quota_cb_func_t cb_func, void *data) {
ISC_LINK_INIT(cb, link);
cb->cb_func = cb_func;
cb->data = data;
cb->magic = QUOTA_CB_MAGIC;
}
void
isc_quota_detach(isc_quota_t **p) {
INSIST(p != NULL && *p != NULL);
quota_release(*p);
*p = NULL;
isc_quota_detach(isc_quota_t **quotap) {
REQUIRE(quotap != NULL && VALID_QUOTA(*quotap));
isc_quota_t *quota = *quotap;
*quotap = NULL;
quota_release(quota);
}


@ -29,7 +29,6 @@ check_PROGRAMS = \
md_test \
mem_test \
netaddr_test \
netmgr_test \
parse_test \
pool_test \
quota_test \
@ -44,8 +43,12 @@ check_PROGRAMS = \
symtab_test \
task_test \
taskpool_test \
tcp_test \
tcp_quota_test \
tcpdns_test \
time_test \
timer_test
timer_test \
udp_test
TESTS = $(check_PROGRAMS)
@ -69,11 +72,39 @@ random_test_LDADD = \
$(LDADD) \
-lm
netmgr_test_CPPFLAGS = \
tcp_test_CPPFLAGS = \
$(AM_CPPFLAGS) \
$(OPENSSL_CFLAGS) \
$(LIBUV_CFLAGS)
netmgr_test_LDADD = \
tcp_test_LDADD = \
$(LDADD) \
$(LIBUV_LIBS)
tcp_quota_test_CPPFLAGS = \
$(AM_CPPFLAGS) \
$(OPENSSL_CFLAGS) \
$(LIBUV_CFLAGS)
tcp_quota_test_LDADD = \
$(LDADD) \
$(LIBUV_LIBS)
tcpdns_test_CPPFLAGS = \
$(AM_CPPFLAGS) \
$(OPENSSL_CFLAGS) \
$(LIBUV_CFLAGS)
tcpdns_test_LDADD = \
$(LDADD) \
$(LIBUV_LIBS)
udp_test_CPPFLAGS = \
$(AM_CPPFLAGS) \
$(OPENSSL_CFLAGS) \
$(LIBUV_CFLAGS)
udp_test_LDADD = \
$(LDADD) \
$(LIBUV_LIBS)


@ -0,0 +1,737 @@
/*
* Copyright (C) Internet Systems Consortium, Inc. ("ISC")
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, you can obtain one at https://mozilla.org/MPL/2.0/.
*
* See the COPYRIGHT file distributed with this work for additional
* information regarding copyright ownership.
*/
#if HAVE_CMOCKA
#include <sched.h> /* IWYU pragma: keep */
#include <setjmp.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <uv.h>
#define UNIT_TESTING
#include <cmocka.h>
#include <isc/atomic.h>
#include <isc/buffer.h>
#include <isc/condition.h>
#include <isc/mutex.h>
#include <isc/netmgr.h>
#include <isc/nonce.h>
#include <isc/os.h>
#include <isc/refcount.h>
#include <isc/sockaddr.h>
#include <isc/thread.h>
#include "../netmgr/netmgr-int.h"
#include "isctest.h"
#define MAX_NM 2
static isc_sockaddr_t tcp_listen_addr;
static uint64_t send_magic = 0;
static uint64_t stop_magic = 0;
static uv_buf_t send_msg = { .base = (char *)&send_magic,
.len = sizeof(send_magic) };
static uv_buf_t stop_msg = { .base = (char *)&stop_magic,
.len = sizeof(stop_magic) };
static atomic_uint_fast64_t nsends;
static atomic_uint_fast64_t ssends;
static atomic_uint_fast64_t sreads;
static atomic_uint_fast64_t saccepts;
static atomic_uint_fast64_t cconnects;
static atomic_uint_fast64_t csends;
static atomic_uint_fast64_t creads;
static atomic_uint_fast64_t ctimeouts;
static unsigned int workers = 2;
static isc_quota_t listener_quota;
static atomic_bool check_listener_quota;
#define NSENDS 100
#define NWRITES 10
/* Enable this to print values while running tests */
#undef PRINT_DEBUG
#ifdef PRINT_DEBUG
#define X(v) fprintf(stderr, #v " = %" PRIu64 "\n", atomic_load(&v))
#define P(v) fprintf(stderr, #v " = %" PRIu64 "\n", v)
#else
#define X(v)
#define P(v)
#endif
static int
setup_ephemeral_port(isc_sockaddr_t *addr, sa_family_t family) {
isc_result_t result;
socklen_t addrlen = sizeof(*addr);
int fd;
int r;
isc_sockaddr_fromin6(addr, &in6addr_loopback, 0);
fd = socket(AF_INET6, family, 0);
if (fd < 0) {
perror("setup_ephemeral_port: socket()");
return (-1);
}
r = bind(fd, (const struct sockaddr *)&addr->type.sa,
sizeof(addr->type.sin6));
if (r != 0) {
perror("setup_ephemeral_port: bind()");
close(fd);
return (r);
}
r = getsockname(fd, (struct sockaddr *)&addr->type.sa, &addrlen);
if (r != 0) {
perror("setup_ephemeral_port: getsockname()");
close(fd);
return (r);
}
result = isc__nm_socket_reuse(fd);
if (result != ISC_R_SUCCESS && result != ISC_R_NOTIMPLEMENTED) {
fprintf(stderr,
"setup_ephemeral_port: isc__nm_socket_reuse(): %s",
isc_result_totext(result));
close(fd);
return (-1);
}
result = isc__nm_socket_reuse_lb(fd);
if (result != ISC_R_SUCCESS && result != ISC_R_NOTIMPLEMENTED) {
fprintf(stderr,
"setup_ephemeral_port: isc__nm_socket_reuse_lb(): %s",
isc_result_totext(result));
close(fd);
return (-1);
}
#if IPV6_RECVERR
#define setsockopt_on(socket, level, name) \
setsockopt(socket, level, name, &(int){ 1 }, sizeof(int))
r = setsockopt_on(fd, IPPROTO_IPV6, IPV6_RECVERR);
if (r != 0) {
perror("setup_ephemeral_port");
close(fd);
return (r);
}
#endif
return (fd);
}
static int
_setup(void **state) {
UNUSED(state);
/* workers = isc_os_ncpus(); */
if (isc_test_begin(NULL, true, workers) != ISC_R_SUCCESS) {
return (-1);
}
signal(SIGPIPE, SIG_IGN);
return (0);
}
static int
_teardown(void **state) {
UNUSED(state);
isc_test_end();
return (0);
}
/* Generic */
thread_local uint8_t tcp_buffer_storage[4096];
thread_local size_t tcp_buffer_length = 0;
static int
nm_setup(void **state) {
size_t nworkers = ISC_MAX(ISC_MIN(workers, 32), 1);
int tcp_listen_sock = -1;
isc_nm_t **nm = NULL;
tcp_listen_addr = (isc_sockaddr_t){ .length = 0 };
tcp_listen_sock = setup_ephemeral_port(&tcp_listen_addr, SOCK_STREAM);
if (tcp_listen_sock < 0) {
return (-1);
}
close(tcp_listen_sock);
tcp_listen_sock = -1;
atomic_store(&nsends, NSENDS * NWRITES);
atomic_store(&csends, 0);
atomic_store(&creads, 0);
atomic_store(&sreads, 0);
atomic_store(&ssends, 0);
atomic_store(&saccepts, 0);
atomic_store(&ctimeouts, 0);
atomic_store(&cconnects, 0);
isc_nonce_buf(&send_magic, sizeof(send_magic));
isc_nonce_buf(&stop_magic, sizeof(stop_magic));
if (send_magic == stop_magic) {
return (-1);
}
nm = isc_mem_get(test_mctx, MAX_NM * sizeof(nm[0]));
for (size_t i = 0; i < MAX_NM; i++) {
nm[i] = isc_nm_start(test_mctx, nworkers);
assert_non_null(nm[i]);
}
*state = nm;
isc_quota_init(&listener_quota, 0);
atomic_store(&check_listener_quota, false);
return (0);
}
static int
nm_teardown(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
for (size_t i = 0; i < MAX_NM; i++) {
isc_nm_destroy(&nm[i]);
assert_null(nm[i]);
}
isc_mem_put(test_mctx, nm, MAX_NM * sizeof(nm[0]));
isc_quota_destroy(&listener_quota);
return (0);
}
thread_local size_t nwrites = NWRITES;
/* TCP Connect */
static void
tcp_connect_send_cb(isc_nmhandle_t *handle, isc_result_t eresult, void *cbarg);
static void
tcp_connect_send(isc_nmhandle_t *handle);
static void
tcp_connect_read_cb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *cbarg) {
uint64_t magic = 0;
UNUSED(cbarg);
assert_non_null(handle);
if (eresult != ISC_R_SUCCESS) {
goto unref;
}
memmove(tcp_buffer_storage + tcp_buffer_length, region->base,
region->length);
tcp_buffer_length += region->length;
if (tcp_buffer_length >= sizeof(magic)) {
isc_nm_pauseread(handle);
atomic_fetch_add(&creads, 1);
magic = *(uint64_t *)tcp_buffer_storage;
assert_true(magic == stop_magic || magic == send_magic);
tcp_buffer_length -= sizeof(magic);
memmove(tcp_buffer_storage, tcp_buffer_storage + sizeof(magic),
tcp_buffer_length);
if (magic == send_magic) {
tcp_connect_send(handle);
return;
} else if (magic == stop_magic) {
/* We are done, so we don't send anything back */
/* There should be no more packets in the buffer */
assert_int_equal(tcp_buffer_length, 0);
}
}
unref:
isc_nmhandle_detach(&handle);
}
static void
tcp_connect_send_cb(isc_nmhandle_t *handle, isc_result_t eresult, void *cbarg) {
assert_non_null(handle);
UNUSED(cbarg);
if (eresult == ISC_R_SUCCESS) {
atomic_fetch_add(&csends, 1);
isc_nm_resumeread(handle);
} else {
/* Send failed, we need to stop reading too */
isc_nm_cancelread(handle);
}
}
static void
tcp_connect_shutdown(isc_nmhandle_t *handle, isc_result_t eresult,
void *cbarg) {
UNUSED(cbarg);
assert_non_null(handle);
if (eresult == ISC_R_SUCCESS) {
atomic_fetch_add(&csends, 1);
} else {
isc_nm_cancelread(handle);
}
}
static void
tcp_connect_send(isc_nmhandle_t *handle) {
uint_fast64_t sends = atomic_load(&nsends);
while (sends > 0) {
/* Continue until we subtract or we are done */
if (atomic_compare_exchange_weak(&nsends, &sends, sends - 1)) {
sends--;
break;
}
}
if (sends == 0) {
isc_nm_send(handle, (isc_region_t *)&stop_msg,
tcp_connect_shutdown, NULL);
} else {
isc_nm_send(handle, (isc_region_t *)&send_msg,
tcp_connect_send_cb, NULL);
}
}
static void
tcp_connect_connect_cb(isc_nmhandle_t *handle, isc_result_t eresult,
void *cbarg) {
isc_nmhandle_t *readhandle = NULL;
UNUSED(cbarg);
if (eresult != ISC_R_SUCCESS) {
uint_fast64_t sends = atomic_load(&nsends);
/* We failed to connect; try again */
while (sends > 0) {
/* Continue until we subtract or we are done */
if (atomic_compare_exchange_weak(&nsends, &sends,
sends - 1)) {
sends--;
break;
}
}
return;
}
atomic_fetch_add(&cconnects, 1);
isc_nmhandle_attach(handle, &readhandle);
isc_nm_read(handle, tcp_connect_read_cb, NULL);
tcp_connect_send(handle);
}
static isc_result_t
tcp_listen_accept_cb(isc_nmhandle_t *handle, isc_result_t result, void *cbarg);
static isc_threadresult_t
tcp_connect_thread(isc_threadarg_t arg) {
isc_nm_t *connect_nm = (isc_nm_t *)arg;
isc_sockaddr_t tcp_connect_addr;
tcp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&tcp_connect_addr, &in6addr_loopback, 0);
while (atomic_load(&nsends) > 0) {
(void)isc_nm_tcpconnect(connect_nm,
(isc_nmiface_t *)&tcp_connect_addr,
(isc_nmiface_t *)&tcp_listen_addr,
tcp_connect_connect_cb, NULL, 1000, 0);
}
return ((isc_threadresult_t)0);
}
static isc_quota_t *
tcp_listener_init_quota(size_t nthreads) {
isc_quota_t *quotap = NULL;
if (atomic_load(&check_listener_quota)) {
unsigned max_quota = ISC_MAX(nthreads / 2, 1);
isc_quota_max(&listener_quota, max_quota);
quotap = &listener_quota;
}
return (quotap);
}
static void
tcp_recv_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_nmsocket_t *listen_sock = NULL;
isc_thread_t threads[32] = { 0 };
isc_quota_t *quotap = tcp_listener_init_quota(nthreads);
result = isc_nm_listentcp(listen_nm, (isc_nmiface_t *)&tcp_listen_addr,
tcp_listen_accept_cb, NULL, 0, 0, quotap,
&listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(tcp_connect_thread, connect_nm, &threads[i]);
}
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
isc_nm_closedown(connect_nm);
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
X(saccepts);
/* assert_true(atomic_load(&csends) >= atomic_load(&sreads)); */
assert_true(atomic_load(&sreads) >= atomic_load(&ssends));
/* assert_true(atomic_load(&ssends) >= atomic_load(&creads)); */
assert_true(atomic_load(&creads) <= atomic_load(&csends));
assert_true(atomic_load(&creads) >= atomic_load(&ctimeouts));
}
static void
tcp_recv_half_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_nmsocket_t *listen_sock = NULL;
isc_thread_t threads[32] = { 0 };
isc_quota_t *quotap = tcp_listener_init_quota(nthreads);
result = isc_nm_listentcp(listen_nm, (isc_nmiface_t *)&tcp_listen_addr,
tcp_listen_accept_cb, NULL, 0, 0, quotap,
&listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(tcp_connect_thread, connect_nm, &threads[i]);
}
while (atomic_load(&nsends) >= (NSENDS * NWRITES) / 2) {
isc_thread_yield();
}
isc_nm_closedown(connect_nm);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
X(saccepts);
/* assert_true(atomic_load(&csends) >= atomic_load(&sreads)); */
assert_true(atomic_load(&sreads) >= atomic_load(&ssends));
/* assert_true(atomic_load(&ssends) >= atomic_load(&creads)); */
assert_true(atomic_load(&creads) <= atomic_load(&csends));
assert_true(atomic_load(&creads) >= atomic_load(&ctimeouts));
}
static void
tcp_half_recv_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_nmsocket_t *listen_sock = NULL;
isc_thread_t threads[32] = { 0 };
isc_quota_t *quotap = tcp_listener_init_quota(nthreads);
result = isc_nm_listentcp(listen_nm, (isc_nmiface_t *)&tcp_listen_addr,
tcp_listen_accept_cb, NULL, 0, 0, quotap,
&listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(tcp_connect_thread, connect_nm, &threads[i]);
}
while (atomic_load(&nsends) >= (NSENDS * NWRITES) / 2) {
isc_thread_yield();
}
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
isc_nm_closedown(connect_nm);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
X(saccepts);
/* assert_true(atomic_load(&csends) >= atomic_load(&sreads)); */
assert_true(atomic_load(&sreads) >= atomic_load(&ssends));
/* assert_true(atomic_load(&ssends) >= atomic_load(&creads)); */
assert_true(atomic_load(&creads) <= atomic_load(&csends));
assert_true(atomic_load(&creads) >= atomic_load(&ctimeouts));
}
static void
tcp_half_recv_half_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_nmsocket_t *listen_sock = NULL;
isc_thread_t threads[32] = { 0 };
isc_quota_t *quotap = tcp_listener_init_quota(nthreads);
result = isc_nm_listentcp(listen_nm, (isc_nmiface_t *)&tcp_listen_addr,
tcp_listen_accept_cb, NULL, 0, 0, quotap,
&listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(tcp_connect_thread, connect_nm, &threads[i]);
}
while (atomic_load(&nsends) >= (NSENDS * NWRITES) / 2) {
isc_thread_yield();
}
isc_nm_closedown(connect_nm);
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
X(saccepts);
/* assert_true(atomic_load(&csends) >= atomic_load(&sreads)); */
assert_true(atomic_load(&sreads) >= atomic_load(&ssends));
/* assert_true(atomic_load(&ssends) >= atomic_load(&creads)); */
assert_true(atomic_load(&creads) <= atomic_load(&csends));
assert_true(atomic_load(&creads) >= atomic_load(&ctimeouts));
}
static void
tcp_recv_send_quota(void **state) {
atomic_store(&check_listener_quota, true);
tcp_recv_send(state);
}
static void
tcp_recv_half_send_quota(void **state) {
atomic_store(&check_listener_quota, true);
tcp_recv_half_send(state);
}
static void
tcp_half_recv_send_quota(void **state) {
atomic_store(&check_listener_quota, true);
tcp_half_recv_send(state);
}
static void
tcp_half_recv_half_send_quota(void **state) {
atomic_store(&check_listener_quota, true);
tcp_half_recv_half_send(state);
}
/* TCP Listener */
/*
* TODO:
* 1. write a timeout test
*/
static void
tcp_listen_read_cb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *cbarg);
static void
tcp_listen_send_cb(isc_nmhandle_t *handle, isc_result_t eresult, void *cbarg) {
UNUSED(cbarg);
assert_non_null(handle);
if (eresult == ISC_R_SUCCESS) {
atomic_fetch_add(&ssends, 1);
isc_nm_resumeread(handle);
}
}
static void
tcp_listen_read_cb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *cbarg) {
uint64_t magic = 0;
UNUSED(cbarg);
assert_non_null(handle);
if (eresult != ISC_R_SUCCESS) {
goto unref;
}
atomic_fetch_add(&sreads, 1);
memmove(tcp_buffer_storage + tcp_buffer_length, region->base,
region->length);
tcp_buffer_length += region->length;
if (tcp_buffer_length >= sizeof(magic)) {
isc_nm_pauseread(handle);
magic = *(uint64_t *)tcp_buffer_storage;
assert_true(magic == stop_magic || magic == send_magic);
tcp_buffer_length -= sizeof(magic);
memmove(tcp_buffer_storage, tcp_buffer_storage + sizeof(magic),
tcp_buffer_length);
if (magic == send_magic) {
isc_nm_send(handle, region, tcp_listen_send_cb, NULL);
return;
} else if (magic == stop_magic) {
/* We are done, so we don't send anything back */
/* There should be no more packets in the buffer */
assert_int_equal(tcp_buffer_length, 0);
if (atomic_load(&check_listener_quota)) {
int_fast32_t concurrent =
isc__nm_tcp_listener_nactive(
handle->sock->server->parent);
assert_true(concurrent >= 0);
assert_true((uint_fast32_t)concurrent <=
isc_quota_getmax(&listener_quota));
P(concurrent);
}
}
}
unref:
isc_nmhandle_detach(&handle);
}
static isc_result_t
tcp_listen_accept_cb(isc_nmhandle_t *handle, isc_result_t result, void *cbarg) {
isc_nmhandle_t *readhandle = NULL;
UNUSED(cbarg);
if (result != ISC_R_SUCCESS) {
return (result);
}
tcp_buffer_length = 0;
atomic_fetch_add(&saccepts, 1);
if (atomic_load(&check_listener_quota)) {
int_fast32_t concurrent = isc__nm_tcp_listener_nactive(
handle->sock->server->parent);
assert_true(concurrent >= 0);
assert_true((uint_fast32_t)concurrent <=
isc_quota_getmax(&listener_quota));
P(concurrent);
}
isc_nmhandle_attach(handle, &readhandle);
isc_nm_read(handle, tcp_listen_read_cb, NULL);
return (ISC_R_SUCCESS);
}
int
main(void) {
const struct CMUnitTest tests[] = {
cmocka_unit_test_setup_teardown(tcp_recv_send_quota, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(tcp_recv_half_send_quota,
nm_setup, nm_teardown),
cmocka_unit_test_setup_teardown(tcp_half_recv_send_quota,
nm_setup, nm_teardown),
cmocka_unit_test_setup_teardown(tcp_half_recv_half_send_quota,
nm_setup, nm_teardown)
};
return (cmocka_run_group_tests(tests, _setup, _teardown));
}
#else /* HAVE_CMOCKA */
#include <stdio.h>
int
main(void) {
printf("1..0 # Skipped: cmocka not available\n");
return (0);
}
#endif /* if HAVE_CMOCKA */

(File diff suppressed because it is too large.)

lib/isc/tests/tcpdns_test.c (new file, 879 lines)
@@ -0,0 +1,879 @@
/*
* Copyright (C) Internet Systems Consortium, Inc. ("ISC")
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, you can obtain one at https://mozilla.org/MPL/2.0/.
*
* See the COPYRIGHT file distributed with this work for additional
* information regarding copyright ownership.
*/
#if HAVE_CMOCKA
#include <sched.h> /* IWYU pragma: keep */
#include <setjmp.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <uv.h>
#define UNIT_TESTING
#include <cmocka.h>
#include <isc/atomic.h>
#include <isc/buffer.h>
#include <isc/condition.h>
#include <isc/mutex.h>
#include <isc/netmgr.h>
#include <isc/nonce.h>
#include <isc/os.h>
#include <isc/refcount.h>
#include <isc/sockaddr.h>
#include <isc/thread.h>
#include "../netmgr/netmgr-int.h"
#include "isctest.h"
#define MAX_NM 2
static isc_sockaddr_t tcpdns_listen_addr;
static uint64_t send_magic = 0;
static uint64_t stop_magic = 0;
static uv_buf_t send_msg = { .base = (char *)&send_magic,
.len = sizeof(send_magic) };
static uv_buf_t stop_msg = { .base = (char *)&stop_magic,
.len = sizeof(stop_magic) };
static atomic_uint_fast64_t nsends;
static atomic_uint_fast64_t ssends;
static atomic_uint_fast64_t sreads;
static atomic_uint_fast64_t cconnects;
static atomic_uint_fast64_t csends;
static atomic_uint_fast64_t creads;
static atomic_uint_fast64_t ctimeouts;
static unsigned int workers = 3;
static bool reuse_supported = true;
#define NSENDS 100
#define NWRITES 10
#define CHECK_RANGE_FULL(v) \
{ \
int __v = atomic_load(&v); \
assert_true(__v > NSENDS * NWRITES * 10 / 100); \
assert_true(__v <= NSENDS * NWRITES * 110 / 100); \
}
#define CHECK_RANGE_HALF(v) \
{ \
int __v = atomic_load(&v); \
assert_true(__v > NSENDS * NWRITES * 5 / 100); \
assert_true(__v <= NSENDS * NWRITES * 60 / 100); \
}
/* Enable this to print values while running tests */
#undef PRINT_DEBUG
#ifdef PRINT_DEBUG
#define X(v) fprintf(stderr, #v " = %" PRIu64 "\n", atomic_load(&v))
#else
#define X(v)
#endif
static int
setup_ephemeral_port(isc_sockaddr_t *addr, sa_family_t family) {
isc_result_t result;
socklen_t addrlen = sizeof(*addr);
int fd;
int r;
isc_sockaddr_fromin6(addr, &in6addr_loopback, 0);
fd = socket(AF_INET6, family, 0);
if (fd < 0) {
perror("setup_ephemeral_port: socket()");
return (-1);
}
r = bind(fd, (const struct sockaddr *)&addr->type.sa,
sizeof(addr->type.sin6));
if (r != 0) {
perror("setup_ephemeral_port: bind()");
close(fd);
return (r);
}
r = getsockname(fd, (struct sockaddr *)&addr->type.sa, &addrlen);
if (r != 0) {
perror("setup_ephemeral_port: getsockname()");
close(fd);
return (r);
}
result = isc__nm_socket_reuse(fd);
if (result != ISC_R_SUCCESS && result != ISC_R_NOTIMPLEMENTED) {
fprintf(stderr,
"setup_ephemeral_port: isc__nm_socket_reuse(): %s",
isc_result_totext(result));
close(fd);
return (-1);
}
result = isc__nm_socket_reuse_lb(fd);
if (result != ISC_R_SUCCESS && result != ISC_R_NOTIMPLEMENTED) {
fprintf(stderr,
"setup_ephemeral_port: isc__nm_socket_reuse_lb(): %s",
isc_result_totext(result));
close(fd);
return (-1);
}
if (result == ISC_R_NOTIMPLEMENTED) {
reuse_supported = false;
}
#if IPV6_RECVERR
#define setsockopt_on(socket, level, name) \
setsockopt(socket, level, name, &(int){ 1 }, sizeof(int))
r = setsockopt_on(fd, IPPROTO_IPV6, IPV6_RECVERR);
if (r != 0) {
perror("setup_ephemeral_port");
close(fd);
return (r);
}
#endif
return (fd);
}
static int
_setup(void **state) {
UNUSED(state);
/* workers = isc_os_ncpus(); */
if (isc_test_begin(NULL, true, workers) != ISC_R_SUCCESS) {
return (-1);
}
signal(SIGPIPE, SIG_IGN);
return (0);
}
static int
_teardown(void **state) {
UNUSED(state);
isc_test_end();
return (0);
}
/* Generic */
static void
noop_recv_cb(isc_nmhandle_t *handle, isc_result_t eresult, isc_region_t *region,
void *cbarg) {
UNUSED(handle);
UNUSED(eresult);
UNUSED(region);
UNUSED(cbarg);
}
static unsigned int
noop_accept_cb(isc_nmhandle_t *handle, unsigned int result, void *cbarg) {
UNUSED(handle);
UNUSED(result);
UNUSED(cbarg);
return (0);
}
static void
noop_connect_cb(isc_nmhandle_t *handle, isc_result_t result, void *cbarg) {
UNUSED(handle);
UNUSED(result);
UNUSED(cbarg);
}
thread_local uint8_t tcpdns_buffer_storage[4096];
thread_local size_t tcpdns_buffer_length = 0;
static int
nm_setup(void **state) {
size_t nworkers = ISC_MAX(ISC_MIN(workers, 32), 1);
int tcpdns_listen_sock = -1;
isc_nm_t **nm = NULL;
tcpdns_listen_addr = (isc_sockaddr_t){ .length = 0 };
tcpdns_listen_sock = setup_ephemeral_port(&tcpdns_listen_addr,
SOCK_STREAM);
if (tcpdns_listen_sock < 0) {
return (-1);
}
close(tcpdns_listen_sock);
tcpdns_listen_sock = -1;
atomic_store(&nsends, NSENDS * NWRITES);
atomic_store(&csends, 0);
atomic_store(&creads, 0);
atomic_store(&sreads, 0);
atomic_store(&ssends, 0);
atomic_store(&ctimeouts, 0);
atomic_store(&cconnects, 0);
isc_nonce_buf(&send_magic, sizeof(send_magic));
isc_nonce_buf(&stop_magic, sizeof(stop_magic));
if (send_magic == stop_magic) {
return (-1);
}
nm = isc_mem_get(test_mctx, MAX_NM * sizeof(nm[0]));
for (size_t i = 0; i < MAX_NM; i++) {
nm[i] = isc_nm_start(test_mctx, nworkers);
assert_non_null(nm[i]);
isc_nm_settimeouts(nm[i], 1000, 1000, 1000, 1000);
}
*state = nm;
return (0);
}
static int
nm_teardown(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
for (size_t i = 0; i < MAX_NM; i++) {
isc_nm_destroy(&nm[i]);
assert_null(nm[i]);
}
isc_mem_put(test_mctx, nm, MAX_NM * sizeof(nm[0]));
return (0);
}
thread_local size_t nwrites = NWRITES;
/* TCPDNS */
static void
tcpdns_connect_send_cb(isc_nmhandle_t *handle, isc_result_t eresult,
void *cbarg);
static void
tcpdns_connect_send(isc_nmhandle_t *handle);
static void
tcpdns_connect_send_cb(isc_nmhandle_t *handle, isc_result_t eresult,
void *cbarg) {
assert_non_null(handle);
UNUSED(cbarg);
if (eresult == ISC_R_SUCCESS) {
atomic_fetch_add(&csends, 1);
} else {
/* Send failed, we need to stop reading too */
isc_nm_cancelread(handle);
}
}
static void
tcpdns_connect_send(isc_nmhandle_t *handle) {
uint_fast64_t sends = atomic_load(&nsends);
/* Continue until we subtract one or we have sent them all */
while (sends > 0) {
if (atomic_compare_exchange_weak(&nsends, &sends, sends - 1)) {
sends--;
break;
}
}
if (sends == 0) {
isc_nm_send(handle, (isc_region_t *)&stop_msg,
tcpdns_connect_send_cb, NULL);
} else {
isc_nm_send(handle, (isc_region_t *)&send_msg,
tcpdns_connect_send_cb, NULL);
}
}
static void
tcpdns_connect_read_cb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *cbarg) {
uint64_t magic = 0;
UNUSED(cbarg);
assert_non_null(handle);
if (eresult != ISC_R_SUCCESS) {
goto unref;
}
assert_int_equal(region->length, sizeof(magic));
atomic_fetch_add(&creads, 1);
magic = *(uint64_t *)region->base;
assert_true(magic == stop_magic || magic == send_magic);
unref:
isc_nmhandle_detach(&handle);
}
static void
tcpdns_connect_connect_cb(isc_nmhandle_t *handle, isc_result_t eresult,
void *cbarg) {
isc_nmhandle_t *readhandle = NULL;
UNUSED(cbarg);
if (eresult != ISC_R_SUCCESS) {
uint_fast64_t sends = atomic_load(&nsends);
/* We failed to connect; try again */
while (sends > 0) {
/* Continue until we subtract or we are done */
if (atomic_compare_exchange_weak(&nsends, &sends,
sends - 1)) {
sends--;
break;
}
}
return;
}
atomic_fetch_add(&cconnects, 1);
isc_nmhandle_attach(handle, &readhandle);
isc_nm_read(handle, tcpdns_connect_read_cb, NULL);
tcpdns_connect_send(handle);
}
static void
tcpdns_noop(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
isc_sockaddr_t tcpdns_connect_addr;
tcpdns_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&tcpdns_connect_addr, &in6addr_loopback, 0);
result = isc_nm_listentcpdns(
listen_nm, (isc_nmiface_t *)&tcpdns_listen_addr, noop_recv_cb,
NULL, noop_accept_cb, NULL, 0, 0, NULL, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
(void)isc_nm_tcpdnsconnect(connect_nm,
(isc_nmiface_t *)&tcpdns_connect_addr,
(isc_nmiface_t *)&tcpdns_listen_addr,
noop_connect_cb, NULL, 1000, 0);
isc_nm_closedown(connect_nm);
assert_int_equal(0, atomic_load(&cconnects));
assert_int_equal(0, atomic_load(&csends));
assert_int_equal(0, atomic_load(&creads));
assert_int_equal(0, atomic_load(&ctimeouts));
assert_int_equal(0, atomic_load(&sreads));
assert_int_equal(0, atomic_load(&ssends));
}
static void
tcpdns_noresponse(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
isc_sockaddr_t tcpdns_connect_addr;
tcpdns_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&tcpdns_connect_addr, &in6addr_loopback, 0);
result = isc_nm_listentcpdns(
listen_nm, (isc_nmiface_t *)&tcpdns_listen_addr, noop_recv_cb,
NULL, noop_accept_cb, NULL, 0, 0, NULL, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
(void)isc_nm_tcpdnsconnect(connect_nm,
(isc_nmiface_t *)&tcpdns_connect_addr,
(isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_connect_connect_cb, NULL, 1000, 0);
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
isc_nm_closedown(connect_nm);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
assert_true(atomic_load(&cconnects) <= 1);
assert_true(atomic_load(&csends) <= 1);
assert_int_equal(0, atomic_load(&creads));
assert_int_equal(0, atomic_load(&ctimeouts));
assert_int_equal(0, atomic_load(&sreads));
assert_int_equal(0, atomic_load(&ssends));
}
static void
tcpdns_listen_read_cb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *cbarg);
static void
tcpdns_listen_send_cb(isc_nmhandle_t *handle, isc_result_t eresult,
void *cbarg) {
UNUSED(cbarg);
assert_non_null(handle);
if (eresult != ISC_R_SUCCESS) {
return;
}
atomic_fetch_add(&ssends, 1);
}
static void
tcpdns_listen_read_cb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *cbarg) {
uint64_t magic = 0;
UNUSED(cbarg);
assert_non_null(handle);
if (eresult != ISC_R_SUCCESS) {
return;
}
atomic_fetch_add(&sreads, 1);
assert_int_equal(region->length, sizeof(magic));
magic = *(uint64_t *)region->base;
assert_true(magic == stop_magic || magic == send_magic);
if (magic == send_magic) {
isc_nm_send(handle, region, tcpdns_listen_send_cb, NULL);
return;
} else if (magic == stop_magic) {
/* We are done, we don't send anything back */
}
}
static isc_result_t
tcpdns_listen_accept_cb(isc_nmhandle_t *handle, isc_result_t eresult,
void *cbarg) {
UNUSED(handle);
UNUSED(cbarg);
return (eresult);
}
static isc_threadresult_t
tcpdns_connect_thread(isc_threadarg_t arg) {
isc_nm_t *connect_nm = (isc_nm_t *)arg;
isc_sockaddr_t tcpdns_connect_addr;
tcpdns_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&tcpdns_connect_addr, &in6addr_loopback, 0);
while (atomic_load(&nsends) > 0) {
(void)isc_nm_tcpdnsconnect(
connect_nm, (isc_nmiface_t *)&tcpdns_connect_addr,
(isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_connect_connect_cb, NULL, 1000, 0);
}
return ((isc_threadresult_t)0);
}
static void
tcpdns_recv_one(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
isc_sockaddr_t tcpdns_connect_addr;
tcpdns_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&tcpdns_connect_addr, &in6addr_loopback, 0);
atomic_store(&nsends, 1);
result = isc_nm_listentcpdns(
listen_nm, (isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_listen_read_cb, NULL, tcpdns_listen_accept_cb, NULL, 0,
0, NULL, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
(void)isc_nm_tcpdnsconnect(connect_nm,
(isc_nmiface_t *)&tcpdns_connect_addr,
(isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_connect_connect_cb, NULL, 1000, 0);
while (atomic_load(&nsends) > 0) {
isc_thread_yield();
}
while (atomic_load(&cconnects) != 1 || atomic_load(&ssends) != 0 ||
atomic_load(&sreads) != 1 || atomic_load(&creads) != 0 ||
atomic_load(&csends) != 1)
{
isc_thread_yield();
}
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
isc_nm_closedown(connect_nm);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
assert_int_equal(atomic_load(&cconnects), 1);
assert_int_equal(atomic_load(&csends), 1);
assert_int_equal(atomic_load(&creads), 0);
assert_int_equal(atomic_load(&ctimeouts), 0);
assert_int_equal(atomic_load(&sreads), 1);
assert_int_equal(atomic_load(&ssends), 0);
}
static void
tcpdns_recv_two(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
isc_sockaddr_t tcpdns_connect_addr;
tcpdns_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&tcpdns_connect_addr, &in6addr_loopback, 0);
atomic_store(&nsends, 2);
result = isc_nm_listentcpdns(
listen_nm, (isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_listen_read_cb, NULL, tcpdns_listen_accept_cb, NULL, 0,
0, NULL, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
result = isc_nm_tcpdnsconnect(connect_nm,
(isc_nmiface_t *)&tcpdns_connect_addr,
(isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_connect_connect_cb, NULL, 1000, 0);
assert_int_equal(result, ISC_R_SUCCESS);
isc_nm_settimeouts(connect_nm, 1000, 1000, 1000, 1000);
result = isc_nm_tcpdnsconnect(connect_nm,
(isc_nmiface_t *)&tcpdns_connect_addr,
(isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_connect_connect_cb, NULL, 1000, 0);
assert_int_equal(result, ISC_R_SUCCESS);
while (atomic_load(&nsends) > 0) {
isc_thread_yield();
}
while (atomic_load(&sreads) != 2 || atomic_load(&ssends) != 1 ||
atomic_load(&csends) != 2 || atomic_load(&creads) != 1)
{
isc_thread_yield();
}
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
isc_nm_closedown(connect_nm);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
assert_int_equal(atomic_load(&cconnects), 2);
assert_int_equal(atomic_load(&csends), 2);
assert_int_equal(atomic_load(&creads), 1);
assert_int_equal(atomic_load(&ctimeouts), 0);
assert_int_equal(atomic_load(&sreads), 2);
assert_int_equal(atomic_load(&ssends), 1);
}
static void
tcpdns_recv_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_thread_t threads[32] = { 0 };
if (!reuse_supported) {
skip();
return;
}
result = isc_nm_listentcpdns(
listen_nm, (isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_listen_read_cb, NULL, tcpdns_listen_accept_cb, NULL, 0,
0, NULL, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(tcpdns_connect_thread, connect_nm,
&threads[i]);
}
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
isc_nm_closedown(connect_nm);
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
CHECK_RANGE_FULL(csends);
CHECK_RANGE_FULL(creads);
CHECK_RANGE_FULL(sreads);
CHECK_RANGE_FULL(ssends);
}
static void
tcpdns_recv_half_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_thread_t threads[32] = { 0 };
if (!reuse_supported) {
skip();
return;
}
result = isc_nm_listentcpdns(
listen_nm, (isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_listen_read_cb, NULL, tcpdns_listen_accept_cb, NULL, 0,
0, NULL, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(tcpdns_connect_thread, connect_nm,
&threads[i]);
}
while (atomic_load(&nsends) >= (NSENDS * NWRITES) / 2) {
isc_thread_yield();
}
isc_nm_closedown(connect_nm);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
CHECK_RANGE_HALF(csends);
CHECK_RANGE_HALF(creads);
CHECK_RANGE_HALF(sreads);
CHECK_RANGE_HALF(ssends);
}
static void
tcpdns_half_recv_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_thread_t threads[32] = { 0 };
if (!reuse_supported) {
skip();
return;
}
result = isc_nm_listentcpdns(
listen_nm, (isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_listen_read_cb, NULL, tcpdns_listen_accept_cb, NULL, 0,
0, NULL, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(tcpdns_connect_thread, connect_nm,
&threads[i]);
}
while (atomic_load(&nsends) >= (NSENDS * NWRITES) / 2) {
isc_thread_yield();
}
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
isc_nm_closedown(connect_nm);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
CHECK_RANGE_HALF(csends);
CHECK_RANGE_HALF(creads);
CHECK_RANGE_HALF(sreads);
CHECK_RANGE_HALF(ssends);
}
static void
tcpdns_half_recv_half_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_thread_t threads[32] = { 0 };
if (!reuse_supported) {
skip();
return;
}
result = isc_nm_listentcpdns(
listen_nm, (isc_nmiface_t *)&tcpdns_listen_addr,
tcpdns_listen_read_cb, NULL, tcpdns_listen_accept_cb, NULL, 0,
0, NULL, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(tcpdns_connect_thread, connect_nm,
&threads[i]);
}
while (atomic_load(&nsends) >= (NSENDS * NWRITES) / 2) {
isc_thread_yield();
}
isc_nm_closedown(connect_nm);
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
CHECK_RANGE_HALF(csends);
CHECK_RANGE_HALF(creads);
CHECK_RANGE_HALF(sreads);
CHECK_RANGE_HALF(ssends);
}
int
main(void) {
const struct CMUnitTest tests[] = {
cmocka_unit_test_setup_teardown(tcpdns_recv_one, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(tcpdns_recv_two, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(tcpdns_noop, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(tcpdns_noresponse, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(tcpdns_recv_send, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(tcpdns_recv_half_send, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(tcpdns_half_recv_send, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(tcpdns_half_recv_half_send,
nm_setup, nm_teardown),
};
return (cmocka_run_group_tests(tests, _setup, _teardown));
}
#else /* HAVE_CMOCKA */
#include <stdio.h>
int
main(void) {
printf("1..0 # Skipped: cmocka not available\n");
return (0);
}
#endif /* if HAVE_CMOCKA */

lib/isc/tests/udp_test.c (new file, 892 lines)
@@ -0,0 +1,892 @@
/*
* Copyright (C) Internet Systems Consortium, Inc. ("ISC")
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, you can obtain one at https://mozilla.org/MPL/2.0/.
*
* See the COPYRIGHT file distributed with this work for additional
* information regarding copyright ownership.
*/
#if HAVE_CMOCKA
#include <sched.h> /* IWYU pragma: keep */
#include <setjmp.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <uv.h>
#define UNIT_TESTING
#include <cmocka.h>
#include <isc/atomic.h>
#include <isc/buffer.h>
#include <isc/condition.h>
#include <isc/mutex.h>
#include <isc/netmgr.h>
#include <isc/nonce.h>
#include <isc/os.h>
#include <isc/refcount.h>
#include <isc/sockaddr.h>
#include <isc/thread.h>
#include "uv_wrap.h"
#define KEEP_BEFORE
#include "../netmgr/netmgr-int.h"
#include "../netmgr/udp.c"
#include "../netmgr/uv-compat.c"
#include "../netmgr/uv-compat.h"
#include "isctest.h"
#define MAX_NM 2
static isc_sockaddr_t udp_listen_addr;
static uint64_t send_magic = 0;
static uint64_t stop_magic = 0;
static uv_buf_t send_msg = { .base = (char *)&send_magic,
.len = sizeof(send_magic) };
static uv_buf_t stop_msg = { .base = (char *)&stop_magic,
.len = sizeof(stop_magic) };
static atomic_uint_fast64_t nsends;
static atomic_uint_fast64_t ssends;
static atomic_uint_fast64_t sreads;
static atomic_uint_fast64_t cconnects;
static atomic_uint_fast64_t csends;
static atomic_uint_fast64_t creads;
static atomic_uint_fast64_t ctimeouts;
static unsigned int workers = 3;
#define NSENDS 100
#define NWRITES 10
/*
* The UDP protocol doesn't protect against packet duplication, but instead of
* inventing de-duplication, we just ignore the upper bound.
*/
#define CHECK_RANGE_FULL(v) \
{ \
int __v = atomic_load(&v); \
assert_true(NSENDS * NWRITES * 20 / 100 <= __v); \
/* assert_true(__v <= NSENDS * NWRITES * 110 / 100); */ \
}
#define CHECK_RANGE_HALF(v) \
{ \
int __v = atomic_load(&v); \
assert_true(NSENDS * NWRITES * 10 / 100 <= __v); \
/* assert_true(__v <= NSENDS * NWRITES * 60 / 100); */ \
}
/* Enable this to print values while running tests */
#undef PRINT_DEBUG
#ifdef PRINT_DEBUG
#define X(v) fprintf(stderr, #v " = %" PRIu64 "\n", atomic_load(&v))
#else
#define X(v)
#endif
/* MOCK */
static int
setup_ephemeral_port(isc_sockaddr_t *addr, sa_family_t family) {
isc_result_t result;
socklen_t addrlen = sizeof(*addr);
int fd;
int r;
isc_sockaddr_fromin6(addr, &in6addr_loopback, 0);
fd = socket(AF_INET6, family, 0);
if (fd < 0) {
perror("setup_ephemeral_port: socket()");
return (-1);
}
r = bind(fd, (const struct sockaddr *)&addr->type.sa,
sizeof(addr->type.sin6));
if (r != 0) {
perror("setup_ephemeral_port: bind()");
close(fd);
return (r);
}
r = getsockname(fd, (struct sockaddr *)&addr->type.sa, &addrlen);
if (r != 0) {
perror("setup_ephemeral_port: getsockname()");
close(fd);
return (r);
}
result = isc__nm_socket_reuse(fd);
if (result != ISC_R_SUCCESS && result != ISC_R_NOTIMPLEMENTED) {
fprintf(stderr,
"setup_ephemeral_port: isc__nm_socket_reuse(): %s",
isc_result_totext(result));
close(fd);
return (-1);
}
result = isc__nm_socket_reuse_lb(fd);
if (result != ISC_R_SUCCESS && result != ISC_R_NOTIMPLEMENTED) {
fprintf(stderr,
"setup_ephemeral_port: isc__nm_socket_reuse_lb(): %s",
isc_result_totext(result));
close(fd);
return (-1);
}
#if IPV6_RECVERR
#define setsockopt_on(socket, level, name) \
setsockopt(socket, level, name, &(int){ 1 }, sizeof(int))
r = setsockopt_on(fd, IPPROTO_IPV6, IPV6_RECVERR);
if (r != 0) {
perror("setup_ephemeral_port");
close(fd);
return (r);
}
#endif
return (fd);
}
static int
_setup(void **state) {
UNUSED(state);
/* workers = isc_os_ncpus(); */
if (isc_test_begin(NULL, true, workers) != ISC_R_SUCCESS) {
return (-1);
}
signal(SIGPIPE, SIG_IGN);
return (0);
}
static int
_teardown(void **state) {
UNUSED(state);
isc_test_end();
return (0);
}
/* Generic */
static void
noop_recv_cb(isc_nmhandle_t *handle, isc_result_t eresult, isc_region_t *region,
void *cbarg) {
UNUSED(handle);
UNUSED(eresult);
UNUSED(region);
UNUSED(cbarg);
}
static void
noop_connect_cb(isc_nmhandle_t *handle, isc_result_t result, void *cbarg) {
UNUSED(handle);
UNUSED(result);
UNUSED(cbarg);
}
static int
nm_setup(void **state) {
size_t nworkers = ISC_MAX(ISC_MIN(workers, 32), 1);
int udp_listen_sock = -1;
isc_nm_t **nm = NULL;
udp_listen_addr = (isc_sockaddr_t){ .length = 0 };
udp_listen_sock = setup_ephemeral_port(&udp_listen_addr, SOCK_DGRAM);
if (udp_listen_sock < 0) {
return (-1);
}
close(udp_listen_sock);
udp_listen_sock = -1;
atomic_store(&nsends, NSENDS * NWRITES);
atomic_store(&csends, 0);
atomic_store(&creads, 0);
atomic_store(&sreads, 0);
atomic_store(&ssends, 0);
atomic_store(&ctimeouts, 0);
atomic_store(&cconnects, 0);
isc_nonce_buf(&send_magic, sizeof(send_magic));
isc_nonce_buf(&stop_magic, sizeof(stop_magic));
if (send_magic == stop_magic) {
return (-1);
}
nm = isc_mem_get(test_mctx, MAX_NM * sizeof(nm[0]));
for (size_t i = 0; i < MAX_NM; i++) {
nm[i] = isc_nm_start(test_mctx, nworkers);
assert_non_null(nm[i]);
}
*state = nm;
return (0);
}
static int
nm_teardown(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
for (size_t i = 0; i < MAX_NM; i++) {
isc_nm_destroy(&nm[i]);
assert_null(nm[i]);
}
isc_mem_put(test_mctx, nm, MAX_NM * sizeof(nm[0]));
return (0);
}
thread_local size_t nwrites = NWRITES;
/* UDP */
static void
udp_listen_send_cb(isc_nmhandle_t *handle, isc_result_t eresult, void *cbarg) {
assert_non_null(handle);
UNUSED(cbarg);
if (eresult == ISC_R_SUCCESS) {
atomic_fetch_add(&ssends, 1);
}
}
static void
udp_listen_recv_cb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *cbarg) {
uint64_t magic = 0;
assert_null(cbarg);
if (eresult != ISC_R_SUCCESS) {
return;
}
assert_int_equal(region->length, sizeof(send_magic));
atomic_fetch_add(&sreads, 1);
magic = *(uint64_t *)region->base;
assert_true(magic == stop_magic || magic == send_magic);
isc_nm_send(handle, region, udp_listen_send_cb, NULL);
}
static void
mock_listenudp_uv_udp_open(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
WILL_RETURN(uv_udp_open, UV_ENOMEM);
result = isc_nm_listenudp(listen_nm, (isc_nmiface_t *)&udp_listen_addr,
noop_recv_cb, NULL, 0, &listen_sock);
assert_int_not_equal(result, ISC_R_SUCCESS);
assert_null(listen_sock);
RESET_RETURN;
}
static void
mock_listenudp_uv_udp_bind(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
WILL_RETURN(uv_udp_bind, UV_EADDRINUSE);
result = isc_nm_listenudp(listen_nm, (isc_nmiface_t *)&udp_listen_addr,
noop_recv_cb, NULL, 0, &listen_sock);
assert_int_not_equal(result, ISC_R_SUCCESS);
assert_null(listen_sock);
RESET_RETURN;
}
static void
mock_listenudp_uv_udp_recv_start(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
WILL_RETURN(uv_udp_recv_start, UV_EADDRINUSE);
result = isc_nm_listenudp(listen_nm, (isc_nmiface_t *)&udp_listen_addr,
noop_recv_cb, NULL, 0, &listen_sock);
assert_int_not_equal(result, ISC_R_SUCCESS);
assert_null(listen_sock);
RESET_RETURN;
}
static void
mock_udpconnect_uv_udp_open(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
WILL_RETURN(uv_udp_open, UV_ENOMEM);
result = isc_nm_udpconnect(connect_nm,
(isc_nmiface_t *)&udp_connect_addr,
(isc_nmiface_t *)&udp_listen_addr,
noop_connect_cb, NULL, 1000, 0);
assert_int_not_equal(result, ISC_R_SUCCESS);
isc_nm_closedown(connect_nm);
RESET_RETURN;
}
static void
mock_udpconnect_uv_udp_bind(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
WILL_RETURN(uv_udp_bind, UV_ENOMEM);
result = isc_nm_udpconnect(connect_nm,
(isc_nmiface_t *)&udp_connect_addr,
(isc_nmiface_t *)&udp_listen_addr,
noop_connect_cb, NULL, 1000, 0);
assert_int_not_equal(result, ISC_R_SUCCESS);
isc_nm_closedown(connect_nm);
RESET_RETURN;
}
#if HAVE_UV_UDP_CONNECT
static void
mock_udpconnect_uv_udp_connect(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
WILL_RETURN(uv_udp_connect, UV_ENOMEM);
result = isc_nm_udpconnect(connect_nm,
(isc_nmiface_t *)&udp_connect_addr,
(isc_nmiface_t *)&udp_listen_addr,
noop_connect_cb, NULL, 1000, 0);
assert_int_not_equal(result, ISC_R_SUCCESS);
isc_nm_closedown(connect_nm);
RESET_RETURN;
}
#endif
static void
mock_udpconnect_uv_recv_buffer_size(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
WILL_RETURN(uv_recv_buffer_size, UV_ENOMEM);
result = isc_nm_udpconnect(connect_nm,
(isc_nmiface_t *)&udp_connect_addr,
(isc_nmiface_t *)&udp_listen_addr,
noop_connect_cb, NULL, 1000, 0);
assert_int_equal(result, ISC_R_SUCCESS); /* FIXME: should fail */
isc_nm_closedown(connect_nm);
RESET_RETURN;
}
static void
mock_udpconnect_uv_send_buffer_size(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
WILL_RETURN(uv_send_buffer_size, UV_ENOMEM);
result = isc_nm_udpconnect(connect_nm,
(isc_nmiface_t *)&udp_connect_addr,
(isc_nmiface_t *)&udp_listen_addr,
noop_connect_cb, NULL, 1000, 0);
assert_int_equal(result, ISC_R_SUCCESS); /* FIXME: should fail */
isc_nm_closedown(connect_nm);
RESET_RETURN;
}
static void
udp_noop(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
result = isc_nm_listenudp(listen_nm, (isc_nmiface_t *)&udp_listen_addr,
noop_recv_cb, NULL, 0, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
(void)isc_nm_udpconnect(connect_nm, (isc_nmiface_t *)&udp_connect_addr,
(isc_nmiface_t *)&udp_listen_addr,
noop_connect_cb, NULL, 1000, 0);
isc_nm_closedown(connect_nm);
assert_int_equal(0, atomic_load(&cconnects));
assert_int_equal(0, atomic_load(&csends));
assert_int_equal(0, atomic_load(&creads));
assert_int_equal(0, atomic_load(&ctimeouts));
assert_int_equal(0, atomic_load(&sreads));
assert_int_equal(0, atomic_load(&ssends));
}
static void
udp_connect_send_cb(isc_nmhandle_t *handle, isc_result_t eresult, void *cbarg);
static void
udp_connect_recv_cb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *cbarg);
static void
udp_connect_send_cb(isc_nmhandle_t *handle, isc_result_t eresult, void *cbarg) {
assert_non_null(handle);
UNUSED(eresult);
UNUSED(cbarg);
atomic_fetch_add(&csends, 1);
}
static void
udp_connect_send(isc_nmhandle_t *handle, isc_region_t *region) {
uint_fast64_t sends = atomic_load(&nsends);
while (sends > 0) {
/* Continue until we subtract or we are done */
if (atomic_compare_exchange_weak(&nsends, &sends, sends - 1)) {
break;
}
}
isc_nm_send(handle, region, udp_connect_send_cb, NULL);
}
static void
udp_connect_recv_cb(isc_nmhandle_t *handle, isc_result_t eresult,
isc_region_t *region, void *cbarg) {
uint64_t magic = 0;
UNUSED(cbarg);
assert_non_null(handle);
if (eresult != ISC_R_SUCCESS) {
goto unref;
}
assert_int_equal(region->length, sizeof(magic));
atomic_fetch_add(&creads, 1);
magic = *(uint64_t *)region->base;
assert_true(magic == stop_magic || magic == send_magic);
if (magic == stop_magic) {
goto unref;
}
if (isc_random_uniform(NWRITES) == 0) {
udp_connect_send(handle, (isc_region_t *)&stop_msg);
} else {
udp_connect_send(handle, (isc_region_t *)&send_msg);
}
unref:
isc_nmhandle_detach(&handle);
}
static void
udp_connect_connect_cb(isc_nmhandle_t *handle, isc_result_t eresult,
void *cbarg) {
isc_nmhandle_t *readhandle = NULL;
UNUSED(cbarg);
if (eresult != ISC_R_SUCCESS) {
uint_fast64_t sends = atomic_load(&nsends);
/* We failed to connect; try again */
while (sends > 0) {
/* Continue until we subtract or we are done */
if (atomic_compare_exchange_weak(&nsends, &sends,
sends - 1)) {
break;
}
}
return;
}
atomic_fetch_add(&cconnects, 1);
isc_nmhandle_attach(handle, &readhandle);
isc_nm_read(handle, udp_connect_recv_cb, NULL);
udp_connect_send(handle, (isc_region_t *)&send_msg);
}
static isc_threadresult_t
udp_connect_thread(isc_threadarg_t arg) {
isc_nm_t *connect_nm = (isc_nm_t *)arg;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
while (atomic_load(&nsends) > 0) {
(void)isc_nm_udpconnect(connect_nm,
(isc_nmiface_t *)&udp_connect_addr,
(isc_nmiface_t *)&udp_listen_addr,
udp_connect_connect_cb, NULL, 1000, 0);
}
return ((isc_threadresult_t)0);
}
static void
udp_noresponse(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
isc_sockaddr_t udp_connect_addr;
udp_connect_addr = (isc_sockaddr_t){ .length = 0 };
isc_sockaddr_fromin6(&udp_connect_addr, &in6addr_loopback, 0);
result = isc_nm_listenudp(listen_nm, (isc_nmiface_t *)&udp_listen_addr,
noop_recv_cb, NULL, 0, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
(void)isc_nm_udpconnect(connect_nm, (isc_nmiface_t *)&udp_connect_addr,
(isc_nmiface_t *)&udp_listen_addr,
udp_connect_connect_cb, NULL, 1000, 0);
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
isc_nm_closedown(connect_nm);
while (atomic_load(&cconnects) != 1) {
isc_thread_yield();
}
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
assert_int_equal(1, atomic_load(&cconnects));
assert_true(atomic_load(&csends) <= 1);
assert_int_equal(0, atomic_load(&creads));
assert_int_equal(0, atomic_load(&ctimeouts));
assert_int_equal(0, atomic_load(&sreads));
assert_int_equal(0, atomic_load(&ssends));
}
static void
udp_recv_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_thread_t threads[32] = { 0 };
result = isc_nm_listenudp(listen_nm, (isc_nmiface_t *)&udp_listen_addr,
udp_listen_recv_cb, NULL, 0, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(udp_connect_thread, connect_nm, &threads[i]);
}
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
isc_nm_closedown(connect_nm);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
assert_true(atomic_load(&cconnects) >= (NSENDS - 1) * NWRITES);
CHECK_RANGE_FULL(csends);
CHECK_RANGE_FULL(creads);
CHECK_RANGE_FULL(sreads);
CHECK_RANGE_FULL(ssends);
}
static void
udp_recv_half_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_thread_t threads[32] = { 0 };
result = isc_nm_listenudp(listen_nm, (isc_nmiface_t *)&udp_listen_addr,
udp_listen_recv_cb, NULL, 0, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(udp_connect_thread, connect_nm, &threads[i]);
}
while (atomic_load(&nsends) >= (NSENDS * NWRITES) / 2) {
isc_thread_yield();
}
isc_nm_closedown(connect_nm);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
assert_true(atomic_load(&cconnects) >= (NSENDS - 1) * NWRITES);
CHECK_RANGE_FULL(csends);
CHECK_RANGE_HALF(creads);
CHECK_RANGE_HALF(sreads);
CHECK_RANGE_HALF(ssends);
}
static void
udp_half_recv_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_thread_t threads[32] = { 0 };
result = isc_nm_listenudp(listen_nm, (isc_nmiface_t *)&udp_listen_addr,
udp_listen_recv_cb, NULL, 0, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(udp_connect_thread, connect_nm, &threads[i]);
}
while (atomic_load(&nsends) >= (NSENDS * NWRITES) / 2) {
isc_thread_yield();
}
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
isc_nm_closedown(connect_nm);
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
assert_true(atomic_load(&cconnects) >= (NSENDS - 1) * NWRITES);
CHECK_RANGE_FULL(csends);
CHECK_RANGE_HALF(creads);
CHECK_RANGE_HALF(sreads);
CHECK_RANGE_HALF(ssends);
}
static void
udp_half_recv_half_send(void **state) {
isc_nm_t **nm = (isc_nm_t **)*state;
isc_nm_t *listen_nm = nm[0];
isc_nm_t *connect_nm = nm[1];
isc_result_t result = ISC_R_SUCCESS;
isc_nmsocket_t *listen_sock = NULL;
size_t nthreads = ISC_MAX(ISC_MIN(workers, 32), 1);
isc_thread_t threads[32] = { 0 };
result = isc_nm_listenudp(listen_nm, (isc_nmiface_t *)&udp_listen_addr,
udp_listen_recv_cb, NULL, 0, &listen_sock);
assert_int_equal(result, ISC_R_SUCCESS);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_create(udp_connect_thread, connect_nm, &threads[i]);
}
while (atomic_load(&nsends) >= (NSENDS * NWRITES) / 2) {
isc_thread_yield();
}
isc_nm_closedown(connect_nm);
isc_nm_stoplistening(listen_sock);
isc_nmsocket_close(&listen_sock);
assert_null(listen_sock);
for (size_t i = 0; i < nthreads; i++) {
isc_thread_join(threads[i], NULL);
}
X(cconnects);
X(csends);
X(creads);
X(ctimeouts);
X(sreads);
X(ssends);
assert_true(atomic_load(&cconnects) >= (NSENDS - 1) * NWRITES);
CHECK_RANGE_FULL(csends);
CHECK_RANGE_HALF(creads);
CHECK_RANGE_HALF(sreads);
CHECK_RANGE_HALF(ssends);
}
int
main(void) {
const struct CMUnitTest tests[] = {
cmocka_unit_test_setup_teardown(mock_listenudp_uv_udp_open,
nm_setup, nm_teardown),
cmocka_unit_test_setup_teardown(mock_listenudp_uv_udp_bind,
nm_setup, nm_teardown),
cmocka_unit_test_setup_teardown(
mock_listenudp_uv_udp_recv_start, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(mock_udpconnect_uv_udp_open,
nm_setup, nm_teardown),
cmocka_unit_test_setup_teardown(mock_udpconnect_uv_udp_bind,
nm_setup, nm_teardown),
#if HAVE_UV_UDP_CONNECT
cmocka_unit_test_setup_teardown(mock_udpconnect_uv_udp_connect,
nm_setup, nm_teardown),
#endif
cmocka_unit_test_setup_teardown(
mock_udpconnect_uv_recv_buffer_size, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(
mock_udpconnect_uv_send_buffer_size, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(udp_noop, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(udp_noresponse, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(udp_recv_send, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(udp_recv_half_send, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(udp_half_recv_send, nm_setup,
nm_teardown),
cmocka_unit_test_setup_teardown(udp_half_recv_half_send,
nm_setup, nm_teardown),
};
return (cmocka_run_group_tests(tests, _setup, _teardown));
}
#else /* HAVE_CMOCKA */
#include <stdio.h>
int
main(void) {
printf("1..0 # Skipped: cmocka not available\n");
return (0);
}
#endif /* if HAVE_CMOCKA */

lib/isc/tests/uv_wrap.h
@ -0,0 +1,319 @@
/*
* Copyright (C) Internet Systems Consortium, Inc. ("ISC")
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, you can obtain one at https://mozilla.org/MPL/2.0/.
*
* See the COPYRIGHT file distributed with this work for additional
* information regarding copyright ownership.
*/
#if HAVE_CMOCKA
#include <inttypes.h>
#include <sched.h> /* IWYU pragma: keep */
#include <setjmp.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <uv.h>
#include <isc/atomic.h>
#define UNIT_TESTING
#include <cmocka.h>
/* uv_udp_t */
int
__wrap_uv_udp_open(uv_udp_t *handle, uv_os_sock_t sock);
int
__wrap_uv_udp_bind(uv_udp_t *handle, const struct sockaddr *addr,
unsigned int flags);
#if HAVE_UV_UDP_CONNECT
int
__wrap_uv_udp_connect(uv_udp_t *handle, const struct sockaddr *addr);
int
__wrap_uv_udp_getpeername(const uv_udp_t *handle, struct sockaddr *name,
int *namelen);
#endif /* HAVE_UV_UDP_CONNECT */
int
__wrap_uv_udp_getsockname(const uv_udp_t *handle, struct sockaddr *name,
int *namelen);
int
__wrap_uv_udp_send(uv_udp_send_t *req, uv_udp_t *handle, const uv_buf_t bufs[],
unsigned int nbufs, const struct sockaddr *addr,
uv_udp_send_cb send_cb);
int
__wrap_uv_udp_recv_start(uv_udp_t *handle, uv_alloc_cb alloc_cb,
uv_udp_recv_cb recv_cb);
int
__wrap_uv_udp_recv_stop(uv_udp_t *handle);
/* uv_tcp_t */
int
__wrap_uv_tcp_open(uv_tcp_t *handle, uv_os_sock_t sock);
int
__wrap_uv_tcp_bind(uv_tcp_t *handle, const struct sockaddr *addr,
unsigned int flags);
int
__wrap_uv_tcp_getsockname(const uv_tcp_t *handle, struct sockaddr *name,
int *namelen);
int
__wrap_uv_tcp_getpeername(const uv_tcp_t *handle, struct sockaddr *name,
int *namelen);
int
__wrap_uv_tcp_connect(uv_connect_t *req, uv_tcp_t *handle,
const struct sockaddr *addr, uv_connect_cb cb);
/* uv_stream_t */
int
__wrap_uv_listen(uv_stream_t *stream, int backlog, uv_connection_cb cb);
int
__wrap_uv_accept(uv_stream_t *server, uv_stream_t *client);
/* uv_handle_t */
int
__wrap_uv_send_buffer_size(uv_handle_t *handle, int *value);
int
__wrap_uv_recv_buffer_size(uv_handle_t *handle, int *value);
int
__wrap_uv_fileno(const uv_handle_t *handle, uv_os_fd_t *fd);
/* uv_timer_t */
/* FIXME */
/*
* uv_timer_init
* uv_timer_start
*/
static atomic_int __state_uv_udp_open = ATOMIC_VAR_INIT(0);
int
__wrap_uv_udp_open(uv_udp_t *handle, uv_os_sock_t sock) {
if (atomic_load(&__state_uv_udp_open) == 0) {
return (uv_udp_open(handle, sock));
}
return (atomic_load(&__state_uv_udp_open));
}
static atomic_int __state_uv_udp_bind = ATOMIC_VAR_INIT(0);
int
__wrap_uv_udp_bind(uv_udp_t *handle, const struct sockaddr *addr,
unsigned int flags) {
if (atomic_load(&__state_uv_udp_bind) == 0) {
return (uv_udp_bind(handle, addr, flags));
}
return (atomic_load(&__state_uv_udp_bind));
}
static atomic_int __state_uv_udp_connect = ATOMIC_VAR_INIT(0);
#if HAVE_UV_UDP_CONNECT
int
__wrap_uv_udp_connect(uv_udp_t *handle, const struct sockaddr *addr) {
if (atomic_load(&__state_uv_udp_connect) == 0) {
return (uv_udp_connect(handle, addr));
}
return (atomic_load(&__state_uv_udp_connect));
}
#endif /* HAVE_UV_UDP_CONNECT */
static atomic_int __state_uv_udp_getpeername = ATOMIC_VAR_INIT(0);
#if HAVE_UV_UDP_CONNECT
int
__wrap_uv_udp_getpeername(const uv_udp_t *handle, struct sockaddr *name,
int *namelen) {
if (atomic_load(&__state_uv_udp_getpeername) == 0) {
return (uv_udp_getpeername(handle, name, namelen));
}
return (atomic_load(&__state_uv_udp_getpeername));
}
#endif /* HAVE_UV_UDP_CONNECT */
static atomic_int __state_uv_udp_getsockname = ATOMIC_VAR_INIT(0);
int
__wrap_uv_udp_getsockname(const uv_udp_t *handle, struct sockaddr *name,
int *namelen) {
if (atomic_load(&__state_uv_udp_getsockname) == 0) {
return (uv_udp_getsockname(handle, name, namelen));
}
return (atomic_load(&__state_uv_udp_getsockname));
}
static atomic_int __state_uv_udp_send = ATOMIC_VAR_INIT(0);
int
__wrap_uv_udp_send(uv_udp_send_t *req, uv_udp_t *handle, const uv_buf_t bufs[],
unsigned int nbufs, const struct sockaddr *addr,
uv_udp_send_cb send_cb) {
if (atomic_load(&__state_uv_udp_send) == 0) {
return (uv_udp_send(req, handle, bufs, nbufs, addr, send_cb));
}
return (atomic_load(&__state_uv_udp_send));
}
static atomic_int __state_uv_udp_recv_start = ATOMIC_VAR_INIT(0);
int
__wrap_uv_udp_recv_start(uv_udp_t *handle, uv_alloc_cb alloc_cb,
uv_udp_recv_cb recv_cb) {
if (atomic_load(&__state_uv_udp_recv_start) == 0) {
return (uv_udp_recv_start(handle, alloc_cb, recv_cb));
}
return (atomic_load(&__state_uv_udp_recv_start));
}
static atomic_int __state_uv_udp_recv_stop = ATOMIC_VAR_INIT(0);
int
__wrap_uv_udp_recv_stop(uv_udp_t *handle) {
if (atomic_load(&__state_uv_udp_recv_stop) == 0) {
return (uv_udp_recv_stop(handle));
}
return (atomic_load(&__state_uv_udp_recv_stop));
}
static atomic_int __state_uv_tcp_open = ATOMIC_VAR_INIT(0);
int
__wrap_uv_tcp_open(uv_tcp_t *handle, uv_os_sock_t sock) {
if (atomic_load(&__state_uv_tcp_open) == 0) {
return (uv_tcp_open(handle, sock));
}
return (atomic_load(&__state_uv_tcp_open));
}
static atomic_int __state_uv_tcp_bind = ATOMIC_VAR_INIT(0);
int
__wrap_uv_tcp_bind(uv_tcp_t *handle, const struct sockaddr *addr,
unsigned int flags) {
if (atomic_load(&__state_uv_tcp_bind) == 0) {
return (uv_tcp_bind(handle, addr, flags));
}
return (atomic_load(&__state_uv_tcp_bind));
}
static atomic_int __state_uv_tcp_getsockname = ATOMIC_VAR_INIT(0);
int
__wrap_uv_tcp_getsockname(const uv_tcp_t *handle, struct sockaddr *name,
int *namelen) {
if (atomic_load(&__state_uv_tcp_getsockname) == 0) {
return (uv_tcp_getsockname(handle, name, namelen));
}
return (atomic_load(&__state_uv_tcp_getsockname));
}
static atomic_int __state_uv_tcp_getpeername = ATOMIC_VAR_INIT(0);
int
__wrap_uv_tcp_getpeername(const uv_tcp_t *handle, struct sockaddr *name,
int *namelen) {
if (atomic_load(&__state_uv_tcp_getpeername) == 0) {
return (uv_tcp_getpeername(handle, name, namelen));
}
return (atomic_load(&__state_uv_tcp_getpeername));
}
static atomic_int __state_uv_tcp_connect = ATOMIC_VAR_INIT(0);
int
__wrap_uv_tcp_connect(uv_connect_t *req, uv_tcp_t *handle,
const struct sockaddr *addr, uv_connect_cb cb) {
if (atomic_load(&__state_uv_tcp_connect) == 0) {
return (uv_tcp_connect(req, handle, addr, cb));
}
return (atomic_load(&__state_uv_tcp_connect));
}
static atomic_int __state_uv_listen = ATOMIC_VAR_INIT(0);
int
__wrap_uv_listen(uv_stream_t *stream, int backlog, uv_connection_cb cb) {
if (atomic_load(&__state_uv_listen) == 0) {
return (uv_listen(stream, backlog, cb));
}
return (atomic_load(&__state_uv_listen));
}
static atomic_int __state_uv_accept = ATOMIC_VAR_INIT(0);
int
__wrap_uv_accept(uv_stream_t *server, uv_stream_t *client) {
if (atomic_load(&__state_uv_accept) == 0) {
return (uv_accept(server, client));
}
return (atomic_load(&__state_uv_accept));
}
static atomic_int __state_uv_send_buffer_size = ATOMIC_VAR_INIT(0);
int
__wrap_uv_send_buffer_size(uv_handle_t *handle, int *value) {
if (atomic_load(&__state_uv_send_buffer_size) == 0) {
return (uv_send_buffer_size(handle, value));
}
return (atomic_load(&__state_uv_send_buffer_size));
}
static atomic_int __state_uv_recv_buffer_size = ATOMIC_VAR_INIT(0);
int
__wrap_uv_recv_buffer_size(uv_handle_t *handle, int *value) {
if (atomic_load(&__state_uv_recv_buffer_size) == 0) {
return (uv_recv_buffer_size(handle, value));
}
return (atomic_load(&__state_uv_recv_buffer_size));
}
static atomic_int __state_uv_fileno = ATOMIC_VAR_INIT(0);
int
__wrap_uv_fileno(const uv_handle_t *handle, uv_os_fd_t *fd) {
if (atomic_load(&__state_uv_fileno) == 0) {
return (uv_fileno(handle, fd));
}
return (atomic_load(&__state_uv_fileno));
}
#define uv_udp_open(...) __wrap_uv_udp_open(__VA_ARGS__)
#define uv_udp_bind(...) __wrap_uv_udp_bind(__VA_ARGS__)
#if HAVE_UV_UDP_CONNECT
#define uv_udp_connect(...) __wrap_uv_udp_connect(__VA_ARGS__)
#define uv_udp_getpeername(...) __wrap_uv_udp_getpeername(__VA_ARGS__)
#endif /* HAVE_UV_UDP_CONNECT */
#define uv_udp_getsockname(...) __wrap_uv_udp_getsockname(__VA_ARGS__)
#define uv_udp_send(...) __wrap_uv_udp_send(__VA_ARGS__)
#define uv_udp_recv_start(...) __wrap_uv_udp_recv_start(__VA_ARGS__)
#define uv_udp_recv_stop(...) __wrap_uv_udp_recv_stop(__VA_ARGS__)
#define uv_tcp_open(...) __wrap_uv_tcp_open(__VA_ARGS__)
#define uv_tcp_bind(...) __wrap_uv_tcp_bind(__VA_ARGS__)
#define uv_tcp_getsockname(...) __wrap_uv_tcp_getsockname(__VA_ARGS__)
#define uv_tcp_getpeername(...) __wrap_uv_tcp_getpeername(__VA_ARGS__)
#define uv_tcp_connect(...) __wrap_uv_tcp_connect(__VA_ARGS__)
#define uv_listen(...) __wrap_uv_listen(__VA_ARGS__)
#define uv_accept(...) __wrap_uv_accept(__VA_ARGS__)
#define uv_send_buffer_size(...) __wrap_uv_send_buffer_size(__VA_ARGS__)
#define uv_recv_buffer_size(...) __wrap_uv_recv_buffer_size(__VA_ARGS__)
#define uv_fileno(...) __wrap_uv_fileno(__VA_ARGS__)
#define RESET_RETURN \
{ \
atomic_store(&__state_uv_udp_open, 0); \
atomic_store(&__state_uv_udp_bind, 0); \
atomic_store(&__state_uv_udp_connect, 0); \
atomic_store(&__state_uv_udp_getpeername, 0); \
atomic_store(&__state_uv_udp_getsockname, 0); \
atomic_store(&__state_uv_udp_send, 0); \
atomic_store(&__state_uv_udp_recv_start, 0); \
atomic_store(&__state_uv_udp_recv_stop, 0); \
atomic_store(&__state_uv_tcp_open, 0); \
atomic_store(&__state_uv_tcp_bind, 0); \
atomic_store(&__state_uv_tcp_getpeername, 0); \
atomic_store(&__state_uv_tcp_getsockname, 0); \
atomic_store(&__state_uv_tcp_connect, 0); \
atomic_store(&__state_uv_listen, 0); \
atomic_store(&__state_uv_accept, 0); \
atomic_store(&__state_uv_send_buffer_size, 0); \
atomic_store(&__state_uv_recv_buffer_size, 0); \
atomic_store(&__state_uv_fileno, 0); \
}
#define WILL_RETURN(func, value) atomic_store(&__state_##func, value)
#endif /* HAVE_CMOCKA */
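The wrappers above all share one shape: a per-function atomic holds either 0 (pass through to the real libuv call) or a forced error code, and WILL_RETURN()/RESET_RETURN flip that state. A reduced sketch of the shape, with a hypothetical real_udp_open() standing in for the libuv call:

```c
#include <assert.h>
#include <stdatomic.h>

/* Stand-in for the real libuv call; always succeeds here. */
static int
real_udp_open(void) {
	return (0);
}

/* 0 means "pass through"; any other value is the forced error code. */
static atomic_int __state_udp_open = ATOMIC_VAR_INIT(0);

static int
__wrap_udp_open(void) {
	if (atomic_load(&__state_udp_open) == 0) {
		return (real_udp_open());
	}
	return (atomic_load(&__state_udp_open));
}

#define WILL_RETURN(func, value) atomic_store(&__state_##func, value)
#define RESET_RETURN atomic_store(&__state_udp_open, 0)
```

Because the state is an atomic rather than cmocka's will_return() queue, the forced failure is visible from any netmgr worker thread, which is what makes this usable where mock()/will_return() are not.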


@ -434,8 +434,8 @@ isc_netaddr_setzone
isc_netaddr_totext
isc_netaddr_unspec
isc_netscope_pton
isc_nmhandle_attach
isc_nmhandle_detach
isc__nmhandle_attach
isc__nmhandle_detach
isc_nmhandle_getdata
isc_nmhandle_getextra
isc_nmhandle_is_stream
@ -463,8 +463,8 @@ isc_nm_start
isc_nm_stoplistening
isc_nm_tcpconnect
isc_nm_tcpdnsconnect
isc_nm_tcp_gettimeouts
isc_nm_tcp_settimeouts
isc_nm_gettimeouts
isc_nm_settimeouts
isc_nm_tcpdns_keepalive
isc_nm_tcpdns_sequential
isc_nm_tid


@ -414,6 +414,7 @@ copy InstallFiles ..\Build\Release\
<ClCompile Include="..\netmgr\uv-compat.c" />
<ClCompile Include="..\netmgr\tcpdns.c" />
<ClCompile Include="..\netmgr\tls.c" />
<ClCompile Include="..\netmgr\tlsdns.c" />
<ClCompile Include="..\netscope.c" />
<ClCompile Include="..\nonce.c" />
<ClCompile Include="..\openssl_shim.c" />


@ -1030,8 +1030,8 @@ no_nsid:
INSIST(count < DNS_EDNSOPTIONS);
isc_nm_tcp_gettimeouts(isc_nmhandle_netmgr(client->handle),
NULL, NULL, NULL, &adv);
isc_nm_gettimeouts(isc_nmhandle_netmgr(client->handle), NULL,
NULL, NULL, &adv);
isc_buffer_init(&buf, advtimo, sizeof(advtimo));
isc_buffer_putuint16(&buf, (uint16_t)adv);
ednsopts[count].code = DNS_OPT_TCP_KEEPALIVE;
@ -1644,7 +1644,9 @@ ns__client_request(isc_nmhandle_t *handle, isc_result_t eresult,
#endif /* ifdef HAVE_DNSTAP */
ifp = (ns_interface_t *)arg;
UNUSED(eresult);
if (eresult != ISC_R_SUCCESS) {
return;
}
mgr = ifp->clientmgr;
if (mgr == NULL) {
@ -2210,7 +2212,9 @@ ns__client_tcpconn(isc_nmhandle_t *handle, isc_result_t result, void *arg) {
isc_netaddr_t netaddr;
int match;
UNUSED(result);
if (result != ISC_R_SUCCESS) {
return (result);
}
if (handle != NULL) {
peeraddr = isc_nmhandle_peeraddr(handle);


@ -1897,6 +1897,7 @@
./lib/isc/netmgr/tcp.c C 2019,2020
./lib/isc/netmgr/tcpdns.c C 2019,2020
./lib/isc/netmgr/tls.c C 2020
./lib/isc/netmgr/tlsdns.c C 2020
./lib/isc/netmgr/udp.c C 2019,2020
./lib/isc/netmgr/uv-compat.c C 2020
./lib/isc/netmgr/uv-compat.h C 2019,2020
@ -1952,7 +1953,6 @@
./lib/isc/tests/md_test.c C 2018,2019,2020
./lib/isc/tests/mem_test.c C 2015,2016,2017,2018,2019,2020
./lib/isc/tests/netaddr_test.c C 2016,2018,2019,2020
./lib/isc/tests/netmgr_test.c C 2020
./lib/isc/tests/parse_test.c C 2012,2013,2016,2018,2019,2020
./lib/isc/tests/pool_test.c C 2013,2016,2018,2019,2020
./lib/isc/tests/quota_test.c C 2020
@ -1967,9 +1967,14 @@
./lib/isc/tests/symtab_test.c C 2011,2012,2013,2016,2018,2019,2020
./lib/isc/tests/task_test.c C 2011,2012,2016,2017,2018,2019,2020
./lib/isc/tests/taskpool_test.c C 2011,2012,2016,2018,2019,2020
./lib/isc/tests/tcp_quota_test.c C 2020
./lib/isc/tests/tcp_test.c C 2020
./lib/isc/tests/tcpdns_test.c C 2020
./lib/isc/tests/testdata/file/keep X 2014,2018,2019,2020
./lib/isc/tests/time_test.c C 2014,2015,2016,2018,2019,2020
./lib/isc/tests/timer_test.c C 2018,2019,2020
./lib/isc/tests/udp_test.c C 2020
./lib/isc/tests/uv_wrap.h C 2020
./lib/isc/timer.c C 1998,1999,2000,2001,2002,2004,2005,2007,2008,2009,2011,2012,2013,2014,2015,2016,2017,2018,2019,2020
./lib/isc/tm.c C 2014,2016,2018,2019,2020
./lib/isc/unix/dir.c C 1999,2000,2001,2004,2005,2007,2008,2009,2011,2012,2016,2017,2018,2019,2020


@ -1,3 +1,4 @@
unmatchedSuppression:*
preprocessorErrorDirective:*
unknownMacro:*
nullPointerRedundantCheck:*