Instead of calling bind() and passing the listening socket to the child
threads with uv_export/uv_import, use a single thread that does the
accepting and then passes each connected socket, again via
uv_export/uv_import, to a random worker. The previous solution suffered
from the thundering-herd problem (all workers waking up on one connection
and trying to accept()); this one avoids that and is simpler.
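Roughly, the accepting thread's connection callback looks like this (a
sketch only: isc_uv_stream_info_t matches the uv-compat header's naming,
dispatch_to_worker()/pick_random_worker() are hypothetical stand-ins for
the netmgr event queue, and handle/fd lifetime management is elided):

    static void
    on_connect(uv_stream_t *listener, int status) {
            uv_tcp_t client;
            isc_uv_stream_info_t info;  /* assumed exported-handle descriptor */

            if (status != 0) {
                    return;
            }
            uv_tcp_init(listener->loop, &client);
            if (uv_accept(listener, (uv_stream_t *)&client) != 0) {
                    return;
            }
            /* Serialize the accepted socket so another loop can adopt it. */
            if (isc_uv_export((uv_stream_t *)&client, &info) == 0) {
                    /* The chosen worker calls isc_uv_import() on 'info'
                     * inside its own uv_loop. */
                    dispatch_to_worker(pick_random_worker(), &info);
            }
    }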
The TCP clients quota is simplified with isc_quota_attach_cb(): a callback
is issued when quota becomes available.
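A minimal sketch of the callback pattern, assuming an
isc_quota_cb_init()-style initializer and a connection object (tcpconn_t,
resume_pending_accept()) that are purely illustrative; the real prototypes
may differ:

    /* Invoked by the quota machinery once a TCP client slot frees up. */
    static void
    tcpconn_quota_cb(isc_quota_t *quota, void *arg) {
            resume_pending_accept(arg);     /* hypothetical helper */
    }

    static isc_result_t
    tcpconn_attach(isc_quota_t *tcpquota, tcpconn_t *conn) {
            /* 'conn->quotacb' must outlive this call, since the callback
             * may fire later from another thread. */
            isc_quota_cb_init(&conn->quotacb, tcpconn_quota_cb, conn);
            /* If a slot is free we attach immediately; otherwise the
             * callback fires later, so no polling or retry loop is needed. */
            return (isc_quota_attach_cb(tcpquota, &conn->quota,
                                        &conn->quotacb));
    }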
There was a possible race between closing the handle and processing the
async callback. Deactivate the handle before issuing the callback; we hold
a reference to the socket anyway, so this is safe.
tcpdns used transport-specific functions to operate on the outer socket.
Use generic ones instead, selecting the proper call in netmgr.c.
Make the missing functions (e.g. isc_nm_read) generic and add type-specific
counterparts (isc__nm_tcp_read). This is preparation for the netmgr TLS
layer.
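The dispatch in netmgr.c is roughly of this shape (a sketch; the generic
entry point is assumed to take a handle plus a receive callback, and
isc__nm_tcpdns_read() is illustrative):

    void
    isc_nm_read(isc_nmhandle_t *handle, isc_nm_recv_cb_t cb, void *cbarg) {
            isc_nmsocket_t *sock = handle->sock;

            switch (sock->type) {
            case isc_nm_tcpsocket:
                    isc__nm_tcp_read(handle, cb, cbarg);
                    break;
            case isc_nm_tcpdnssocket:
                    isc__nm_tcpdns_read(handle, cb, cbarg); /* illustrative */
                    break;
            default:
                    INSIST(0);
                    ISC_UNREACHABLE();
            }
    }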
An upcoming version of libuv will support uv_recvmmsg and uv_sendmmsg. To
use uv_recvmmsg we need to provide a larger buffer and be able to
properly free it.
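A sketch of the buffer handling this requires, assuming a libuv that sets
UV_UDP_MMSG_CHUNK on datagrams sharing one buffer (process_datagram() and
UDP_RECVMMSG_BUFSIZE are illustrative):

    static void
    udp_alloc_cb(uv_handle_t *handle, size_t size, uv_buf_t *buf) {
            /* One large buffer that recvmmsg can split into several
             * datagram-sized chunks; the size is illustrative. */
            buf->base = malloc(UDP_RECVMMSG_BUFSIZE);
            buf->len = UDP_RECVMMSG_BUFSIZE;
    }

    static void
    udp_recv_cb(uv_udp_t *handle, ssize_t nread, const uv_buf_t *buf,
                const struct sockaddr *addr, unsigned int flags) {
            if (nread > 0) {
                    process_datagram(handle, buf->base, nread, addr);
            }
            /* With recvmmsg the callback fires once per datagram; while
             * UV_UDP_MMSG_CHUNK is set the buffer is still shared with
             * the remaining datagrams and must not be freed yet. */
            if ((flags & UV_UDP_MMSG_CHUNK) == 0) {
                    free(buf->base);
            }
    }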
In the TCP and UDP stop-listening code we accessed libuv structures
from the wrong thread, which caused a crash on shutdown when named
was under load. Also added design-by-contract (DbC) checks making sure
we are on the correct thread when calling uv_* functions.
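The added checks are roughly of this shape (a sketch; the per-socket tid
field and helper names follow the netmgr's conventions but are assumptions
here, and close_cb is hypothetical):

    static void
    stoplistening_direct(isc_nmsocket_t *sock) {
            REQUIRE(VALID_NMSOCK(sock));
            /* uv_* handles may only be touched from the loop that owns
             * them, i.e. from the socket's own netmgr thread. */
            REQUIRE(sock->tid == isc_nm_tid());

            uv_close(&sock->uv_handle.handle, close_cb);
    }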
We had a race in which a UDP socket could already be closing in libuv
while we were still sending data to it. Mark the socket as not active
when we stop listening, and verify that the socket is active before
trying to send data to it.
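A sketch of that marking, with the 'active' flag, handle-union layout and
udp_close_cb treated as assumptions:

    static void
    udp_stop_listening(isc_nmsocket_t *sock) {
            /* Mark inactive before libuv starts tearing the handle down. */
            atomic_store(&sock->active, false);
            uv_udp_recv_stop(&sock->uv_handle.udp);
            uv_close((uv_handle_t *)&sock->uv_handle.udp, udp_close_cb);
    }

    static isc_result_t
    udp_send(isc_nmsocket_t *sock, isc_region_t *region) {
            if (!atomic_load(&sock->active)) {
                    /* libuv may already be closing this socket. */
                    return (ISC_R_CANCELED);
            }
            /* ...build the uv_udp_send() request from 'region' as before... */
            return (ISC_R_SUCCESS);
    }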
We pass the interface as an opaque argument to the tcpdns listening socket.
If we stop listening on an interface while in-flight connections still
exist, the opaque 'interface' is not reference counted and we might access
freed memory. Keep a single source of truth in the listening socket and
make the child sockets use it instead of copying the value from the
listening socket; clear the callback when we stop listening.
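Sketched below with illustrative field names: the child holds a counted
reference to the listener instead of a copy of its opaque pointer.

    static void
    tcpdns_accept(isc_nmsocket_t *listener, isc_nmsocket_t *child,
                  isc_nmhandle_t *handle) {
            /* Reference-counted attach instead of copying 'opaque'. */
            isc__nmsocket_attach(listener, &child->listener);

            if (child->listener->accept_cb != NULL) {
                    child->listener->accept_cb(handle, ISC_R_SUCCESS,
                                               child->listener->accept_cbarg);
            }
    }

    /* Stop-listening clears the single source of truth, so in-flight
     * connections cannot chase a dangling interface pointer. */
    static void
    tcpdns_stoplistening(isc_nmsocket_t *listener) {
            listener->accept_cb = NULL;
            listener->accept_cbarg = NULL;
    }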
- isc__netievent_storage_t was too small to contain
isc__netievent__socket_streaminfo_t on Windows
- handle isc_uv_export and isc_uv_import errors properly
- rewrite isc_uv_export and isc_uv_import on Windows
- the socket stat counters have been moved from socket.h to stats.h.
- isc_nm_t now attaches to the same stats counter group as
isc_socketmgr_t, so that both managers can increment the same
set of statistics
- isc__nmsocket_init() now takes an interface as a parameter so that
the address family can be determined when initializing the socket.
- based on the address family and socket type, a group of statistics
counters will be associated with the socket - for example, UDP4Active
with IPv4 UDP sockets and TCP6Active with IPv6 TCP sockets (see the
sketch after this list). Note that no counters are currently associated
with TCPDNS sockets; those stats will be handled by the underlying TCP
socket.
- the counters are not actually used by netmgr sockets yet; counter
increment and decrement calls will be added in a later commit.
- use UV_TCP_IPV6ONLY/UV_UDP_IPV6ONLY for IPv6 sockets, keeping the
pre-netmgr behaviour.
- add a new listening_error bool flag which is set if the child
listener fails to start listening. This fixes a bug where named would
hang if, e.g., we failed to bind to a TCP socket.
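A sketch of the family/type-to-counter mapping described above; the
counter identifiers follow the existing socket statistics names, but the
exact set used by netmgr may differ:

    /* Pick the "active sockets" counter for a new netmgr socket. */
    static int
    nmsocket_active_counter(isc_nmsocket_t *sock, int family) {
            switch (sock->type) {
            case isc_nm_udpsocket:
                    return (family == AF_INET
                                    ? isc_sockstatscounter_udp4active
                                    : isc_sockstatscounter_udp6active);
            case isc_nm_tcpsocket:
                    return (family == AF_INET
                                    ? isc_sockstatscounter_tcp4active
                                    : isc_sockstatscounter_tcp6active);
            default:
                    /* TCPDNS has no counters of its own; the wrapped
                     * TCP socket accounts for it. */
                    return (-1);
            }
    }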
For multithreaded TCP listening we need to pass a bound socket to all
listening threads. Instead of the uv_pipe handle-passing method, which is
quite complex (lots of callbacks, each with its own error handling), we
now use isc_uv_export() to export the socket, pass it as a member of the
isc__netievent_tcpchildlisten_t structure, and isc_uv_import() it in the
child thread, which simplifies the process significantly.
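A sketch of the event-structure approach; member names and the child-side
handler are assumptions based on the description above, and
accept_connection_cb is hypothetical:

    /* The exported listening socket travels inside the event itself, so
     * no IPC pipe and no extra callbacks are needed. */
    typedef struct isc__netievent_tcpchildlisten {
            isc__netievent_type     type;
            isc_nmsocket_t         *sock;
            isc_uv_stream_info_t    streaminfo; /* filled by isc_uv_export() */
    } isc__netievent_tcpchildlisten_t;

    /* In the child thread, the handler imports and starts listening. */
    static void
    tcpchildlisten_handler(isc__networker_t *worker,
                           isc__netievent_tcpchildlisten_t *ievent) {
            isc_nmsocket_t *csock = ievent->sock;

            uv_tcp_init(&worker->loop, &csock->uv_handle.tcp);
            isc_uv_import((uv_stream_t *)&csock->uv_handle.tcp,
                          &ievent->streaminfo);
            uv_listen((uv_stream_t *)&csock->uv_handle.tcp, 128,
                      accept_connection_cb);
    }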
After the network manager rewrite, the tcp-highwater statistic was only
updated when a valid DNS query was received over TCP.
However, the TCP quota is updated right after a TCP connection is
accepted, before any data is read, so clients that connect but never send
a valid query were not taken into account for tcp-highwater, which is
wrong.
This commit fixes tcp-highwater so that it is updated whenever a TCP
connection is established, regardless of what happens afterwards (timeout,
invalid request, etc.).
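Conceptually, the fix moves the highwater update into the
connection-accept path; a sketch with entirely illustrative names (the
real statistics and quota helpers may differ):

    /* Bump tcp-highwater as soon as the connection is accepted, not only
     * once a valid DNS query has been parsed. */
    static void
    tcp_accepted(server_ctx_t *sctx) {                  /* illustrative */
            unsigned int inuse = tcp_quota_in_use(sctx); /* illustrative */

            /* Record a new maximum if current usage exceeds it. */
            stats_update_if_greater(sctx->stats, TCP_HIGHWATER, inuse);
    }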
- make the TCP listening IPC pipe name saner
- put the pipe in /tmp on Unices
- add the PID to the pipe name to avoid conflicts between processes
- fsync the directory in which the pipe resides to make sure that the
child threads will see the pipe and be able to open it
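The directory fsync in the last item is plain POSIX; a sketch:

    #include <fcntl.h>
    #include <unistd.h>

    /* After creating the pipe, flush its directory entry so the child
     * threads opening the pipe by name are guaranteed to see it. */
    static void
    fsync_dir(const char *dirpath) {
            int fd = open(dirpath, O_RDONLY);

            if (fd >= 0) {
                    (void)fsync(fd);
                    close(fd);
            }
    }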
Allow stopping and starting listening even when the worker is paused
(e.g. during interface reconfiguration). This is needed to prevent
deadlocks when reconfiguring interfaces: the network manager is paused at
that point, but we still need to stop/start listening.
- Proper handling of TCP listen errors in netmgr: bind the socket first,
then return the error code.
When listening for TCP connections we create a socket, bind it,
and then pass it over IPC to all threads, which then listen on
it and accept connections. This sounds broken, but it is the
official way of handling multithreaded TCP listeners in libuv,
and it works on all platforms supported by libuv.
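A sketch of the listen path with the error ordering from the earlier
bullet: bind first, surface the error, only then pass the socket on
(uverr_to_result() and export_and_distribute() are hypothetical):

    static isc_result_t
    tcp_listen_parent(uv_loop_t *loop, const struct sockaddr *addr) {
            uv_tcp_t lsock;
            int r;

            uv_tcp_init(loop, &lsock);
            r = uv_tcp_bind(&lsock, addr, 0);
            if (r != 0) {
                    /* e.g. UV_EADDRINUSE: report it instead of hanging. */
                    return (uverr_to_result(r));
            }
            /* Only a successfully bound socket is exported and handed to
             * the child threads, each of which then uv_listen()s and
             * accepts on its own copy. */
            return (export_and_distribute(&lsock));
    }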
The new ISC_THREAD_LOCAL macro unifies the platform-dependent thread-local
storage qualifiers (thread_local vs __thread vs __declspec(thread)) behind
a single macro.
The commit also unifies the required level of support for thread-local
storage, which was previously mandatory in some parts of the code and
optional in others.
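The macro is along these lines (a sketch of the pattern, not necessarily
the exact cascade used):

    #if defined(_WIN32)
    #define ISC_THREAD_LOCAL static __declspec(thread)
    #elif defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
    #define ISC_THREAD_LOCAL static _Thread_local
    #else
    #define ISC_THREAD_LOCAL static __thread
    #endif

    /* Usage: one declaration works on every supported toolchain. */
    ISC_THREAD_LOCAL int tls_example = -1;      /* illustrative variable */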
- restore support for the tcp-initial-timeout, tcp-idle-timeout,
tcp-keepalive-timeout and tcp-advertised-timeout configuration
options, which were previously ineffective.
- add timeout support for TCP and TCPDNS connections to protect against
slowloris-style attacks. Currently, all timeouts are hard-coded.
- rework and simplify the TCPDNS state machine.
When the TCPDNS_CLIENTS_PER_CONN limit has been exceeded for a TCP DNS
connection, switch to sequential mode to ensure that memory cannot be
exhausted by too many simultaneous queries.
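A sketch of the limit check, with field names as assumptions and the
exact pause-read call possibly differing:

    static void
    tcpdns_track_client(isc_nmsocket_t *sock) {
            if (atomic_fetch_add(&sock->clients, 1) >=
                TCPDNS_CLIENTS_PER_CONN)
            {
                    sock->sequential = true;
            }
            if (sock->sequential) {
                    /* Stop reading ahead on the wrapped TCP socket until
                     * the current response has been sent, bounding the
                     * memory a single connection can pin. */
                    isc_nm_pauseread(sock->outerhandle);
            }
    }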
This is a replacement for the existing isc_socket and isc_socketmgr
implementation. It uses libuv for asynchronous network communication;
"networker" objects will be distributed across worker threads reading
incoming packets and sending them for processing.
UDP listener sockets automatically create an array of "child" sockets
so each worker can listen separately.
TCP sockets are shared amongst worker threads.
A TCPDNS socket is a wrapper around a TCP socket that handles the
two-byte length field at the beginning of DNS messages over TCP.
(Other wrapper socket types can be implemented in the future to handle
DNS over TLS, DNS over HTTPS, etc.)
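The two-byte length prefix is standard DNS-over-TCP framing; a minimal
sketch of the reassembly the wrapper performs (tcpdns_buf_t, append(),
consume() and deliver_dns_message() are hypothetical; buffering and
callback plumbing are elided):

    /* Accumulate bytes from the outer TCP socket and peel off complete
     * DNS messages, each preceded by a 16-bit big-endian length. */
    static void
    dnslen_consume(tcpdns_buf_t *b, const uint8_t *data, size_t len) {
            append(b, data, len);

            while (b->used >= 2) {
                    size_t msglen = ((size_t)b->buf[0] << 8) | b->buf[1];

                    if (b->used < 2 + msglen) {
                            break;  /* wait for the rest of the message */
                    }
                    deliver_dns_message(b->buf + 2, msglen);
                    consume(b, 2 + msglen);
            }
    }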