mirror of https://gitlab.isc.org/isc-projects/kea (synced 2025-09-05 00:15:17 +00:00)
[2524] Merge branch 'master' into trac2524

.gitignore (3 changes, vendored)

@@ -34,5 +34,6 @@ TAGS
/all.info
/coverage-cpp-html
/dns++.pc
/report.info
/local.zone.sqlite3
/logger_lockfile
/report.info

ChangeLog (108 changes)

@@ -1,4 +1,110 @@
526. [bug]      syephen
538. [bug]      muks
    Added escaping of special characters (double-quotes, semicolon,
    backslash, etc.) in text-like RRType's toText() implementation.
    Without this change, some TXT and SPF RDATA were incorrectly
    stored in SQLite3 datasource as they were not escaped.
    (Trac #2535, git f516fc484544b7e08475947d6945bc87636d4115)

537. [func]     tomek
    b10-dhcp6: Support for RELEASE message has been added. Clients
    are now able to release their non-temporary IPv6 addresses.
    (Trac #2326, git 0974318566abe08d0702ddd185156842c6642424)

536. [build]    jinmei
    Detect a build issue on FreeBSD with g++ 4.2 and Boost installed via
    FreeBSD ports at ./configure time. This seems to be a bug of
    FreeBSD ports setup and has been reported to the maintainer:
    http://www.freebsd.org/cgi/query-pr.cgi?pr=174753
    Until it's fixed, you need to build BIND 10 for FreeBSD that has
    this problem with specifying --without-werror, with clang++
    (development version), or with manually extracted Boost header
    files (no compiled Boost library is necessary).
    (Trac #1991, git 6b045bcd1f9613e3835551cdebd2616ea8319a36)

535. [bug]      jelte
    The log4cplus internal logging mechanism has been disabled, and no
    output from the log4cplus library itself should be printed to
    stderr anymore. This output can be enabled by using the
    compile-time option --enable-debug.
    (Trac #1081, git db55f102b30e76b72b134cbd77bd183cd01f95c0)

534. [func]*    vorner
    The b10-msgq now uses the same logging format as the rest
    of the system. However, it still doesn't obey the common
    configuration, as due to technical issues it is not able
    to read it yet.
    (git 9e6e821c0a33aab0cd0e70e51059d9a2761f76bb)

bind10-1.0.0-beta released on December 20, 2012

533. [build]*   jreed
    Changed the package name in configure.ac from bind10-devel
    to bind10. This means the default sub-directories for
    etc, include, libexec, share, share/doc, and var are changed.
    If upgrading from a previous version, you may need to move
    and update your configurations or change references for the
    old locations.
    (git bf53fbd4e92ae835280d49fbfdeeebd33e0ce3f2)

532. [func]     marcin
    Implemented configuration of DHCPv4 option values using
    the configuration manager. In order to set values for the
    data fields carried by a particular option, the user
    specifies a string of hexadecimal digits that is converted
    to binary data and stored in the option buffer. A more
    user-friendly way of specifying option content is planned.
    (Trac #2544, git fed1aab5a0f813c41637807f8c0c5f8830d71942)

531. [func]     tomek
    b10-dhcp6: Added support for expired leases. Leases for IPv6
    addresses that are past their valid lifetime may be recycled, i.e.
    reallocated to other clients if needed.
    (Trac #2327, git 62a23854f619349d319d02c3a385d9bc55442d5e)

530. [func]*    team
    b10-loadzone was fully overhauled. It now uses a C++-based zone
    parser and loader library, performing stricter checks, having
    more complete support for master file formats, producing more
    helpful logs, is more extendable for various types of data
    sources, and yet much faster than the old version. In
    functionality the new version should be generally backwards
    compatible to the old version, but there are some
    incompatibilities: name fields of RDATA (in NS, SOA, etc) must
    be absolute for now; due to the stricter checks some input that was
    (incorrectly) accepted by the old version may now be rejected;
    command line options and arguments are not compatible.
    (Trac #2380, git 689b015753a9e219bc90af0a0b818ada26cc5968)

529. [func]*    team
    The in-memory data source now uses a more complete master
    file parser to load textual zone files. As of this change
    it supports multi-line RR representation and more complete
    support for escaped and quoted strings. It also produces
    more helpful log messages when there is an error in the zone
    file. It will be enhanced as more specific tasks in the
    #2368 meta ticket are completed. The new parser is generally
    backward compatible to the previous one, but due to the
    tighter checks some input that has been accepted so far
    could now be rejected, so it's advisable to check if you
    use textual zone files directly loaded to memory.
    (Trac #2470, git c4cf36691115c15440b65cac16f1c7fcccc69521)

528. [func]     marcin
    Implemented definitions for DHCPv4 options identified
    by option codes: 1 to 63, 77, 81-82, 90-92, 118-119, 124-125.
    These definitions are now used by the DHCPv4 server to parse
    options received from a client.
    (Trac #2526, git 50a73567e8067fdbe4405b7ece5b08948ef87f98)

527. [bug]      jelte
    Fixed a bug in the synchronous UDP server code where unexpected
    errors from ASIO or the system libraries could cause b10-auth to
    stop. In asynchronous mode these errors would be ignored
    completely. Both types have been updated to report the problem with
    an ERROR log message, drop the packet, and continue service.
    (Trac #2494, git db92f30af10e6688a7dc117b254cb821e54a6d95)

526. [bug]      stephen
    Miscellaneous fixes to DHCP code including rationalisation of
    some methods in LeaseMgr and resolving some Doxygen/cppcheck
    issues.

README (24 changes)

@@ -7,16 +7,20 @@ DHCP. BIND 10 is written in C++ and Python and provides a modular
environment for serving, maintaining, and developing DNS and DHCP.

This release includes the bind10 master process, b10-msgq message
bus, b10-auth authoritative DNS server (with SQLite3 and in-memory
backends), b10-resolver recursive or forwarding DNS server, b10-cmdctl
remote control daemon, b10-cfgmgr configuration manager, b10-xfrin
AXFR inbound service, b10-xfrout outgoing AXFR service, b10-zonemgr
secondary manager, b10-stats statistics collection and reporting
daemon, b10-stats-httpd for HTTP access to XML-formatted stats,
b10-host DNS lookup utility, and a new libdns++ library for C++
with a python wrapper. BIND 10 also provides experimental DHCPv4
and DHCPv6 servers, b10-dhcp4 and b10-dhcp6, a portable DHCP library,
libdhcp++, and a DHCP benchmarking tool, perfdhcp.
bus, b10-cmdctl remote control daemon, b10-cfgmgr configuration
manager, b10-stats statistics collection and reporting daemon, and
b10-stats-httpd for HTTP access to XML-formatted stats.

For DNS services, it provides the b10-auth authoritative DNS server
(with SQLite3 and in-memory backends), b10-resolver recursive or
forwarding DNS server, b10-xfrin IXFR/AXFR inbound service, b10-xfrout
outgoing IXFR/AXFR service, b10-zonemgr secondary manager, libdns++
library for C++ with a python wrapper, and many tests and example
programs.

BIND 10 also provides experimental DHCPv4 and DHCPv6 servers,
b10-dhcp4 and b10-dhcp6, a portable DHCP library, libdhcp++, and
a DHCP benchmarking tool, perfdhcp.

Documentation is included with the source. See doc/guide/bind10-guide.txt
(or bind10-guide.html) for installation instructions. The
configure.ac (76 changes)
@@ -2,7 +2,7 @@
|
||||
# Process this file with autoconf to produce a configure script.
|
||||
|
||||
AC_PREREQ([2.59])
|
||||
AC_INIT(bind10-devel, 20120817, bind10-dev@isc.org)
|
||||
AC_INIT(bind10, 20121219, bind10-dev@isc.org)
|
||||
AC_CONFIG_SRCDIR(README)
|
||||
AM_INIT_AUTOMAKE([foreign])
|
||||
m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])])dnl be backward compatible
|
||||
@@ -166,8 +166,6 @@ fi
|
||||
|
||||
fi dnl GXX = yes
|
||||
|
||||
AM_CONDITIONAL(GCC_WERROR_OK, test $werror_ok = 1)
|
||||
|
||||
# allow building programs with static link. we need to make it selective
|
||||
# because loadable modules cannot be statically linked.
|
||||
AC_ARG_ENABLE([static-link],
|
||||
@@ -838,57 +836,22 @@ LIBS=$LIBS_SAVED
|
||||
#
|
||||
# Configure Boost header path
|
||||
#
|
||||
# If explicitly specified, use it.
|
||||
AC_ARG_WITH([boost-include],
|
||||
AC_HELP_STRING([--with-boost-include=PATH],
|
||||
[specify exact directory for Boost headers]),
|
||||
[boost_include_path="$withval"])
|
||||
# If not specified, try some common paths.
|
||||
if test -z "$with_boost_include"; then
|
||||
boostdirs="/usr/local /usr/pkg /opt /opt/local"
|
||||
for d in $boostdirs
|
||||
do
|
||||
if test -f $d/include/boost/shared_ptr.hpp; then
|
||||
boost_include_path=$d/include
|
||||
break
|
||||
fi
|
||||
done
|
||||
AX_BOOST_FOR_BIND10
|
||||
# Boost offset_ptr is required in one library and not optional right now, so
|
||||
# we unconditionally fail here if it doesn't work.
|
||||
if test "$BOOST_OFFSET_PTR_FAILURE" = "yes"; then
|
||||
AC_MSG_ERROR([Failed to compile a required header file. Try upgrading Boost to 1.44 or higher (when using clang++) or specifying --without-werror. See the ChangeLog entry for Trac no. 2147 for more details.])
|
||||
fi
|
||||
CPPFLAGS_SAVES="$CPPFLAGS"
|
||||
if test "${boost_include_path}" ; then
|
||||
BOOST_INCLUDES="-I${boost_include_path}"
|
||||
CPPFLAGS="$CPPFLAGS $BOOST_INCLUDES"
|
||||
|
||||
# There's a known bug in FreeBSD ports for Boost that would trigger a false
|
||||
# warning in build with g++ and -Werror (we exclude clang++ explicitly to
|
||||
# avoid unexpected false positives).
|
||||
if test "$BOOST_NUMERIC_CAST_WOULDFAIL" = "yes" -a X"$werror_ok" = X1 -a $CLANGPP = "no"; then
|
||||
AC_MSG_ERROR([Failed to compile a required header file. If you are using FreeBSD and Boost installed via ports, retry with specifying --without-werror. See the ChangeLog entry for Trac no. 1991 for more details.])
|
||||
fi
|
||||
AC_CHECK_HEADERS([boost/shared_ptr.hpp boost/foreach.hpp boost/interprocess/sync/interprocess_upgradable_mutex.hpp boost/date_time/posix_time/posix_time_types.hpp boost/bind.hpp boost/function.hpp],,
|
||||
AC_MSG_ERROR([Missing required header files.]))
|
||||
|
||||
# Detect whether Boost tries to use threads by default, and, if not,
|
||||
# make it sure explicitly. In some systems the automatic detection
|
||||
# may depend on preceding header files, and if inconsistency happens
|
||||
# it could lead to a critical disruption.
|
||||
AC_MSG_CHECKING([whether Boost tries to use threads])
|
||||
AC_TRY_COMPILE([
|
||||
#include <boost/config.hpp>
|
||||
#ifdef BOOST_HAS_THREADS
|
||||
#error "boost will use threads"
|
||||
#endif],,
|
||||
[AC_MSG_RESULT(no)
|
||||
CPPFLAGS_BOOST_THREADCONF="-DBOOST_DISABLE_THREADS=1"],
|
||||
[AC_MSG_RESULT(yes)])
|
||||
|
||||
# Boost offset_ptr is required in one library (not optional right now), and
|
||||
# it's known it doesn't compile on some platforms, depending on boost version,
|
||||
# its local configuration, and compiler.
|
||||
AC_MSG_CHECKING([Boost offset_ptr compiles])
|
||||
AC_TRY_COMPILE([
|
||||
#include <boost/interprocess/offset_ptr.hpp>
|
||||
],,
|
||||
[AC_MSG_RESULT(yes)],
|
||||
[AC_MSG_RESULT(no)
|
||||
AC_MSG_ERROR([Failed to compile a required header file. Try upgrading Boost to 1.44 or higher (when using clang++) or specifying --without-werror. See the ChangeLog entry for Trac no. 2147 for more details.])])
|
||||
|
||||
CPPFLAGS="$CPPFLAGS_SAVES $CPPFLAGS_BOOST_THREADCONF"
|
||||
AC_SUBST(BOOST_INCLUDES)
|
||||
# Add some default CPP flags needed for Boost, identified by the AX macro.
|
||||
CPPFLAGS="$CPPFLAGS $CPPFLAGS_BOOST_THREADCONF"
|
||||
|
||||
# I can't get some of the #include <asio.hpp> right without this
|
||||
# TODO: find the real cause of asio/boost wanting pthreads
|
||||
@@ -1176,8 +1139,8 @@ AC_CONFIG_FILES([Makefile
|
||||
src/bin/dbutil/tests/Makefile
|
||||
src/bin/dbutil/tests/testdata/Makefile
|
||||
src/bin/loadzone/Makefile
|
||||
src/bin/loadzone/tests/Makefile
|
||||
src/bin/loadzone/tests/correct/Makefile
|
||||
src/bin/loadzone/tests/error/Makefile
|
||||
src/bin/msgq/Makefile
|
||||
src/bin/msgq/tests/Makefile
|
||||
src/bin/auth/Makefile
|
||||
@@ -1350,15 +1313,16 @@ AC_OUTPUT([doc/version.ent
|
||||
src/bin/bindctl/tests/bindctl_test
|
||||
src/bin/loadzone/run_loadzone.sh
|
||||
src/bin/loadzone/tests/correct/correct_test.sh
|
||||
src/bin/loadzone/tests/error/error_test.sh
|
||||
src/bin/loadzone/b10-loadzone.py
|
||||
src/bin/loadzone/loadzone.py
|
||||
src/bin/usermgr/run_b10-cmdctl-usermgr.sh
|
||||
src/bin/usermgr/b10-cmdctl-usermgr.py
|
||||
src/bin/msgq/msgq.py
|
||||
src/bin/msgq/tests/msgq_test
|
||||
src/bin/msgq/run_msgq.sh
|
||||
src/bin/auth/auth.spec.pre
|
||||
src/bin/auth/spec_config.h.pre
|
||||
src/bin/auth/tests/testdata/example.zone
|
||||
src/bin/auth/tests/testdata/example-base.zone
|
||||
src/bin/auth/tests/testdata/example-nsec3.zone
|
||||
src/bin/dhcp4/spec_config.h.pre
|
||||
src/bin/dhcp6/spec_config.h.pre
|
||||
src/bin/tests/process_rename_test.py
|
||||
@@ -1418,11 +1382,9 @@ AC_OUTPUT([doc/version.ent
|
||||
chmod +x src/bin/bindctl/run_bindctl.sh
|
||||
chmod +x src/bin/loadzone/run_loadzone.sh
|
||||
chmod +x src/bin/loadzone/tests/correct/correct_test.sh
|
||||
chmod +x src/bin/loadzone/tests/error/error_test.sh
|
||||
chmod +x src/bin/sysinfo/run_sysinfo.sh
|
||||
chmod +x src/bin/usermgr/run_b10-cmdctl-usermgr.sh
|
||||
chmod +x src/bin/msgq/run_msgq.sh
|
||||
chmod +x src/bin/msgq/tests/msgq_test
|
||||
chmod +x src/lib/dns/gen-rdatacode.py
|
||||
chmod +x src/lib/log/tests/console_test.sh
|
||||
chmod +x src/lib/log/tests/destination_test.sh
|
||||
|
@@ -449,8 +449,10 @@ var/
|
||||
|
||||
<listitem>
|
||||
<para>Load desired zone file(s), for example:
|
||||
<screen>$ <userinput>b10-loadzone <replaceable>your.zone.example.org</replaceable></userinput></screen>
|
||||
<screen>$ <userinput>b10-loadzone <replaceable>-c '{"database_file": "/usr/local/var/bind10/zone.sqlite3"}'</replaceable> <replaceable>your.zone.example.org</replaceable> <replaceable>your.zone.file</replaceable></userinput></screen>
|
||||
</para>
|
||||
(If you use the sqlite3 data source with the default DB
|
||||
file, you can omit the -c option).
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
@@ -501,7 +503,7 @@ var/
|
||||
</listitem>
|
||||
<listitem>
|
||||
<simpara>
|
||||
<filename>etc/bind10-devel/</filename> —
|
||||
<filename>etc/bind10/</filename> —
|
||||
configuration files.
|
||||
</simpara>
|
||||
</listitem>
|
||||
@@ -513,7 +515,7 @@ var/
|
||||
</listitem>
|
||||
<listitem>
|
||||
<simpara>
|
||||
<filename>libexec/bind10-devel/</filename> —
|
||||
<filename>libexec/bind10/</filename> —
|
||||
executables that a user wouldn't normally run directly and
|
||||
are not run independently.
|
||||
These are the BIND 10 modules which are daemons started by
|
||||
@@ -528,13 +530,13 @@ var/
|
||||
</listitem>
|
||||
<listitem>
|
||||
<simpara>
|
||||
<filename>share/bind10-devel/</filename> —
|
||||
<filename>share/bind10/</filename> —
|
||||
configuration specifications.
|
||||
</simpara>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<simpara>
|
||||
<filename>share/doc/bind10-devel/</filename> —
|
||||
<filename>share/doc/bind10/</filename> —
|
||||
this guide and other supplementary documentation.
|
||||
</simpara>
|
||||
</listitem>
|
||||
@@ -546,7 +548,7 @@ var/
|
||||
</listitem>
|
||||
<listitem>
|
||||
<simpara>
|
||||
<filename>var/bind10-devel/</filename> —
|
||||
<filename>var/bind10/</filename> —
|
||||
data source and configuration databases.
|
||||
</simpara>
|
||||
</listitem>
|
||||
@@ -908,7 +910,7 @@ as a dependency earlier -->
|
||||
Administrators do not communicate directly with the
|
||||
<command>b10-msgq</command> daemon.
|
||||
By default, BIND 10 uses a UNIX domain socket file named
|
||||
<filename>/usr/local/var/bind10-devel/msg_socket</filename>
|
||||
<filename>/usr/local/var/bind10/msg_socket</filename>
|
||||
for this interprocess communication.
|
||||
</para>
|
||||
|
||||
@@ -970,7 +972,7 @@ config changes are actually commands to cfgmgr
|
||||
<!-- TODO: what about command line switch to change this? -->
|
||||
<para>
|
||||
The stored configuration file is at
|
||||
<filename>/usr/local/var/bind10-devel/b10-config.db</filename>.
|
||||
<filename>/usr/local/var/bind10/b10-config.db</filename>.
|
||||
(The directory is what was defined at build configure time for
|
||||
<option>--localstatedir</option>.
|
||||
The default is <filename>/usr/local/var/</filename>.)
|
||||
@@ -1063,13 +1065,13 @@ but you might wanna check with likun
|
||||
<para>The HTTPS server requires a private key,
|
||||
such as a RSA PRIVATE KEY.
|
||||
The default location is at
|
||||
<filename>/usr/local/etc/bind10-devel/cmdctl-keyfile.pem</filename>.
|
||||
<filename>/usr/local/etc/bind10/cmdctl-keyfile.pem</filename>.
|
||||
(A sample key is at
|
||||
<filename>/usr/local/share/bind10-devel/cmdctl-keyfile.pem</filename>.)
|
||||
<filename>/usr/local/share/bind10/cmdctl-keyfile.pem</filename>.)
|
||||
It also uses a certificate located at
|
||||
<filename>/usr/local/etc/bind10-devel/cmdctl-certfile.pem</filename>.
|
||||
<filename>/usr/local/etc/bind10/cmdctl-certfile.pem</filename>.
|
||||
(A sample certificate is at
|
||||
<filename>/usr/local/share/bind10-devel/cmdctl-certfile.pem</filename>.)
|
||||
<filename>/usr/local/share/bind10/cmdctl-certfile.pem</filename>.)
|
||||
This may be a self-signed certificate or purchased from a
|
||||
certification authority.
|
||||
</para>
|
||||
@@ -1105,11 +1107,11 @@ but that is a single file, maybe this should go back to that format?
|
||||
<para>
|
||||
The <command>b10-cmdctl</command> daemon also requires
|
||||
the user account file located at
|
||||
<filename>/usr/local/etc/bind10-devel/cmdctl-accounts.csv</filename>.
|
||||
<filename>/usr/local/etc/bind10/cmdctl-accounts.csv</filename>.
|
||||
This comma-delimited file lists the accounts with a user name,
|
||||
hashed password, and salt.
|
||||
(A sample file is at
|
||||
<filename>/usr/local/share/bind10-devel/cmdctl-accounts.csv</filename>.
|
||||
<filename>/usr/local/share/bind10/cmdctl-accounts.csv</filename>.
|
||||
It contains the user named <quote>root</quote> with the password
|
||||
<quote>bind10</quote>.)
|
||||
</para>
|
||||
@@ -1139,14 +1141,14 @@ or accounts database -->
|
||||
The configuration items for <command>b10-cmdctl</command> are:
|
||||
<varname>accounts_file</varname> which defines the path to the
|
||||
user accounts database (the default is
|
||||
<filename>/usr/local/etc/bind10-devel/cmdctl-accounts.csv</filename>);
|
||||
<filename>/usr/local/etc/bind10/cmdctl-accounts.csv</filename>);
|
||||
<varname>cert_file</varname> which defines the path to the
|
||||
PEM certificate file (the default is
|
||||
<filename>/usr/local/etc/bind10-devel/cmdctl-certfile.pem</filename>);
|
||||
<filename>/usr/local/etc/bind10/cmdctl-certfile.pem</filename>);
|
||||
and
|
||||
<varname>key_file</varname> which defines the path to the
|
||||
PEM private key file (the default is
|
||||
<filename>/usr/local/etc/bind10-devel/cmdctl-keyfile.pem</filename>).
|
||||
<filename>/usr/local/etc/bind10/cmdctl-keyfile.pem</filename>).
|
||||
</para>
|
||||
|
||||
</section>
|
||||
@@ -1870,7 +1872,7 @@ tsig_keys/keys[0] "example.key.:c2VjcmV0" string (modified)
|
||||
Each rule is a name-value mapping (a dictionary, in the JSON
|
||||
terminology). Each rule must contain exactly one mapping called
|
||||
"action", which describes what should happen if the rule applies.
|
||||
There may be more mappings, calld matches, which describe the
|
||||
There may be more mappings, called matches, which describe the
|
||||
conditions under which the rule applies.
|
||||
</para>
|
||||
|
||||
@@ -2457,7 +2459,7 @@ can use various data source backends.
|
||||
data source — one that serves things like
|
||||
<quote>AUTHORS.BIND.</quote>. The IN class contains single SQLite3
|
||||
data source with database file located at
|
||||
<filename>/usr/local/var/bind10-devel/zone.sqlite3</filename>.
|
||||
<filename>/usr/local/var/bind10/zone.sqlite3</filename>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
@@ -2636,19 +2638,10 @@ can use various data source backends.
|
||||
|
||||
</para>
|
||||
|
||||
<para>
|
||||
The <option>-o</option> argument may be used to define the
|
||||
default origin for loaded zone file records.
|
||||
</para>
|
||||
|
||||
<note>
|
||||
<para>
|
||||
In the current release, only the SQLite3 back
|
||||
end is used by <command>b10-loadzone</command>.
|
||||
By default, it stores the zone data in
|
||||
<filename>/usr/local/var/bind10-devel/zone.sqlite3</filename>
|
||||
unless the <option>-d</option> switch is used to set the
|
||||
database filename.
|
||||
Multiple zones are stored in a single SQLite3 zone database.
|
||||
</para>
|
||||
</note>
|
||||
@@ -3680,7 +3673,7 @@ mysql></screen>
|
||||
<para>
|
||||
3. Create the database tables:
|
||||
<screen>mysql> <userinput>CONNECT kea;</userinput>
|
||||
mysql> <userinput>SOURCE <replaceable><path-to-bind10></replaceable>/share/bind10-devel/dhcpdb_create.mysql</userinput></screen>
|
||||
mysql> <userinput>SOURCE <replaceable><path-to-bind10></replaceable>/share/bind10/dhcpdb_create.mysql</userinput></screen>
|
||||
</para>
|
||||
<para>
|
||||
4. Create the user under which BIND 10 will access the database and grant it access to the database tables:
|
||||
@@ -3878,7 +3871,7 @@ Dhcp6/subnet6 [] list (default)</screen>
|
||||
<section id="dhcp6-limit">
|
||||
<title>DHCPv6 Server Limitations</title>
|
||||
<para> These are the current limitations and known problems
|
||||
with the the DHCPv6 server
|
||||
with the DHCPv6 server
|
||||
software. Most of them are reflections of the early stage of
|
||||
development and should be treated as <quote>not implemented
|
||||
yet</quote>, rather than actual limitations.</para>
|
||||
@@ -4163,7 +4156,7 @@ specify module-wide logging and see what appears...
|
||||
If there are multiple logger specifications in the
|
||||
configuration that might match a particular logger, the
|
||||
specification with the more specific logger name takes
|
||||
precedence. For example, if there are entries for for
|
||||
precedence. For example, if there are entries for
|
||||
both <quote>*</quote> and <quote>Resolver</quote>, the
|
||||
resolver module — and all libraries it uses —
|
||||
will log messages according to the configuration in the
|
||||
|
@@ -15,7 +15,7 @@ the "m4" subdirectory as a template for your own project. The key is
|
||||
to call the AX_ISC_BIND10 function (as the sample configure.ac does)
|
||||
from your configure.ac. Then it will check the availability of
|
||||
necessary stuff and set some corresponding AC variables. You can then
|
||||
use the resulting variables in your Makefile.in or Makefile.ac.
|
||||
use the resulting variables in your Makefile.in or Makefile.am.
|
||||
|
||||
If you use automake, don't forget adding the following line to the top
|
||||
level Makefile.am:
|
||||
|
m4macros/ax_boost_for_bind10.m4 (112 changes, new file)
@@ -0,0 +1,112 @@
dnl @synopsis AX_BOOST_FOR_BIND10
dnl
dnl Test for the Boost C++ header files intended to be used within BIND 10
dnl
dnl If no path to the installed boost header files is given via the
dnl --with-boost-include option, the macro searches under
dnl /usr/local /usr/pkg /opt /opt/local directories.
dnl If it cannot detect any workable path for Boost, this macro treats it
dnl as a fatal error (so it cannot be called if the availability of Boost
dnl is optional).
dnl
dnl This macro also tries to identify some known portability issues, and
dnl sets corresponding variables so the caller can react to (or ignore,
dnl depending on other configuration) specific issues appropriately.
dnl
dnl This macro calls:
dnl
dnl   AC_SUBST(BOOST_INCLUDES)
dnl
dnl And possibly sets:
dnl   CPPFLAGS_BOOST_THREADCONF should be added to CPPFLAGS by caller
dnl   BOOST_OFFSET_PTR_WOULDFAIL set to "yes" if offset_ptr would cause build
dnl     error; otherwise set to "no"
dnl   BOOST_NUMERIC_CAST_WOULDFAIL set to "yes" if numeric_cast would cause
dnl     build error; otherwise set to "no"
dnl

||||
AC_DEFUN([AX_BOOST_FOR_BIND10], [
|
||||
AC_LANG_SAVE
|
||||
AC_LANG([C++])
|
||||
|
||||
#
|
||||
# Configure Boost header path
|
||||
#
|
||||
# If explicitly specified, use it.
|
||||
AC_ARG_WITH([boost-include],
|
||||
AC_HELP_STRING([--with-boost-include=PATH],
|
||||
[specify exact directory for Boost headers]),
|
||||
[boost_include_path="$withval"])
|
||||
# If not specified, try some common paths.
|
||||
if test -z "$with_boost_include"; then
|
||||
boostdirs="/usr/local /usr/pkg /opt /opt/local"
|
||||
for d in $boostdirs
|
||||
do
|
||||
if test -f $d/include/boost/shared_ptr.hpp; then
|
||||
boost_include_path=$d/include
|
||||
break
|
||||
fi
|
||||
done
|
||||
fi
|
||||
|
||||
# Check the path with some specific headers.
|
||||
CPPFLAGS_SAVED="$CPPFLAGS"
|
||||
if test "${boost_include_path}" ; then
|
||||
BOOST_INCLUDES="-I${boost_include_path}"
|
||||
CPPFLAGS="$CPPFLAGS $BOOST_INCLUDES"
|
||||
fi
|
||||
AC_CHECK_HEADERS([boost/shared_ptr.hpp boost/foreach.hpp boost/interprocess/sync/interprocess_upgradable_mutex.hpp boost/date_time/posix_time/posix_time_types.hpp boost/bind.hpp boost/function.hpp],,
|
||||
AC_MSG_ERROR([Missing required header files.]))
|
||||
|
||||
# Detect whether Boost tries to use threads by default, and, if not,
|
||||
# make it sure explicitly. In some systems the automatic detection
|
||||
# may depend on preceding header files, and if inconsistency happens
|
||||
# it could lead to a critical disruption.
|
||||
AC_MSG_CHECKING([whether Boost tries to use threads])
|
||||
AC_TRY_COMPILE([
|
||||
#include <boost/config.hpp>
|
||||
#ifdef BOOST_HAS_THREADS
|
||||
#error "boost will use threads"
|
||||
#endif],,
|
||||
[AC_MSG_RESULT(no)
|
||||
CPPFLAGS_BOOST_THREADCONF="-DBOOST_DISABLE_THREADS=1"],
|
||||
[AC_MSG_RESULT(yes)])
|
||||
|
||||
# Boost offset_ptr is known to not compile on some platforms, depending on
|
||||
# boost version, its local configuration, and compiler. Detect it.
|
||||
AC_MSG_CHECKING([Boost offset_ptr compiles])
|
||||
AC_TRY_COMPILE([
|
||||
#include <boost/interprocess/offset_ptr.hpp>
|
||||
],,
|
||||
[AC_MSG_RESULT(yes)
|
||||
BOOST_OFFSET_PTR_WOULDFAIL=no],
|
||||
[AC_MSG_RESULT(no)
|
||||
BOOST_OFFSET_PTR_WOULDFAIL=yes])
|
||||
|
||||
# Detect build failure case known to happen with Boost installed via
|
||||
# FreeBSD ports
|
||||
if test "X$GXX" = "Xyes"; then
|
||||
CXXFLAGS_SAVED="$CXXFLAGS"
|
||||
CXXFLAGS="$CXXFLAGS -Werror"
|
||||
|
||||
AC_MSG_CHECKING([Boost numeric_cast compiles with -Werror])
|
||||
AC_TRY_COMPILE([
|
||||
#include <boost/numeric/conversion/cast.hpp>
|
||||
],[
|
||||
return (boost::numeric_cast<short>(0));
|
||||
],[AC_MSG_RESULT(yes)
|
||||
BOOST_NUMERIC_CAST_WOULDFAIL=no],
|
||||
[AC_MSG_RESULT(no)
|
||||
BOOST_NUMERIC_CAST_WOULDFAIL=yes])
|
||||
|
||||
CXXFLAGS="$CXXFLAGS_SAVED"
|
||||
else
|
||||
# This doesn't matter for non-g++
|
||||
BOOST_NUMERIC_CAST_WOULDFAIL=no
|
||||
fi
|
||||
|
||||
AC_SUBST(BOOST_INCLUDES)
|
||||
|
||||
CPPFLAGS="$CPPFLAGS_SAVED"
|
||||
AC_LANG_RESTORE
|
||||
])dnl AX_BOOST_FOR_BIND10
|
@@ -141,7 +141,7 @@ The specific problem is printed in the log message.
|
||||
The thread for maintaining data source clients has received a command to
|
||||
reconfigure, and has now started this process.
|
||||
|
||||
% AUTH_DATASRC_CLIENTS_BUILDER_RECONFIGURE_SUCCESS data source reconfiguration completed succesfully
|
||||
% AUTH_DATASRC_CLIENTS_BUILDER_RECONFIGURE_SUCCESS data source reconfiguration completed successfully
|
||||
The thread for maintaining data source clients has finished reconfiguring
|
||||
the data source clients, and is now running with the new configuration.
|
||||
|
||||
@@ -169,7 +169,7 @@ probably better to stop and restart it.
|
||||
|
||||
% AUTH_DATA_SOURCE data source database file: %1
|
||||
This is a debug message produced by the authoritative server when it accesses a
|
||||
datebase data source, listing the file that is being accessed.
|
||||
database data source, listing the file that is being accessed.
|
||||
|
||||
% AUTH_DNS_SERVICES_CREATED DNS services created
|
||||
This is a debug message indicating that the component that will handling
|
||||
@@ -184,7 +184,7 @@ reason for the failure is given in the message.) The server will drop the
|
||||
packet.
|
||||
|
||||
% AUTH_INVALID_STATISTICS_DATA invalid specification of statistics data specified
|
||||
An error was encountered when the authoritiative server specified
|
||||
An error was encountered when the authoritative server specified
|
||||
statistics data which is invalid for the auth specification file.
|
||||
|
||||
% AUTH_LOAD_TSIG loading TSIG keys
|
||||
@@ -208,7 +208,7 @@ requests to b10-ddns) to handle it, but it failed. The authoritative
|
||||
server returns SERVFAIL to the client on behalf of the separate
|
||||
process. The error could be configuration mismatch between b10-auth
|
||||
and the recipient component, or it may be because the requests are
|
||||
coming too fast and the receipient process cannot keep up with the
|
||||
coming too fast and the recipient process cannot keep up with the
|
||||
rate, or some system level failure. In either case this means the
|
||||
BIND 10 system is not working as expected, so the administrator should
|
||||
look into the cause and address the issue. The log message includes
|
||||
|
@@ -20,7 +20,7 @@
|
||||
<refentry>
|
||||
|
||||
<refentryinfo>
|
||||
<date>June 20, 2012</date>
|
||||
<date>December 18, 2012</date>
|
||||
</refentryinfo>
|
||||
|
||||
<refmeta>
|
||||
@@ -100,7 +100,7 @@
|
||||
<varname>database_file</varname> defines the path to the
|
||||
SQLite3 zone file when using the sqlite datasource.
|
||||
The default is
|
||||
<filename>/usr/local/var/bind10-devel/zone.sqlite3</filename>.
|
||||
<filename>/usr/local/var/bind10/zone.sqlite3</filename>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
@@ -157,6 +157,7 @@
|
||||
incoming TCP connections, in milliseconds. If the query
|
||||
is not sent within this time, the connection is closed.
|
||||
Setting this to 0 will disable TCP timeouts completely.
|
||||
The default is 5000 (five seconds).
|
||||
</para>
|
||||
|
||||
<!-- TODO: formating -->
|
||||
@@ -164,6 +165,15 @@
|
||||
The configuration commands are:
|
||||
</para>
|
||||
|
||||
<para>
|
||||
<command>getstats</command> tells <command>b10-auth</command>
|
||||
to report its defined statistics data in JSON format.
|
||||
It will not report about unused counters.
|
||||
This is used by the
|
||||
<citerefentry><refentrytitle>b10-stats</refentrytitle><manvolnum>8</manvolnum></citerefentry> daemon.
|
||||
(The <command>sendstats</command> command is deprecated.)
|
||||
</para>
|
||||
|
||||
<para>
|
||||
<command>loadzone</command> tells <command>b10-auth</command>
|
||||
to load or reload a zone file. The arguments include:
|
||||
@@ -180,13 +190,6 @@
|
||||
</simpara></note>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
<command>sendstats</command> tells <command>b10-auth</command>
|
||||
to send its statistics data to
|
||||
<citerefentry><refentrytitle>b10-stats</refentrytitle><manvolnum>8</manvolnum></citerefentry>
|
||||
immediately.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
<command>shutdown</command> exits <command>b10-auth</command>.
|
||||
This has an optional <varname>pid</varname> argument to
|
||||
@@ -195,6 +198,28 @@
|
||||
if configured.)
|
||||
</para>
|
||||
|
||||
<para>
|
||||
<command>start_ddns_forwarder</command> starts (or restarts) the
|
||||
internal forwarding of DDNS Update messages.
|
||||
This is used by the
|
||||
<citerefentry><refentrytitle>b10-ddns</refentrytitle><manvolnum>8</manvolnum></citerefentry>
|
||||
daemon to tell <command>b10-auth</command> that DDNS Update
|
||||
messages can be forwarded.
|
||||
<note><simpara>This is not expected to be called by administrators;
|
||||
it will be removed as a public command in the future.</simpara></note>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
<command>stop_ddns_forwarder</command> stops the internal
|
||||
forwarding of DDNS Update messages.
|
||||
This is used by the
|
||||
<citerefentry><refentrytitle>b10-ddns</refentrytitle><manvolnum>8</manvolnum></citerefentry>
|
||||
daemon to tell <command>b10-auth</command> that DDNS Update
|
||||
messages should not be forwarded.
|
||||
<note><simpara>This is not expected to be called by administrators;
|
||||
it will be removed as a public command in the future.</simpara></note>
|
||||
</para>
|
||||
|
||||
</refsect1>
|
||||
|
||||
<refsect1>
|
||||
@@ -230,7 +255,7 @@
|
||||
<refsect1>
|
||||
<title>FILES</title>
|
||||
<para>
|
||||
<filename>/usr/local/var/bind10-devel/zone.sqlite3</filename>
|
||||
<filename>/usr/local/var/bind10/zone.sqlite3</filename>
|
||||
— Location for the SQLite3 zone database
|
||||
when <emphasis>database_file</emphasis> configuration is not
|
||||
defined.
|
||||
@@ -243,6 +268,9 @@
|
||||
<citerefentry>
|
||||
<refentrytitle>b10-cfgmgr</refentrytitle><manvolnum>8</manvolnum>
|
||||
</citerefentry>,
|
||||
<citerefentry>
|
||||
<refentrytitle>b10-ddns</refentrytitle><manvolnum>8</manvolnum>
|
||||
</citerefentry>,
|
||||
<citerefentry>
|
||||
<refentrytitle>b10-loadzone</refentrytitle><manvolnum>8</manvolnum>
|
||||
</citerefentry>,
|
||||
|
src/bin/auth/tests/.gitignore (2 changes, vendored)
@@ -1 +1,3 @@
|
||||
/run_unittests
|
||||
/example_base_inc.cc
|
||||
/example_nsec3_inc.cc
|
||||
|
@@ -7,7 +7,8 @@ AM_CPPFLAGS += -I$(top_builddir)/src/lib/cc
|
||||
AM_CPPFLAGS += $(BOOST_INCLUDES)
|
||||
AM_CPPFLAGS += -DAUTH_OBJ_DIR=\"$(abs_top_builddir)/src/bin/auth\"
|
||||
AM_CPPFLAGS += -DTEST_DATA_DIR=\"$(abs_top_srcdir)/src/lib/testutils/testdata\"
|
||||
AM_CPPFLAGS += -DTEST_OWN_DATA_DIR=\"$(abs_top_srcdir)/src/bin/auth/tests/testdata\"
|
||||
AM_CPPFLAGS += -DTEST_OWN_DATA_DIR=\"$(abs_srcdir)/testdata\"
|
||||
AM_CPPFLAGS += -DTEST_OWN_DATA_BUILDDIR=\"$(abs_builddir)/testdata\"
|
||||
AM_CPPFLAGS += -DTEST_DATA_BUILDDIR=\"$(abs_top_builddir)/src/lib/testutils/testdata\"
|
||||
AM_CPPFLAGS += -DDSRC_DIR=\"$(abs_top_builddir)/src/lib/datasrc\"
|
||||
AM_CPPFLAGS += -DPLUGIN_DATA_PATH=\"$(abs_top_builddir)/src/bin/cfgmgr/plugins\"
|
||||
@@ -50,7 +51,6 @@ run_unittests_SOURCES += config_syntax_unittest.cc
|
||||
run_unittests_SOURCES += command_unittest.cc
|
||||
run_unittests_SOURCES += common_unittest.cc
|
||||
run_unittests_SOURCES += query_unittest.cc
|
||||
run_unittests_SOURCES += query_inmemory_unittest.cc
|
||||
run_unittests_SOURCES += statistics_unittest.cc
|
||||
run_unittests_SOURCES += test_datasrc_clients_mgr.h test_datasrc_clients_mgr.cc
|
||||
run_unittests_SOURCES += datasrc_clients_builder_unittest.cc
|
||||
@@ -81,6 +81,40 @@ run_unittests_LDADD += $(top_builddir)/src/lib/util/threads/libb10-threads.la
|
||||
run_unittests_LDADD += $(GTEST_LDADD)
|
||||
run_unittests_LDADD += $(SQLITE_LIBS)
|
||||
|
||||
# The following are definitions for auto-generating test data for query
|
||||
# tests.
|
||||
BUILT_SOURCES = example_base_inc.cc example_nsec3_inc.cc
|
||||
BUILT_SOURCES += testdata/example-base.sqlite3
|
||||
BUILT_SOURCES += testdata/example-nsec3.sqlite3
|
||||
|
||||
EXTRA_DIST = gen-query-testdata.py
|
||||
|
||||
CLEANFILES += example_base_inc.cc example_nsec3_inc.cc
|
||||
|
||||
example_base_inc.cc: $(srcdir)/testdata/example-base-inc.zone
|
||||
$(PYTHON) $(srcdir)/gen-query-testdata.py \
|
||||
$(srcdir)/testdata/example-base-inc.zone example_base_inc.cc
|
||||
|
||||
example_nsec3_inc.cc: $(srcdir)/testdata/example-nsec3-inc.zone
|
||||
$(PYTHON) $(srcdir)/gen-query-testdata.py \
|
||||
$(srcdir)/testdata/example-nsec3-inc.zone example_nsec3_inc.cc
|
||||
|
||||
testdata/example-base.sqlite3: testdata/example-base.zone
|
||||
$(top_srcdir)/install-sh -c \
|
||||
$(srcdir)/testdata/example-common-inc-template.zone \
|
||||
testdata/example-common-inc.zone
|
||||
$(SHELL) $(top_builddir)/src/bin/loadzone/run_loadzone.sh \
|
||||
-c "{\"database_file\": \"$(builddir)/testdata/example-base.sqlite3\"}" \
|
||||
example.com testdata/example-base.zone
|
||||
|
||||
testdata/example-nsec3.sqlite3: testdata/example-nsec3.zone
|
||||
$(top_srcdir)/install-sh -c \
|
||||
$(srcdir)/testdata/example-common-inc-template.zone \
|
||||
testdata/example-common-inc.zone
|
||||
$(SHELL) $(top_builddir)/src/bin/loadzone/run_loadzone.sh \
|
||||
-c "{\"database_file\": \"$(builddir)/testdata/example-nsec3.sqlite3\"}" \
|
||||
example.com testdata/example-nsec3.zone
|
||||
|
||||
check-local:
|
||||
B10_FROM_BUILD=${abs_top_builddir} ./run_unittests
|
||||
|
||||
|
src/bin/auth/tests/gen-query-testdata.py (98 changes, new executable file)
@@ -0,0 +1,98 @@
|
||||
#!/usr/bin/env python3
|
||||
|
||||
# Copyright (C) 2012 Internet Systems Consortium, Inc. ("ISC")
|
||||
#
|
||||
# Permission to use, copy, modify, and/or distribute this software for any
|
||||
# purpose with or without fee is hereby granted, provided that the above
|
||||
# copyright notice and this permission notice appear in all copies.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH
|
||||
# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
|
||||
# AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT,
|
||||
# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
|
||||
# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
|
||||
# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
|
||||
# PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
"""\
|
||||
This is a supplemental script to auto generate test data in the form of
|
||||
C++ source code from a DNS zone file.
|
||||
|
||||
Usage: python gen-query-testdata.py source_file output-cc-file
|
||||
|
||||
The usage doesn't matter much, though, because it's expected to be invoked
|
||||
from Makefile, and that would be only use case of this script.
|
||||
"""
|
||||
|
||||
import sys
|
||||
import re
|
||||
|
||||
# Markup for variable definition
|
||||
re_start_rr = re.compile('^;var=(.*)')
|
||||
|
||||
# Skip lines starting with ';' (comments) or empty lines. re_start_rr
|
||||
# will also match this expression, so it should be checked first.
|
||||
re_skip = re.compile('(^;)|(^\s*$)')
|
||||
|
||||
def parse_input(input_file):
|
||||
'''Build an internal list of RR data from the input source file.
|
||||
|
||||
It generates a list of (variable_name, list of RR) tuples, where
|
||||
variable_name is the expected C++ variable name for the subsequent RRs
|
||||
if they are expected to be named. It can be an empty string if the RRs
|
||||
are only expected to appear in the zone file.
|
||||
The second element of the tuple is a list of strings, each of which
|
||||
represents a single RR, e.g., "example.com 3600 IN A 192.0.2.1".
|
||||
|
||||
'''
|
||||
result = []
|
||||
rrs = None
|
||||
with open(input_file) as f:
|
||||
for line in f:
|
||||
m = re_start_rr.match(line)
|
||||
if m:
|
||||
if rrs is not None:
|
||||
result.append((rr_varname, rrs))
|
||||
rrs = []
|
||||
rr_varname = m.group(1)
|
||||
elif re_skip.match(line):
|
||||
continue
|
||||
else:
|
||||
rrs.append(line.rstrip('\n'))
|
||||
|
||||
# if needed, store the last RRs (they are not followed by 'var=' mark)
|
||||
if rrs is not None:
|
||||
result.append((rr_varname, rrs))
|
||||
|
||||
return result
|
||||
|
||||
def generate_variables(out_file, rrsets_data):
'''Generate a C++ source file containing C-string variables for RRs.

This produces a definition of a C-string for each RRset that is expected
to be named, as follows:
const char* const var_name =
    "example.com. 3600 IN A 192.0.2.1\n"
    "example.com. 3600 IN A 192.0.2.2\n";

The escape character '\' in the string will be further escaped so it will
compile.

'''
|
||||
with open(out_file, 'w') as out:
|
||||
for (var_name, rrs) in rrsets_data:
|
||||
if len(var_name) > 0:
|
||||
out.write('const char* const ' + var_name + ' =\n')
|
||||
# Combine all RRs, escaping '\' as a C-string
|
||||
out.write('\n'.join([' \"%s\\n\"' %
|
||||
(rr.replace('\\', '\\\\'))
|
||||
for rr in rrs]))
|
||||
out.write(';\n')
|
||||
|
||||
if __name__ == "__main__":
|
||||
if len(sys.argv) < 3:
|
||||
sys.stderr.write('gen-query-testdata.py require 2 args\n')
|
||||
sys.exit(1)
|
||||
rrsets_data = parse_input(sys.argv[1])
|
||||
generate_variables(sys.argv[2], rrsets_data)
|
||||
|
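To make the ';var=' convention and the generated C-string output concrete, the following is a minimal standalone Python sketch, not part of the commit; the RR data and the variable name www_a_txt are invented for illustration. It mimics the parse/generate steps of gen-query-testdata.py on an in-memory string instead of the source and output files.

#!/usr/bin/env python3
# Illustrative sketch only: mimics gen-query-testdata.py's transformation
# on an in-memory example (the RR data below is made up).
import re

re_start_rr = re.compile('^;var=(.*)')   # ';var=<name>' starts a named group
re_skip = re.compile(r'(^;)|(^\s*$)')    # other comments and blank lines

zone_text = """\
;var=www_a_txt
www.example.com. 3600 IN A 192.0.2.80
www.example.com. 3600 IN A 192.0.2.81
"""

# Parse: collect (variable_name, [RR text, ...]) tuples.
result, rr_varname, rrs = [], None, None
for line in zone_text.splitlines():
    m = re_start_rr.match(line)
    if m:
        if rrs is not None:
            result.append((rr_varname, rrs))
        rr_varname, rrs = m.group(1), []
    elif re_skip.match(line):
        continue
    else:
        rrs.append(line)
if rrs is not None:
    result.append((rr_varname, rrs))

# Generate: emit one C-string definition per named group.
for var_name, records in result:
    print('const char* const ' + var_name + ' =')
    print('\n'.join('    "%s\\n"' % r.replace('\\', '\\\\') for r in records) + ';')

Running the sketch prints a fragment of the same shape as the example_*_inc.cc files that the Makefile rules above would be expected to generate from the real test zone data.
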
@@ -1,123 +0,0 @@
|
||||
// Copyright (C) 2012 Internet Systems Consortium, Inc. ("ISC")
|
||||
//
|
||||
// Permission to use, copy, modify, and/or distribute this software for any
|
||||
// purpose with or without fee is hereby granted, provided that the above
|
||||
// copyright notice and this permission notice appear in all copies.
|
||||
//
|
||||
// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH
|
||||
// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
|
||||
// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT,
|
||||
// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
|
||||
// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
|
||||
// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
|
||||
// PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
#include <dns/name.h>
|
||||
#include <dns/message.h>
|
||||
#include <dns/rcode.h>
|
||||
#include <dns/opcode.h>
|
||||
|
||||
#include <cc/data.h>
|
||||
|
||||
#include <datasrc/client_list.h>
|
||||
|
||||
#include <auth/query.h>
|
||||
|
||||
#include <testutils/dnsmessage_test.h>
|
||||
|
||||
#include <gtest/gtest.h>
|
||||
|
||||
#include <string>
|
||||
|
||||
using namespace isc::dns;
|
||||
using namespace isc::auth;
|
||||
using namespace isc::testutils;
|
||||
using isc::datasrc::ConfigurableClientList;
|
||||
using std::string;
|
||||
|
||||
namespace {
|
||||
|
||||
// The DNAME to do tests against
|
||||
const char* const dname_txt =
|
||||
"dname.example.com. 3600 IN DNAME "
|
||||
"somethinglong.dnametarget.example.com.\n";
|
||||
// This is not inside the zone, this is created at runtime
|
||||
const char* const synthetized_cname_txt =
|
||||
"www.dname.example.com. 3600 IN CNAME "
|
||||
"www.somethinglong.dnametarget.example.com.\n";
|
||||
|
||||
// This is a subset of QueryTest using (subset of) the same test data, but
|
||||
// with the production in-memory data source. Both tests should be eventually
|
||||
// unified to avoid duplicates.
|
||||
class InMemoryQueryTest : public ::testing::Test {
|
||||
protected:
|
||||
InMemoryQueryTest() : list(RRClass::IN()), response(Message::RENDER) {
|
||||
response.setRcode(Rcode::NOERROR());
|
||||
response.setOpcode(Opcode::QUERY());
|
||||
list.configure(isc::data::Element::fromJSON(
|
||||
"[{\"type\": \"MasterFiles\","
|
||||
" \"cache-enable\": true, "
|
||||
" \"params\": {\"example.com\": \"" +
|
||||
string(TEST_OWN_DATA_DIR "/example.zone") +
|
||||
"\"}}]"), true);
|
||||
}
|
||||
|
||||
ConfigurableClientList list;
|
||||
Message response;
|
||||
Query query;
|
||||
};
|
||||
|
||||
// A wrapper to check resulting response message commonly used in
|
||||
// tests below.
|
||||
// check_origin needs to be specified only when the authority section has
|
||||
// an SOA RR. The interface is not generic enough but should be okay
|
||||
// for our test cases in practice.
|
||||
void
|
||||
responseCheck(Message& response, const isc::dns::Rcode& rcode,
|
||||
unsigned int flags, const unsigned int ancount,
|
||||
const unsigned int nscount, const unsigned int arcount,
|
||||
const char* const expected_answer,
|
||||
const char* const expected_authority,
|
||||
const char* const expected_additional,
|
||||
const Name& check_origin = Name::ROOT_NAME())
|
||||
{
|
||||
// In our test cases QID, Opcode, and QDCOUNT should be constant, so
|
||||
// we don't bother the test cases specifying these values.
|
||||
headerCheck(response, response.getQid(), rcode, Opcode::QUERY().getCode(),
|
||||
flags, 0, ancount, nscount, arcount);
|
||||
if (expected_answer != NULL) {
|
||||
rrsetsCheck(expected_answer,
|
||||
response.beginSection(Message::SECTION_ANSWER),
|
||||
response.endSection(Message::SECTION_ANSWER),
|
||||
check_origin);
|
||||
}
|
||||
if (expected_authority != NULL) {
|
||||
rrsetsCheck(expected_authority,
|
||||
response.beginSection(Message::SECTION_AUTHORITY),
|
||||
response.endSection(Message::SECTION_AUTHORITY),
|
||||
check_origin);
|
||||
}
|
||||
if (expected_additional != NULL) {
|
||||
rrsetsCheck(expected_additional,
|
||||
response.beginSection(Message::SECTION_ADDITIONAL),
|
||||
response.endSection(Message::SECTION_ADDITIONAL));
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* Test a query under a domain with DNAME. We should get a synthetized CNAME
|
||||
* as well as the DNAME.
|
||||
*
|
||||
* TODO: Once we have CNAME chaining, check it works with synthetized CNAMEs
|
||||
* as well. This includes tests pointing inside the zone, outside the zone,
|
||||
* pointing to NXRRSET and NXDOMAIN cases (similarly as with CNAME).
|
||||
*/
|
||||
TEST_F(InMemoryQueryTest, DNAME) {
|
||||
query.process(list, Name("www.dname.example.com"), RRType::A(),
|
||||
response);
|
||||
|
||||
responseCheck(response, Rcode::NOERROR(), AA_FLAG, 2, 0, 0,
|
||||
(string(dname_txt) + synthetized_cname_txt).c_str(),
|
||||
NULL, NULL);
|
||||
}
|
||||
}
|
File diff suppressed because it is too large
src/bin/auth/tests/testdata/.gitignore (10 changes, vendored)
@@ -6,3 +6,13 @@
|
||||
/shortanswer_fromWire.wire
|
||||
/simplequery_fromWire.wire
|
||||
/simpleresponse_fromWire.wire
|
||||
/example-base.sqlite3
|
||||
/example-base.sqlite3.copied
|
||||
/example-base.zone
|
||||
/example-base.zone
|
||||
/example-common-inc.zone
|
||||
/example-nsec3-inc.zone
|
||||
/example-nsec3.sqlite3
|
||||
/example-nsec3.sqlite3.copied
|
||||
/example-nsec3.zone
|
||||
/example.zone
|
||||
|
src/bin/auth/tests/testdata/Makefile.am (7 changes, vendored)
@@ -1,4 +1,6 @@
|
||||
CLEANFILES = *.wire
|
||||
CLEANFILES = *.wire *.copied
|
||||
CLEANFILES += example-base.sqlite3 example-nsec3.sqlite3
|
||||
CLEANFILES += example-common-inc.zone
|
||||
|
||||
BUILT_SOURCES = badExampleQuery_fromWire.wire examplequery_fromWire.wire
|
||||
BUILT_SOURCES += iqueryresponse_fromWire.wire multiquestion_fromWire.wire
|
||||
@@ -24,5 +26,8 @@ EXTRA_DIST += example.com
|
||||
EXTRA_DIST += example.zone
|
||||
EXTRA_DIST += example.sqlite3
|
||||
|
||||
EXTRA_DIST += example-base-inc.zone example-nsec3-inc.zone
|
||||
EXTRA_DIST += example-common-inc-template.zone
|
||||
|
||||
.spec.wire:
|
||||
$(PYTHON) $(top_builddir)/src/lib/util/python/gen_wiredata.py -o $@ $<
|
||||
|
src/bin/auth/tests/testdata/example-base-inc.zone (236 changes, new file, vendored)
@@ -0,0 +1,236 @@
|
||||
;; This file defines a set of RRs commonly used in query tests in the
|
||||
;; form of standard master zone file.
|
||||
;;
|
||||
;; It's a sequence of the following pattern:
|
||||
;; ;var=<var_name>
|
||||
;; RR_1
|
||||
;; RR_2
|
||||
;; ..
|
||||
;; RR_n
|
||||
;;
|
||||
;; where var_name is a string that can be used as a variable name in a
|
||||
;; C/C++ source file or an empty string. RR_x is a single-line
|
||||
;; textual representation of an arbitrary DNS RR.
|
||||
;;
|
||||
;; If var_name is non empty, the generator script will define a C
|
||||
;; variable of C-string type for that set of RRs so that it can be referred
|
||||
;; to in the test source file.
|
||||
;;
|
||||
;; Note that lines beginning with ';var=' are no different from other
;; comment lines as far as the zone file is concerned. They have special meaning only for the
|
||||
;; generator script. Obviously, real comment lines cannot begin with
|
||||
;; ';var=' (which should be less likely to happen in practice though).
|
||||
;;
|
||||
;; These RRs will be loaded into in-memory data source in that order.
|
||||
;; Note that it may impose stricter restriction on the order of RRs.
|
||||
;; In general, each RRset of the same name and type and its RRSIG (if
|
||||
;; any) is expected to be grouped.
|
||||
|
||||
;var=soa_txt
|
||||
example.com. 3600 IN SOA . . 1 0 0 0 0
|
||||
;var=zone_ns_txt
|
||||
example.com. 3600 IN NS glue.delegation.example.com.
|
||||
example.com. 3600 IN NS noglue.example.com.
|
||||
example.com. 3600 IN NS example.net.
|
||||
|
||||
;var=
|
||||
example.com. 3600 IN RRSIG SOA 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
example.com. 3600 IN RRSIG NS 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;; Note: the position of the next RR is tricky. It's placed here to
|
||||
;; be grouped with the subsequent A RR of the name. But we also want
|
||||
;; to group the A RR with other RRs of a different owner name, so the RRSIG
|
||||
;; cannot be placed after the A RR. The empty 'var=' specification is
|
||||
;; not necessary here, but in case we want to reorganize the ordering
|
||||
;; (in which case it's more likely to be needed), we keep it here.
|
||||
;var=
|
||||
noglue.example.com. 3600 IN RRSIG A 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;var=ns_addrs_txt
|
||||
noglue.example.com. 3600 IN A 192.0.2.53
|
||||
glue.delegation.example.com. 3600 IN A 192.0.2.153
|
||||
glue.delegation.example.com. 3600 IN AAAA 2001:db8::53
|
||||
|
||||
;var=
|
||||
glue.delegation.example.com. 3600 IN RRSIG A 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
glue.delegation.example.com. 3600 IN RRSIG AAAA 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;var=delegation_txt
|
||||
delegation.example.com. 3600 IN NS glue.delegation.example.com.
|
||||
delegation.example.com. 3600 IN NS noglue.example.com.
|
||||
delegation.example.com. 3600 IN NS cname.example.com.
|
||||
delegation.example.com. 3600 IN NS example.org.
|
||||
|
||||
;var=
|
||||
delegation.example.com. 3600 IN RRSIG DS 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;; Borrowed from the RFC4035
|
||||
;var=delegation_ds_txt
|
||||
delegation.example.com. 3600 IN DS 57855 5 1 B6DCD485719ADCA18E5F3D48A2331627FDD3 636B
|
||||
;var=mx_txt
|
||||
mx.example.com. 3600 IN MX 10 www.example.com.
|
||||
mx.example.com. 3600 IN MX 20 mailer.example.org.
|
||||
mx.example.com. 3600 IN MX 30 mx.delegation.example.com.
|
||||
;var=www_a_txt
|
||||
www.example.com. 3600 IN A 192.0.2.80
|
||||
|
||||
;var=
|
||||
www.example.com. 3600 IN RRSIG A 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;var=cname_txt
|
||||
cname.example.com. 3600 IN CNAME www.example.com.
|
||||
;var=cname_nxdom_txt
|
||||
cnamenxdom.example.com. 3600 IN CNAME nxdomain.example.com.
|
||||
;; CNAME Leading out of zone
|
||||
;var=cname_out_txt
|
||||
cnameout.example.com. 3600 IN CNAME www.example.org.
|
||||
;; The DNAME to do tests against
|
||||
;var=dname_txt
|
||||
dname.example.com. 3600 IN DNAME somethinglong.dnametarget.example.com.
|
||||
;; Some data at the dname node (allowed by RFC 2672)
|
||||
;var=dname_a_txt
|
||||
dname.example.com. 3600 IN A 192.0.2.5
|
||||
;; This is not inside the zone, this is created at runtime
|
||||
;; www.dname.example.com. 3600 IN CNAME www.somethinglong.dnametarget.example.com.
|
||||
;; The rest of data won't be referenced from the test cases.
|
||||
;var=other_zone_rrs
|
||||
cnamemailer.example.com. 3600 IN CNAME www.example.com.
|
||||
cnamemx.example.com. 3600 IN MX 10 cnamemailer.example.com.
|
||||
mx.delegation.example.com. 3600 IN A 192.0.2.100
|
||||
;; Wildcards
|
||||
;var=wild_txt
|
||||
*.wild.example.com. 3600 IN A 192.0.2.7
|
||||
;var=nsec_wild_txt
|
||||
*.wild.example.com. 3600 IN NSEC www.example.com. A NSEC RRSIG
|
||||
|
||||
;var=
|
||||
*.wild.example.com. 3600 IN RRSIG A 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
*.wild.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;var=cnamewild_txt
|
||||
*.cnamewild.example.com. 3600 IN CNAME www.example.org.
|
||||
;var=nsec_cnamewild_txt
|
||||
*.cnamewild.example.com. 3600 IN NSEC delegation.example.com. CNAME NSEC RRSIG
|
||||
|
||||
;var=
|
||||
*.cnamewild.example.com. 3600 IN RRSIG CNAME 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
*.cnamewild.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;; Wildcard_nxrrset
|
||||
;var=wild_txt_nxrrset
|
||||
*.uwild.example.com. 3600 IN A 192.0.2.9
|
||||
;var=nsec_wild_txt_nxrrset
|
||||
*.uwild.example.com. 3600 IN NSEC www.uwild.example.com. A NSEC RRSIG
|
||||
;var=
|
||||
*.uwild.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
;var=wild_txt_next
|
||||
www.uwild.example.com. 3600 IN A 192.0.2.11
|
||||
;var=
|
||||
www.uwild.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
;var=nsec_wild_txt_next
|
||||
www.uwild.example.com. 3600 IN NSEC *.wild.example.com. A NSEC RRSIG
|
||||
;; Wildcard empty
|
||||
;var=empty_txt
|
||||
b.*.t.example.com. 3600 IN A 192.0.2.13
|
||||
;var=nsec_empty_txt
|
||||
b.*.t.example.com. 3600 IN NSEC *.uwild.example.com. A NSEC RRSIG
|
||||
|
||||
;var=
|
||||
b.*.t.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;var=empty_prev_txt
|
||||
t.example.com. 3600 IN A 192.0.2.15
|
||||
;var=nsec_empty_prev_txt
|
||||
t.example.com. 3600 IN NSEC b.*.t.example.com. A NSEC RRSIG
|
||||
|
||||
;var=
|
||||
t.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;; Used in NXDOMAIN proof test. We are going to test some unusual case where
|
||||
;; the best possible wildcard is below the "next domain" of the NSEC RR that
|
||||
;; proves the NXDOMAIN, i.e.,
|
||||
;; mx.example.com. (exist)
|
||||
;; (.no.example.com. (qname, NXDOMAIN)
|
||||
;; ).no.example.com. (exist)
|
||||
;; *.no.example.com. (best possible wildcard, not exist)
|
||||
;var=no_txt
|
||||
\).no.example.com. 3600 IN AAAA 2001:db8::53
|
||||
;; NSEC records.
|
||||
;var=nsec_apex_txt
|
||||
example.com. 3600 IN NSEC cname.example.com. NS SOA NSEC RRSIG
|
||||
;var=
|
||||
example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
;var=nsec_mx_txt
|
||||
mx.example.com. 3600 IN NSEC \).no.example.com. MX NSEC RRSIG
|
||||
|
||||
;var=
|
||||
mx.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;var=nsec_no_txt
|
||||
\).no.example.com. 3600 IN NSEC nz.no.example.com. AAAA NSEC RRSIG
|
||||
|
||||
;var=
|
||||
\).no.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;; We'll also test the case where a single NSEC proves both NXDOMAIN and the
|
||||
;; non existence of wildcard. The following records will be used for that
|
||||
;; test.
|
||||
;; ).no.example.com. (exist, whose NSEC proves everything)
|
||||
;; *.no.example.com. (best possible wildcard, not exist)
|
||||
;; nx.no.example.com. (NXDOMAIN)
|
||||
;; nz.no.example.com. (exist)
|
||||
;var=nz_txt
|
||||
nz.no.example.com. 3600 IN AAAA 2001:db8::5300
|
||||
;var=nsec_nz_txt
|
||||
nz.no.example.com. 3600 IN NSEC noglue.example.com. AAAA NSEC RRSIG
|
||||
;var=nsec_nxdomain_txt
|
||||
noglue.example.com. 3600 IN NSEC nonsec.example.com. A
|
||||
|
||||
;var=
|
||||
noglue.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;; NSEC for the normal NXRRSET case
|
||||
;var=nsec_www_txt
|
||||
www.example.com. 3600 IN NSEC example.com. A NSEC RRSIG
|
||||
|
||||
;var=
|
||||
www.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;; Authoritative data without NSEC
|
||||
;var=nonsec_a_txt
|
||||
nonsec.example.com. 3600 IN A 192.0.2.0
|
||||
|
||||
;; (Secure) delegation data; Delegation with DS record
|
||||
;var=signed_delegation_txt
|
||||
signed-delegation.example.com. 3600 IN NS ns.example.net.
|
||||
;var=signed_delegation_ds_txt
|
||||
signed-delegation.example.com. 3600 IN DS 12345 8 2 764501411DE58E8618945054A3F620B36202E115D015A7773F4B78E0F952CECA
|
||||
|
||||
;var=
|
||||
signed-delegation.example.com. 3600 IN RRSIG DS 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;; (Secure) delegation data; Delegation without DS record (and both NSEC
|
||||
;; and NSEC3 denying its existence)
|
||||
;var=unsigned_delegation_txt
|
||||
unsigned-delegation.example.com. 3600 IN NS ns.example.net.
|
||||
;var=unsigned_delegation_nsec_txt
|
||||
unsigned-delegation.example.com. 3600 IN NSEC unsigned-delegation-optout.example.com. NS RRSIG NSEC
|
||||
|
||||
;var=
|
||||
unsigned-delegation.example.com. 3600 IN RRSIG NSEC 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;; Delegation without DS record, and no direct matching NSEC3 record
|
||||
;var=unsigned_delegation_optout_txt
|
||||
unsigned-delegation-optout.example.com. 3600 IN NS ns.example.net.
|
||||
;var=unsigned_delegation_optout_nsec_txt
|
||||
unsigned-delegation-optout.example.com. 3600 IN NSEC *.uwild.example.com. NS RRSIG NSEC
|
||||
|
||||
;; (Secure) delegation data; Delegation where the DS lookup will raise an
|
||||
;; exception.
|
||||
;var=bad_delegation_txt
|
||||
bad-delegation.example.com. 3600 IN NS ns.example.net.
|
||||
|
||||
;; Delegation from an unsigned parent. There's no DS, and there's no NSEC
|
||||
;; or NSEC3 that proves it.
|
||||
;var=nosec_delegation_txt
|
||||
nosec-delegation.example.com. 3600 IN NS ns.nosec.example.net.
|
7
src/bin/auth/tests/testdata/example-base.zone.in
vendored
Normal file
@@ -0,0 +1,7 @@
|
||||
;;
|
||||
;; This is a complete (but crafted and somewhat broken) zone file used
|
||||
;; in query tests.
|
||||
;;
|
||||
|
||||
$INCLUDE @abs_srcdir@/example-base-inc.zone
|
||||
$INCLUDE @abs_builddir@/example-common-inc.zone
|
5
src/bin/auth/tests/testdata/example-common-inc-template.zone
vendored
Normal file
@@ -0,0 +1,5 @@
|
||||
;;
|
||||
;; This is an initial template of part of test zone file used in query test
|
||||
;; and expected to be included from other zone files. This is
|
||||
;; intentionally kept empty.
|
||||
;;
|
16
src/bin/auth/tests/testdata/example-nsec3-inc.zone
vendored
Normal file
@@ -0,0 +1,16 @@
|
||||
;; See query_testzone_data.txt for general notes.
|
||||
|
||||
;; NSEC3PARAM. This is needed for database-based data source to
|
||||
;; signal the zone is NSEC3-signed
|
||||
;var=
|
||||
example.com. 3600 IN NSEC3PARAM 1 1 12 aabbccdd
|
||||
|
||||
;; NSEC3 RRs. You may also need to add mapping to MockZoneFinder::hash_map_.
|
||||
;var=nsec3_apex_txt
|
||||
0p9mhaveqvm6t7vbl5lop2u3t2rp3tom.example.com. 3600 IN NSEC3 1 1 12 aabbccdd 2t7b4g4vsa5smi47k61mv5bv1a22bojr NS SOA NSEC3PARAM RRSIG
|
||||
;var=nsec3_apex_rrsig_txt
|
||||
0p9mhaveqvm6t7vbl5lop2u3t2rp3tom.example.com. 3600 IN RRSIG NSEC3 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
;var=nsec3_www_txt
|
||||
q04jkcevqvmu85r014c7dkba38o0ji5r.example.com. 3600 IN NSEC3 1 1 12 aabbccdd r53bq7cc2uvmubfu5ocmm6pers9tk9en A RRSIG
|
||||
;var=nsec3_www_rrsig_txt
|
||||
q04jkcevqvmu85r014c7dkba38o0ji5r.example.com. 3600 IN RRSIG NSEC3 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
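;; (Illustrative note, not part of the test data: the two owner names above are
;; the NSEC3 hashes for the zone apex and for www.example.com. As the comment
;; above says, records added here usually need a matching entry in
;; MockZoneFinder::hash_map_ in the query unit tests; a rough, hypothetical
;; sketch of the shape of such an entry is
;;   hash_map_[Name("www.example.com")] = "q04jkcevqvmu85r014c7dkba38o0ji5r";
;; the exact member type is defined in the test code, not here.)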
|
8
src/bin/auth/tests/testdata/example-nsec3.zone.in
vendored
Normal file
@@ -0,0 +1,8 @@
|
||||
;;
|
||||
;; This is a complete (but crafted and somewhat broken) zone file used
|
||||
;; in query tests, including NSEC3 records, making the zone "NSEC3 signed".
|
||||
;;
|
||||
|
||||
$INCLUDE @abs_srcdir@/example-base-inc.zone
|
||||
$INCLUDE @abs_srcdir@/example-nsec3-inc.zone
|
||||
$INCLUDE @abs_builddir@/example-common-inc.zone
|
121
src/bin/auth/tests/testdata/example.zone
vendored
@@ -1,121 +0,0 @@
|
||||
;;
|
||||
;; This is a complete (but crafted and somewhat broken) zone file used
|
||||
;; in query tests.
|
||||
;;
|
||||
|
||||
example.com. 3600 IN SOA . . 0 0 0 0 0
|
||||
example.com. 3600 IN NS glue.delegation.example.com.
|
||||
example.com. 3600 IN NS noglue.example.com.
|
||||
example.com. 3600 IN NS example.net.
|
||||
example.com. 3600 IN DS 57855 5 1 B6DCD485719ADCA18E5F3D48A2331627FDD3 636B
|
||||
glue.delegation.example.com. 3600 IN A 192.0.2.153
|
||||
glue.delegation.example.com. 3600 IN AAAA 2001:db8::53
|
||||
noglue.example.com. 3600 IN A 192.0.2.53
|
||||
delegation.example.com. 3600 IN NS glue.delegation.example.com.
|
||||
delegation.example.com. 3600 IN NS noglue.example.com.
|
||||
delegation.example.com. 3600 IN NS cname.example.com.
|
||||
delegation.example.com. 3600 IN NS example.org.
|
||||
;; Borrowed from the RFC4035
|
||||
delegation.example.com. 3600 IN DS 57855 5 1 B6DCD485719ADCA18E5F3D48A2331627FDD3 636B
|
||||
mx.example.com. 3600 IN MX 10 www.example.com.
|
||||
mx.example.com. 3600 IN MX 20 mailer.example.org.
|
||||
mx.example.com. 3600 IN MX 30 mx.delegation.example.com.
|
||||
www.example.com. 3600 IN A 192.0.2.80
|
||||
cname.example.com. 3600 IN CNAME www.example.com.
|
||||
cnamenxdom.example.com. 3600 IN CNAME nxdomain.example.com.
|
||||
;; CNAME Leading out of zone
|
||||
cnameout.example.com. 3600 IN CNAME www.example.org.
|
||||
;; The DNAME to do tests against
|
||||
dname.example.com. 3600 IN DNAME somethinglong.dnametarget.example.com.
|
||||
;; Some data at the dname node (allowed by RFC 2672)
|
||||
dname.example.com. 3600 IN A 192.0.2.5
|
||||
;; The rest of data won't be referenced from the test cases.
|
||||
cnamemailer.example.com. 3600 IN CNAME www.example.com.
|
||||
cnamemx.example.com. 3600 IN MX 10 cnamemailer.example.com.
|
||||
mx.delegation.example.com. 3600 IN A 192.0.2.100
|
||||
;; Wildcards
|
||||
*.wild.example.com. 3600 IN A 192.0.2.7
|
||||
*.wild.example.com. 3600 IN NSEC www.example.com. A NSEC RRSIG
|
||||
*.cnamewild.example.com. 3600 IN CNAME www.example.org.
|
||||
*.cnamewild.example.com. 3600 IN NSEC delegation.example.com. CNAME NSEC RRSIG
|
||||
;; Wildcard_nxrrset
|
||||
*.uwild.example.com. 3600 IN A 192.0.2.9
|
||||
*.uwild.example.com. 3600 IN NSEC www.uwild.example.com. A NSEC RRSIG
|
||||
www.uwild.example.com. 3600 IN A 192.0.2.11
|
||||
www.uwild.example.com. 3600 IN NSEC *.wild.example.com. A NSEC RRSIG
|
||||
;; Wildcard empty
|
||||
b.*.t.example.com. 3600 IN A 192.0.2.13
|
||||
b.*.t.example.com. 3600 IN NSEC *.uwild.example.com. A NSEC RRSIG
|
||||
t.example.com. 3600 IN A 192.0.2.15
|
||||
t.example.com. 3600 IN NSEC b.*.t.example.com. A NSEC RRSIG
|
||||
;; Used in NXDOMAIN proof test. We are going to test some unusual case where
|
||||
;; the best possible wildcard is below the "next domain" of the NSEC RR that
|
||||
;; proves the NXDOMAIN, i.e.,
|
||||
;; mx.example.com. (exist)
|
||||
;; (.no.example.com. (qname, NXDOMAIN)
|
||||
;; ).no.example.com. (exist)
|
||||
;; *.no.example.com. (best possible wildcard, not exist)
|
||||
).no.example.com. 3600 IN AAAA 2001:db8::53
|
||||
;; NSEC records.
|
||||
example.com. 3600 IN NSEC cname.example.com. NS SOA NSEC RRSIG
|
||||
mx.example.com. 3600 IN NSEC ).no.example.com. MX NSEC RRSIG
|
||||
).no.example.com. 3600 IN NSEC nz.no.example.com. AAAA NSEC RRSIG
|
||||
;; We'll also test the case where a single NSEC proves both NXDOMAIN and the
|
||||
;; non existence of wildcard. The following records will be used for that
|
||||
;; test.
|
||||
;; ).no.example.com. (exist, whose NSEC proves everything)
|
||||
;; *.no.example.com. (best possible wildcard, not exist)
|
||||
;; nx.no.example.com. (NXDOMAIN)
|
||||
;; nz.no.example.com. (exist)
|
||||
nz.no.example.com. 3600 IN AAAA 2001:db8::5300
|
||||
nz.no.example.com. 3600 IN NSEC noglue.example.com. AAAA NSEC RRSIG
|
||||
noglue.example.com. 3600 IN NSEC nonsec.example.com. A
|
||||
|
||||
;; NSEC for the normal NXRRSET case
|
||||
www.example.com. 3600 IN NSEC example.com. A NSEC RRSIG
|
||||
|
||||
;; Authoritative data without NSEC
|
||||
nonsec.example.com. 3600 IN A 192.0.2.0
|
||||
|
||||
;; NSEC3 RRs. You may also need to add mapping to MockZoneFinder::hash_map_.
|
||||
0p9mhaveqvm6t7vbl5lop2u3t2rp3tom.example.com. 3600 IN NSEC3 1 1 12 aabbccdd 2t7b4g4vsa5smi47k61mv5bv1a22bojr NS SOA NSEC3PARAM RRSIG
|
||||
0p9mhaveqvm6t7vbl5lop2u3t2rp3tom.example.com. 3600 IN RRSIG NSEC3 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
q04jkcevqvmu85r014c7dkba38o0ji5r.example.com. 3600 IN NSEC3 1 1 12 aabbccdd r53bq7cc2uvmubfu5ocmm6pers9tk9en A RRSIG
|
||||
q04jkcevqvmu85r014c7dkba38o0ji5r.example.com. 3600 IN RRSIG NSEC3 5 3 3600 20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE
|
||||
|
||||
;; NSEC3 for wild.example.com (used in wildcard tests, will be added on
|
||||
;; demand not to confuse other tests)
|
||||
ji6neoaepv8b5o6k4ev33abha8ht9fgc.example.com. 3600 IN NSEC3 1 1 12 aabbccdd r53bq7cc2uvmubfu5ocmm6pers9tk9en
|
||||
|
||||
;; NSEC3 for cnamewild.example.com (used in wildcard tests, will be added on
|
||||
;; demand not to confuse other tests)
|
||||
k8udemvp1j2f7eg6jebps17vp3n8i58h.example.com. 3600 IN NSEC3 1 1 12 aabbccdd r53bq7cc2uvmubfu5ocmm6pers9tk9en
|
||||
|
||||
;; NSEC3 for *.uwild.example.com (will be added on demand not to confuse
|
||||
;; other tests)
|
||||
b4um86eghhds6nea196smvmlo4ors995.example.com. 3600 IN NSEC3 1 1 12 aabbccdd r53bq7cc2uvmubfu5ocmm6pers9tk9en A RRSIG
|
||||
;; NSEC3 for uwild.example.com. (will be added on demand)
|
||||
t644ebqk9bibcna874givr6joj62mlhv.example.com. 3600 IN NSEC3 1 1 12 aabbccdd r53bq7cc2uvmubfu5ocmm6pers9tk9en A RRSIG
|
||||
|
||||
;; (Secure) delegation data; Delegation with DS record
|
||||
signed-delegation.example.com. 3600 IN NS ns.example.net.
|
||||
signed-delegation.example.com. 3600 IN DS 12345 8 2 764501411DE58E8618945054A3F620B36202E115D015A7773F4B78E0F952CECA
|
||||
|
||||
;; (Secure) delegation data; Delegation without DS record (and both NSEC
|
||||
;; and NSEC3 denying its existence)
|
||||
unsigned-delegation.example.com. 3600 IN NS ns.example.net.
|
||||
unsigned-delegation.example.com. 3600 IN NSEC unsigned-delegation-optout.example.com. NS RRSIG NSEC
|
||||
;; This one will be added on demand
|
||||
q81r598950igr1eqvc60aedlq66425b5.example.com. 3600 IN NSEC3 1 1 12 aabbccdd 0p9mhaveqvm6t7vbl5lop2u3t2rp3tom NS RRSIG
|
||||
|
||||
;; Delegation without DS record, and no direct matching NSEC3 record
|
||||
unsigned-delegation-optout.example.com. 3600 IN NS ns.example.net.
|
||||
unsigned-delegation-optout.example.com. 3600 IN NSEC *.uwild.example.com. NS RRSIG NSEC
|
||||
|
||||
;; (Secure) delegation data; Delegation where the DS lookup will raise an
|
||||
;; exception.
|
||||
bad-delegation.example.com. 3600 IN NS ns.example.net.
|
||||
|
||||
;; Delegation from an unsigned parent. There's no DS, and there's no NSEC
|
||||
;; or NSEC3 that proves it.
|
||||
nosec-delegation.example.com. 3600 IN NS ns.nosec.example.net.
|
6
src/bin/auth/tests/testdata/example.zone.in
vendored
Normal file
@@ -0,0 +1,6 @@
|
||||
;;
|
||||
;; This is a complete (but crafted and somewhat broken) zone file used
|
||||
;; in query tests, excluding NSEC3 records.
|
||||
;;
|
||||
|
||||
$INCLUDE @abs_builddir@/example-base.zone
|
@@ -160,7 +160,7 @@
|
||||
<citerefentry><refentrytitle>b10-msgq</refentrytitle><manvolnum>8</manvolnum></citerefentry>
|
||||
daemon to use.
|
||||
The default is
|
||||
<filename>/usr/local/var/bind10-devel/msg_socket</filename>.
|
||||
<filename>/usr/local/var/bind10/msg_socket</filename>.
|
||||
<!-- @localstatedir@/@PACKAGE_NAME@/msg_socket -->
|
||||
</para>
|
||||
</listitem>
|
||||
|
@@ -154,7 +154,7 @@ The boss module received the given signal.
|
||||
% BIND10_RESTART_COMPONENT_SKIPPED Skipped restarting a component %1
|
||||
The boss module tried to restart a component after it failed (crashed)
|
||||
unexpectedly, but the boss then found that the component had been removed
|
||||
from its local configuration of components to run. This is an unusal
|
||||
from its local configuration of components to run. This is an unusual
|
||||
situation but can happen if the administrator removes the component from
|
||||
the configuration after the component's crash and before the restart time.
|
||||
The boss module simply skipped restarting that module, and the whole system
|
||||
@@ -262,7 +262,7 @@ indicated OS API function with given error.
|
||||
The boss forwards a request for a socket to the socket creator.
|
||||
|
||||
% BIND10_STARTED_CC started configuration/command session
|
||||
Debug message given when BIND 10 has successfull started the object that
|
||||
Debug message given when BIND 10 has successfully started the object that
|
||||
handles configuration and commands.
|
||||
|
||||
% BIND10_STARTED_PROCESS started %1
|
||||
|
@@ -136,7 +136,7 @@
|
||||
<refsect1>
|
||||
<title>FILES</title>
|
||||
<!-- TODO: fix path -->
|
||||
<para><filename>/usr/local/var/bind10-devel/b10-config.db</filename>
|
||||
<para><filename>/usr/local/var/bind10/b10-config.db</filename>
|
||||
— Configuration storage file.
|
||||
</para>
|
||||
</refsect1>
|
||||
|
@@ -190,7 +190,7 @@
|
||||
To update an expired certificate in BIND 10 that has been installed to
|
||||
/usr/local:
|
||||
<screen>
|
||||
$> cd /usr/local/etc/bind10-devel/
|
||||
$> cd /usr/local/etc/bind10/
|
||||
|
||||
$> b10-certgen
|
||||
cmdctl-certfile.pem failed to verify: certificate has expired
|
||||
|
@@ -147,21 +147,21 @@
|
||||
<varname>accounts_file</varname> defines the path to the
|
||||
user accounts database.
|
||||
The default is
|
||||
<filename>/usr/local/etc/bind10-devel/cmdctl-accounts.csv</filename>.
|
||||
<filename>/usr/local/etc/bind10/cmdctl-accounts.csv</filename>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
<varname>cert_file</varname> defines the path to the
|
||||
PEM certificate file.
|
||||
The default is
|
||||
<filename>/usr/local/etc/bind10-devel/cmdctl-certfile.pem</filename>.
|
||||
<filename>/usr/local/etc/bind10/cmdctl-certfile.pem</filename>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
<varname>key_file</varname> defines the path to the PEM private key
|
||||
file.
|
||||
The default is
|
||||
<filename>/usr/local/etc/bind10-devel/cmdctl-keyfile.pem</filename>.
|
||||
<filename>/usr/local/etc/bind10/cmdctl-keyfile.pem</filename>.
|
||||
</para>
|
||||
|
||||
<!-- TODO: formating -->
|
||||
@@ -187,17 +187,17 @@
|
||||
<!-- TODO: permissions -->
|
||||
<!-- TODO: what about multiple accounts? -->
|
||||
<!-- TODO: shouldn't the password file name say cmdctl in it? -->
|
||||
<para><filename>/usr/local/etc/bind10-devel/cmdctl-accounts.csv</filename>
|
||||
<para><filename>/usr/local/etc/bind10/cmdctl-accounts.csv</filename>
|
||||
— account database containing the name, hashed password,
|
||||
and the salt.
|
||||
</para>
|
||||
<!-- TODO: replace /usr/local -->
|
||||
<!-- TODO: permissions -->
|
||||
<!-- TODO: shouldn't have both in same file, will be configurable -->
|
||||
<para><filename>/usr/local/etc/bind10-devel/cmdctl-keyfile.pem</filename>
|
||||
<para><filename>/usr/local/etc/bind10/cmdctl-keyfile.pem</filename>
|
||||
— contains the Private key.
|
||||
</para>
|
||||
<para><filename>/usr/local/etc/bind10-devel/cmdctl-certfile.pem</filename>
|
||||
<para><filename>/usr/local/etc/bind10/cmdctl-certfile.pem</filename>
|
||||
— contains the Certificate.
|
||||
</para>
|
||||
</refsect1>
|
||||
|
@@ -53,7 +53,7 @@ inconsistent state, and it is advised to restore it from the backup that was
|
||||
created when b10-dbutil started.
|
||||
|
||||
% DBUTIL_EXECUTE Executing SQL statement: %1
|
||||
Debug message; the given SQL statement is executed
|
||||
Debug message; the given SQL statement is executed.
|
||||
|
||||
% DBUTIL_FILE Database file: %1
|
||||
The database file that is being checked.
|
||||
@@ -67,7 +67,7 @@ The given database statement failed to execute. The error is shown in the
|
||||
message.
|
||||
|
||||
% DBUTIL_TOO_MANY_ARGUMENTS too many arguments to the command, maximum of one expected
|
||||
There were too many command-line arguments to b10-dbutil
|
||||
There were too many command-line arguments to b10-dbutil.
|
||||
|
||||
% DBUTIL_UPGRADE_CANCELED upgrade canceled; database has not been changed
|
||||
The user aborted the upgrade, and b10-dbutil will now exit.
|
||||
@@ -95,7 +95,7 @@ again.
|
||||
|
||||
% DBUTIL_UPGRADE_PREPARATION_FAILED upgrade preparation failed: %1
|
||||
An unexpected error occurred while b10-dbutil was preparing to upgrade the
|
||||
database schema. The error is shown in the message
|
||||
database schema. The error is shown in the message.
|
||||
|
||||
% DBUTIL_UPGRADE_SUCCESFUL database upgrade successfully completed
|
||||
The database schema update was completed successfully.
|
||||
|
@@ -58,6 +58,7 @@ b10_dhcp4_CXXFLAGS = -Wno-unused-parameter
|
||||
endif
|
||||
|
||||
b10_dhcp4_LDADD = $(top_builddir)/src/lib/dhcp/libb10-dhcp++.la
|
||||
b10_dhcp4_LDADD += $(top_builddir)/src/lib/util/libb10-util.la
|
||||
b10_dhcp4_LDADD += $(top_builddir)/src/lib/dhcpsrv/libb10-dhcpsrv.la
|
||||
b10_dhcp4_LDADD += $(top_builddir)/src/lib/exceptions/libb10-exceptions.la
|
||||
b10_dhcp4_LDADD += $(top_builddir)/src/lib/asiolink/libb10-asiolink.la
|
||||
@@ -65,6 +66,5 @@ b10_dhcp4_LDADD += $(top_builddir)/src/lib/log/libb10-log.la
|
||||
b10_dhcp4_LDADD += $(top_builddir)/src/lib/config/libb10-cfgclient.la
|
||||
b10_dhcp4_LDADD += $(top_builddir)/src/lib/cc/libb10-cc.la
|
||||
|
||||
|
||||
b10_dhcp4dir = $(pkgdatadir)
|
||||
b10_dhcp4_DATA = dhcp4.spec
|
||||
|
@@ -13,9 +13,12 @@
|
||||
// PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
#include <config/ccsession.h>
|
||||
#include <dhcpsrv/cfgmgr.h>
|
||||
#include <dhcp4/config_parser.h>
|
||||
#include <dhcp4/dhcp4_log.h>
|
||||
#include <dhcp/libdhcp++.h>
|
||||
#include <dhcp/option_definition.h>
|
||||
#include <dhcpsrv/cfgmgr.h>
|
||||
#include <util/encode/hex.h>
|
||||
#include <boost/foreach.hpp>
|
||||
#include <boost/lexical_cast.hpp>
|
||||
#include <boost/algorithm/string.hpp>
|
||||
@@ -46,12 +49,20 @@ typedef std::map<std::string, ParserFactory*> FactoryMap;
|
||||
/// no subnet object created yet to store them.
|
||||
typedef std::vector<Pool4Ptr> PoolStorage;
|
||||
|
||||
/// @brief Collection of option descriptors. This container allows searching for
|
||||
/// options using the option code or persistency flag. This is useful when merging
|
||||
/// existing options with newly configured options.
|
||||
typedef Subnet::OptionContainer OptionStorage;
|
||||
|
||||
/// @brief Global uint32 parameters that will be used as defaults.
|
||||
Uint32Storage uint32_defaults;
|
||||
|
||||
/// @brief global string parameters that will be used as defaults.
|
||||
StringStorage string_defaults;
|
||||
|
||||
/// @brief Global storage for options that will be used as defaults.
|
||||
OptionStorage option_defaults;
|
||||
|
||||
/// @brief a dummy configuration parser
|
||||
///
|
||||
/// It is a debugging parser. It does not configure anything,
|
||||
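For illustration (not part of this commit): OptionStorage above is an alias for
Subnet::OptionContainer, a Boost multi-index container whose index 1 is keyed by
option code; that index is what the parsers in the next hunk use to find and
replace options sharing a code. A minimal sketch of that lookup, written as if
inside config_parser.cc's usual namespaces and assuming only the dhcpsrv types
already used in this diff (OptionContainerTypeIndex, OptionContainerTypeRange,
OptionDescriptor) plus the generic Option constructor shown further down:

    // Sketch only: replace any stored options that share a code with the new one.
    OptionStorage options;
    isc::dhcp::OptionPtr new_option(
        new isc::dhcp::Option(isc::dhcp::Option::V4, 53, std::vector<uint8_t>()));
    Subnet::OptionContainerTypeIndex& idx = options.get<1>();       // index by option code
    Subnet::OptionContainerTypeRange range = idx.equal_range(53);   // all options with code 53
    if (std::distance(range.first, range.second) > 0) {
        idx.erase(range.first, range.second);                       // drop the old values
    }
    options.push_back(Subnet::OptionDescriptor(new_option, false)); // add the replacement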
@@ -451,6 +462,344 @@ private:
|
||||
PoolStorage* pools_;
|
||||
};
|
||||
|
||||
/// @brief Parser for option data value.
|
||||
///
|
||||
/// This parser parses configuration entries that specify value of
|
||||
/// a single option. These entries include option name, option code
|
||||
/// and data carried by the option. If parsing is successful then an
|
||||
/// instance of an option is created and added to the storage provided
|
||||
/// by the calling class.
|
||||
///
|
||||
/// @todo This class parses and validates the option name. However it is
|
||||
/// not used anywhere until support for option spaces is implemented
|
||||
/// (see tickets #2319, #2314). When option spaces are implemented
|
||||
/// there will be a way to reference the particular option using
|
||||
/// its type (code) or option name.
|
||||
class OptionDataParser : public Dhcp4ConfigParser {
|
||||
public:
|
||||
|
||||
/// @brief Constructor.
|
||||
///
|
||||
/// Class constructor.
|
||||
OptionDataParser(const std::string&)
|
||||
: options_(NULL),
|
||||
// initialize option to NULL ptr
|
||||
option_descriptor_(false) { }
|
||||
|
||||
/// @brief Parses the single option data.
|
||||
///
|
||||
/// This method parses the data of a single option from the configuration.
|
||||
/// The option data includes option name, option code and data being
|
||||
/// carried by this option. Eventually it creates the instance of the
|
||||
/// option.
|
||||
///
|
||||
/// @warning setStorage must be called with valid storage pointer prior
|
||||
/// to calling this method.
|
||||
///
|
||||
/// @param option_data_entries collection of entries that define value
|
||||
/// for a particular option.
|
||||
/// @throw Dhcp4ConfigError if invalid parameter specified in
|
||||
/// the configuration.
|
||||
/// @throw isc::InvalidOperation if failed to set storage prior to
|
||||
/// calling build.
|
||||
/// @throw isc::BadValue if option data storage is invalid.
|
||||
virtual void build(ConstElementPtr option_data_entries) {
|
||||
if (options_ == NULL) {
|
||||
isc_throw(isc::InvalidOperation, "Parser logic error: storage must be set before "
|
||||
"parsing option data.");
|
||||
}
|
||||
BOOST_FOREACH(ConfigPair param, option_data_entries->mapValue()) {
|
||||
ParserPtr parser;
|
||||
if (param.first == "name") {
|
||||
boost::shared_ptr<StringParser>
|
||||
name_parser(dynamic_cast<StringParser*>(StringParser::Factory(param.first)));
|
||||
if (name_parser) {
|
||||
name_parser->setStorage(&string_values_);
|
||||
parser = name_parser;
|
||||
}
|
||||
} else if (param.first == "code") {
|
||||
boost::shared_ptr<Uint32Parser>
|
||||
code_parser(dynamic_cast<Uint32Parser*>(Uint32Parser::Factory(param.first)));
|
||||
if (code_parser) {
|
||||
code_parser->setStorage(&uint32_values_);
|
||||
parser = code_parser;
|
||||
}
|
||||
} else if (param.first == "data") {
|
||||
boost::shared_ptr<StringParser>
|
||||
value_parser(dynamic_cast<StringParser*>(StringParser::Factory(param.first)));
|
||||
if (value_parser) {
|
||||
value_parser->setStorage(&string_values_);
|
||||
parser = value_parser;
|
||||
}
|
||||
} else {
|
||||
isc_throw(Dhcp4ConfigError,
|
||||
"Parser error: option-data parameter not supported: "
|
||||
<< param.first);
|
||||
}
|
||||
parser->build(param.second);
|
||||
}
|
||||
// Try to create the option instance.
|
||||
createOption();
|
||||
}
|
||||
|
||||
/// @brief Commits option value.
|
||||
///
|
||||
/// This function adds a new option to the storage or replaces an existing option
|
||||
/// with the same code.
|
||||
///
|
||||
/// @throw isc::InvalidOperation if failed to set pointer to storage or failed
|
||||
/// to call build() prior to commit. If that happens data in the storage
|
||||
/// remain un-modified.
|
||||
virtual void commit() {
|
||||
if (options_ == NULL) {
|
||||
isc_throw(isc::InvalidOperation, "Parser logic error: storage must be set before "
|
||||
"commiting option data.");
|
||||
} else if (!option_descriptor_.option) {
|
||||
// Before we can commit the new option should be configured. If it is not
|
||||
// then somebody must have called commit() before build().
|
||||
isc_throw(isc::InvalidOperation, "Parser logic error: no option has been configured and"
|
||||
" thus there is nothing to commit. Has build() been called?");
|
||||
}
|
||||
uint16_t opt_type = option_descriptor_.option->getType();
|
||||
Subnet::OptionContainerTypeIndex& idx = options_->get<1>();
|
||||
// Try to find options with the particular option code in the main
|
||||
// storage. If found, remove these options because they will be
|
||||
// replaced with new one.
|
||||
Subnet::OptionContainerTypeRange range =
|
||||
idx.equal_range(opt_type);
|
||||
if (std::distance(range.first, range.second) > 0) {
|
||||
idx.erase(range.first, range.second);
|
||||
}
|
||||
// Append new option to the main storage.
|
||||
options_->push_back(option_descriptor_);
|
||||
}
|
||||
|
||||
/// @brief Set storage for the parser.
|
||||
///
|
||||
/// Sets storage for the parser. This storage points to the
|
||||
/// vector of options and is used by multiple instances of
|
||||
/// OptionDataParser. Each instance creates exactly one object
|
||||
/// of dhcp::Option or derived type and appends it to this
|
||||
/// storage.
|
||||
///
|
||||
/// @param storage pointer to the options storage
|
||||
void setStorage(OptionStorage* storage) {
|
||||
options_ = storage;
|
||||
}
|
||||
|
||||
private:
|
||||
|
||||
/// @brief Create option instance.
|
||||
///
|
||||
/// Creates an instance of an option and adds it to the provided
|
||||
/// options storage. If the option data parsed by \ref build function
|
||||
/// are invalid or insufficient this function emits an exception.
|
||||
///
|
||||
/// @warning this function does not check if options_ storage pointer
|
||||
/// is initialized, but this check is not needed here because it is done
|
||||
/// in the \ref build function.
|
||||
///
|
||||
/// @throw Dhcp4ConfigError if parameters provided in the configuration
|
||||
/// are invalid.
|
||||
void createOption() {
|
||||
// Option code is held in the uint32_t storage but is supposed to
|
||||
// be a uint16_t value. We need to check that the value in the configuration
|
||||
// does not exceed range of uint16_t and is not zero.
|
||||
uint32_t option_code = getUint32Param("code");
|
||||
if (option_code == 0) {
|
||||
isc_throw(Dhcp4ConfigError, "Parser error: value of 'code' must not"
|
||||
<< " be equal to zero. Option code '0' is reserved in"
|
||||
<< " DHCPv4.");
|
||||
} else if (option_code > std::numeric_limits<uint16_t>::max()) {
|
||||
isc_throw(Dhcp4ConfigError, "Parser error: value of 'code' must not"
|
||||
<< " exceed " << std::numeric_limits<uint16_t>::max());
|
||||
}
|
||||
// Check that the option name has been specified, is non-empty and does not
|
||||
// contain spaces.
|
||||
// @todo possibly some more restrictions apply here?
|
||||
std::string option_name = getStringParam("name");
|
||||
if (option_name.empty()) {
|
||||
isc_throw(Dhcp4ConfigError, "Parser error: option name must not be"
|
||||
<< " empty");
|
||||
} else if (option_name.find(" ") != std::string::npos) {
|
||||
isc_throw(Dhcp4ConfigError, "Parser error: option name must not contain"
|
||||
<< " spaces");
|
||||
}
|
||||
|
||||
// Get option data from the configuration database ('data' field).
|
||||
// Option data is specified by the user as case insensitive string
|
||||
// of hexadecimal digits for each option.
|
||||
std::string option_data = getStringParam("data");
|
||||
// Transform string of hexadecimal digits into binary format.
|
||||
std::vector<uint8_t> binary;
|
||||
try {
|
||||
util::encode::decodeHex(option_data, binary);
|
||||
} catch (...) {
|
||||
isc_throw(Dhcp4ConfigError, "Parser error: option data is not a valid"
|
||||
<< " string of hexadecimal digits: " << option_data);
|
||||
}
|
||||
// Get all existing DHCPv4 option definitions. The one that matches
|
||||
// our option will be picked and used to create it.
|
||||
OptionDefContainer option_defs = LibDHCP::getOptionDefs(Option::V4);
|
||||
// Get search index #1. It allows searching for options definitions
|
||||
// using option type value.
|
||||
const OptionDefContainerTypeIndex& idx = option_defs.get<1>();
|
||||
// Get all option definitions matching option code we want to create.
|
||||
const OptionDefContainerTypeRange& range = idx.equal_range(option_code);
|
||||
size_t num_defs = std::distance(range.first, range.second);
|
||||
OptionPtr option;
|
||||
// Currently we do not allow duplicated definitions and if there are
|
||||
// any duplicates we issue internal server error.
|
||||
if (num_defs > 1) {
|
||||
isc_throw(Dhcp4ConfigError, "Internal error: currently it is not"
|
||||
<< " supported to initialize multiple option definitions"
|
||||
<< " for the same option code. This will be supported once"
|
||||
<< " there option spaces are implemented.");
|
||||
} else if (num_defs == 0) {
|
||||
// @todo We have a limited set of option definitions initialized at the moment.
|
||||
// In the future we want to initialize option definitions for all options.
|
||||
// Consequently an error will be issued if an option definition does not exist
|
||||
// for a particular option code. For now it is ok to create generic option
|
||||
// if definition does not exist.
|
||||
OptionPtr option(new Option(Option::V4, static_cast<uint16_t>(option_code),
|
||||
binary));
|
||||
// The created option is stored in option_descriptor_ class member until the
|
||||
// commit stage when it is inserted into the main storage. If an option with the
|
||||
// same code exists in main storage already the old option is replaced.
|
||||
option_descriptor_.option = option;
|
||||
option_descriptor_.persistent = false;
|
||||
} else {
|
||||
// We have exactly one option definition for the particular option code
|
||||
// use it to create the option instance.
|
||||
const OptionDefinitionPtr& def = *(range.first);
|
||||
try {
|
||||
OptionPtr option = def->optionFactory(Option::V4, option_code, binary);
|
||||
Subnet::OptionDescriptor desc(option, false);
|
||||
option_descriptor_.option = option;
|
||||
option_descriptor_.persistent = false;
|
||||
} catch (const isc::Exception& ex) {
|
||||
isc_throw(Dhcp4ConfigError, "Parser error: option data does not match"
|
||||
<< " option definition (code " << option_code << "): "
|
||||
<< ex.what());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// @brief Get a parameter from the strings storage.
|
||||
///
|
||||
/// @param param_id parameter identifier.
|
||||
/// @throw Dhcp4ConfigError if parameter has not been found.
|
||||
std::string getStringParam(const std::string& param_id) const {
|
||||
StringStorage::const_iterator param = string_values_.find(param_id);
|
||||
if (param == string_values_.end()) {
|
||||
isc_throw(Dhcp4ConfigError, "Parser error: option-data parameter"
|
||||
<< " '" << param_id << "' not specified");
|
||||
}
|
||||
return (param->second);
|
||||
}
|
||||
|
||||
/// @brief Get a parameter from the uint32 values storage.
|
||||
///
|
||||
/// @param param_id parameter identifier.
|
||||
/// @throw Dhcp4ConfigError if parameter has not been found.
|
||||
uint32_t getUint32Param(const std::string& param_id) const {
|
||||
Uint32Storage::const_iterator param = uint32_values_.find(param_id);
|
||||
if (param == uint32_values_.end()) {
|
||||
isc_throw(Dhcp4ConfigError, "Parser error: option-data parameter"
|
||||
<< " '" << param_id << "' not specified");
|
||||
}
|
||||
return (param->second);
|
||||
}
|
||||
|
||||
/// Storage for uint32 values (e.g. option code).
|
||||
Uint32Storage uint32_values_;
|
||||
/// Storage for string values (e.g. option name or data).
|
||||
StringStorage string_values_;
|
||||
/// Pointer to options storage. This storage is provided by
|
||||
/// the calling class and is shared by all OptionDataParser objects.
|
||||
OptionStorage* options_;
|
||||
/// Option descriptor holds newly configured option.
|
||||
Subnet::OptionDescriptor option_descriptor_;
|
||||
};
|
||||
|
||||
/// @brief Parser for option data values within a subnet.
|
||||
///
|
||||
/// This parser iterates over all entries that define options
|
||||
/// data for a particular subnet and creates a collection of options.
|
||||
/// If parsing is successful, all these options are added to the Subnet
|
||||
/// object.
|
||||
class OptionDataListParser : public Dhcp4ConfigParser {
|
||||
public:
|
||||
|
||||
/// @brief Constructor.
|
||||
///
|
||||
/// Unless otherwise specified, parsed options will be stored in
|
||||
/// a global option container (option_defaults). That storage location
|
||||
/// is overridden on a subnet basis.
|
||||
OptionDataListParser(const std::string&)
|
||||
: options_(&option_defaults), local_options_() { }
|
||||
|
||||
/// @brief Parses entries that define options' data for a subnet.
|
||||
///
|
||||
/// This method iterates over all entries that define option data
|
||||
/// for options within a single subnet and creates options' instances.
|
||||
///
|
||||
/// @param option_data_list pointer to a list of options' data sets.
|
||||
/// @throw Dhcp4ConfigError if option parsing failed.
|
||||
void build(ConstElementPtr option_data_list) {
|
||||
BOOST_FOREACH(ConstElementPtr option_value, option_data_list->listValue()) {
|
||||
boost::shared_ptr<OptionDataParser> parser(new OptionDataParser("option-data"));
|
||||
// options_ member will hold instances of all options thus
|
||||
// each OptionDataParser takes it as a storage.
|
||||
parser->setStorage(&local_options_);
|
||||
// Build the instance of a single option.
|
||||
parser->build(option_value);
|
||||
// Store a parser as it will be used to commit.
|
||||
parsers_.push_back(parser);
|
||||
}
|
||||
}
|
||||
|
||||
/// @brief Set storage for option instances.
|
||||
///
|
||||
/// @param storage pointer to options storage.
|
||||
void setStorage(OptionStorage* storage) {
|
||||
options_ = storage;
|
||||
}
|
||||
|
||||
|
||||
/// @brief Commit all option values.
|
||||
///
|
||||
/// This function invokes commit for all option values.
|
||||
void commit() {
|
||||
BOOST_FOREACH(ParserPtr parser, parsers_) {
|
||||
parser->commit();
|
||||
}
|
||||
// Parsing was successful and we have all configured
|
||||
// options in local storage. We can now replace old values
|
||||
// with new values.
|
||||
std::swap(local_options_, *options_);
|
||||
}
|
||||
|
||||
/// @brief Create OptionDataListParser object
|
||||
///
|
||||
/// @param param_name param name.
|
||||
///
|
||||
/// @return DhcpConfigParser object.
|
||||
static Dhcp4ConfigParser* Factory(const std::string& param_name) {
|
||||
return (new OptionDataListParser(param_name));
|
||||
}
|
||||
|
||||
/// Intermediate option storage. This storage is used by
|
||||
/// lower level parsers to add new options. Values held
|
||||
/// in this storage are assigned to main storage (options_)
|
||||
/// if overall parsing was successful.
|
||||
OptionStorage local_options_;
|
||||
/// Pointer to options instances storage.
|
||||
OptionStorage* options_;
|
||||
/// Collection of parsers;
|
||||
ParserCollection parsers_;
|
||||
};
|
||||
|
||||
/// @brief this class parses a single subnet
|
||||
///
|
||||
/// This class parses the whole subnet definition. It creates parsers
|
||||
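For illustration (not part of this commit): the "data" handling path described
in OptionDataParser::createOption() above boils down to decoding the
user-supplied hexadecimal string and, when no option definition matches the
code, wrapping the raw bytes in a generic Option. A self-contained sketch of
just that step, using only calls already present in this diff
(util::encode::decodeHex() and the generic Option constructor); option code 56
and the payload are the values the new unit tests use, and the space from the
test data ("AB CDEF0105") is dropped here to stay on the safe side of what
decodeHex() accepts:

    #include <dhcp/option.h>
    #include <util/encode/hex.h>
    #include <stdint.h>
    #include <vector>

    // Sketch: decode configured hex data and wrap it in a generic DHCPv4 option,
    // roughly what createOption() does when no option definition is found.
    isc::dhcp::OptionPtr
    makeGenericOption() {
        std::vector<uint8_t> binary;
        isc::util::encode::decodeHex("ABCDEF0105", binary);
        return (isc::dhcp::OptionPtr(
            new isc::dhcp::Option(isc::dhcp::Option::V4,
                                  static_cast<uint16_t>(56), binary)));
    }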
@@ -470,35 +819,31 @@ public:
|
||||
void build(ConstElementPtr subnet) {
|
||||
|
||||
BOOST_FOREACH(ConfigPair param, subnet->mapValue()) {
|
||||
|
||||
ParserPtr parser(createSubnet4ConfigParser(param.first));
|
||||
// The actual type of the parser is unknown here. We have to discover
|
||||
// the parser type here to invoke the corresponding setStorage function
|
||||
// on it. We discover parser type by trying to cast the parser to various
|
||||
// parser types and checking which one was successful. For this one
|
||||
// a setStorage and build methods are invoked.
|
||||
|
||||
// if this is an Uint32 parser, tell it to store the values
|
||||
// in values_, rather than in global storage
|
||||
boost::shared_ptr<Uint32Parser> uint_parser =
|
||||
boost::dynamic_pointer_cast<Uint32Parser>(parser);
|
||||
if (uint_parser) {
|
||||
uint_parser->setStorage(&uint32_values_);
|
||||
} else {
|
||||
|
||||
boost::shared_ptr<StringParser> string_parser =
|
||||
boost::dynamic_pointer_cast<StringParser>(parser);
|
||||
if (string_parser) {
|
||||
string_parser->setStorage(&string_values_);
|
||||
} else {
|
||||
|
||||
boost::shared_ptr<PoolParser> pool_parser =
|
||||
boost::dynamic_pointer_cast<PoolParser>(parser);
|
||||
if (pool_parser) {
|
||||
pool_parser->setStorage(&pools_);
|
||||
}
|
||||
}
|
||||
// Try uint32 type parser.
|
||||
if (!buildParser<Uint32Parser, Uint32Storage >(parser, uint32_values_,
|
||||
param.second) &&
|
||||
// Try string type parser.
|
||||
!buildParser<StringParser, StringStorage >(parser, string_values_,
|
||||
param.second) &&
|
||||
// Try pool parser.
|
||||
!buildParser<PoolParser, PoolStorage >(parser, pools_,
|
||||
param.second) &&
|
||||
// Try option data parser.
|
||||
!buildParser<OptionDataListParser, OptionStorage >(parser, options_,
|
||||
param.second)) {
|
||||
// Appropriate parsers are created in the createSubnet4ConfigParser
|
||||
// and they should be limited to those that we check here for. Thus,
|
||||
// if we fail to find a matching parser here it is a programming error.
|
||||
isc_throw(Dhcp4ConfigError, "failed to find suitable parser");
|
||||
}
|
||||
|
||||
parser->build(param.second);
|
||||
parsers_.push_back(parser);
|
||||
}
|
||||
|
||||
// Ok, we now have subnet parsed
|
||||
}
|
||||
|
||||
@@ -510,6 +855,10 @@ public:
|
||||
/// objects. Subnet4 are then added to DHCP CfgMgr.
|
||||
/// @throw Dhcp4ConfigError if there are any issues encountered during commit
|
||||
void commit() {
|
||||
// Invoke commit on all sub-data parsers.
|
||||
BOOST_FOREACH(ParserPtr parser, parsers_) {
|
||||
parser->commit();
|
||||
}
|
||||
|
||||
StringStorage::const_iterator it = string_values_.find("subnet");
|
||||
if (it == string_values_.end()) {
|
||||
@@ -545,11 +894,79 @@ public:
|
||||
subnet->addPool4(*it);
|
||||
}
|
||||
|
||||
const Subnet::OptionContainer& options = subnet->getOptions();
|
||||
const Subnet::OptionContainerTypeIndex& idx = options.get<1>();
|
||||
|
||||
// Add subnet specific options.
|
||||
BOOST_FOREACH(Subnet::OptionDescriptor desc, options_) {
|
||||
Subnet::OptionContainerTypeRange range = idx.equal_range(desc.option->getType());
|
||||
if (std::distance(range.first, range.second) > 0) {
|
||||
LOG_WARN(dhcp4_logger, DHCP4_CONFIG_OPTION_DUPLICATE)
|
||||
.arg(desc.option->getType()).arg(addr.toText());
|
||||
}
|
||||
subnet->addOption(desc.option);
|
||||
}
|
||||
|
||||
// Check all global options and add them to the subnet object if
|
||||
// they have been configured in the global scope. If they have been
|
||||
// configured in the subnet scope we don't add global option because
|
||||
// the one configured in the subnet scope always takes precedence.
|
||||
BOOST_FOREACH(Subnet::OptionDescriptor desc, option_defaults) {
|
||||
// Get all options specified locally in the subnet and having
|
||||
// code equal to global option's code.
|
||||
Subnet::OptionContainerTypeRange range = idx.equal_range(desc.option->getType());
|
||||
// @todo: In the future we will be searching for options using either
|
||||
// an option code or namespace. Currently we have only the option
|
||||
// code available so if there is at least one option found with the
|
||||
// specific code we don't add the globally configured option.
|
||||
// @todo with this code the first globally configured option
|
||||
// with the given code will be added to a subnet. We may
|
||||
// want to issue a warning about dropping the configuration of
|
||||
// a global option if one already exists.
|
||||
if (std::distance(range.first, range.second) == 0) {
|
||||
subnet->addOption(desc.option);
|
||||
}
|
||||
}
|
||||
|
||||
CfgMgr::instance().addSubnet4(subnet);
|
||||
}
|
||||
|
||||
private:
|
||||
|
||||
/// @brief Set storage for a parser and invoke build.
|
||||
///
|
||||
/// This helper method casts the provided parser pointer to the specified
|
||||
/// type. If the cast is successful it sets the corresponding storage for
|
||||
/// this parser, invokes build on it and saves the parser.
|
||||
///
|
||||
/// @tparam T parser type to which parser argument should be cast.
|
||||
/// @tparam Y storage type for the specified parser type.
|
||||
/// @param parser parser on which build must be invoked.
|
||||
/// @param storage reference to a storage that will be set for a parser.
|
||||
/// @param subnet subnet element read from the configuration and being parsed.
|
||||
/// @return true if parser pointer was successfully cast to specialized
|
||||
/// parser type provided as Y.
|
||||
template<typename T, typename Y>
|
||||
bool buildParser(const ParserPtr& parser, Y& storage, const ConstElementPtr& subnet) {
|
||||
// We need to cast to T in order to set storage for the parser.
|
||||
boost::shared_ptr<T> cast_parser = boost::dynamic_pointer_cast<T>(parser);
|
||||
// It is common that this cast is not successful because we try to cast to all
|
||||
// supported parser types as we don't know the type of a parser in advance.
|
||||
if (cast_parser) {
|
||||
// Cast successful, so we go ahead with setting storage and the actual parse.
|
||||
cast_parser->setStorage(&storage);
|
||||
parser->build(subnet);
|
||||
parsers_.push_back(parser);
|
||||
// We indicate that the cast was successful so that the calling function
|
||||
// may skip attempts to cast to other parser types and proceed to
|
||||
// next element.
|
||||
return (true);
|
||||
}
|
||||
// It was not successful. Indicate that another parser type
|
||||
// should be tried.
|
||||
return (false);
|
||||
}
|
||||
|
||||
/// @brief creates parsers for entries in subnet definition
|
||||
///
|
||||
/// @todo Add subnet-specific things here (e.g. subnet-specific options)
|
||||
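For illustration (not part of this commit): buildParser<T, Y>() above relies on
boost::dynamic_pointer_cast to discover the concrete parser type at run time,
the same trick the old hand-written chain of casts used. A toy, stand-alone
sketch of that dispatch pattern; the types below are invented for the example
and are not the parsers from config_parser.cc:

    #include <boost/shared_ptr.hpp>
    #include <string>

    struct BaseParser { virtual ~BaseParser() {} };
    struct NumberParser : BaseParser {
        int* storage_;
        void setStorage(int* s) { storage_ = s; }
    };
    struct TextParser : BaseParser {
        std::string* storage_;
        void setStorage(std::string* s) { storage_ = s; }
    };

    // Cast to the candidate type; on success hand it its storage and report
    // the match so the caller can stop trying other types.
    template<typename T, typename Y>
    bool trySetStorage(const boost::shared_ptr<BaseParser>& parser, Y& storage) {
        boost::shared_ptr<T> cast = boost::dynamic_pointer_cast<T>(parser);
        if (cast) {
            cast->setStorage(&storage);
            return (true);
        }
        return (false);
    }

    int main() {
        boost::shared_ptr<BaseParser> parser(new NumberParser());
        int numbers = 0;
        std::string texts;
        // Mirrors the chained calls in Subnet4ConfigParser::build().
        if (!trySetStorage<NumberParser>(parser, numbers) &&
            !trySetStorage<TextParser>(parser, texts)) {
            return (1);  // unknown parser type
        }
        return (0);
    }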
@@ -565,6 +982,7 @@ private:
|
||||
factories["rebind-timer"] = Uint32Parser::Factory;
|
||||
factories["subnet"] = StringParser::Factory;
|
||||
factories["pool"] = PoolParser::Factory;
|
||||
factories["option-data"] = OptionDataListParser::Factory;
|
||||
|
||||
FactoryMap::iterator f = factories.find(config_id);
|
||||
if (f == factories.end()) {
|
||||
@@ -620,6 +1038,9 @@ private:
|
||||
/// storage for pools belonging to this subnet
|
||||
PoolStorage pools_;
|
||||
|
||||
/// storage for options belonging to this subnet
|
||||
OptionStorage options_;
|
||||
|
||||
/// parsers are stored here
|
||||
ParserCollection parsers_;
|
||||
};
|
||||
@@ -650,7 +1071,6 @@ public:
|
||||
// used: Subnet4ConfigParser
|
||||
|
||||
BOOST_FOREACH(ConstElementPtr subnet, subnets_list->listValue()) {
|
||||
|
||||
ParserPtr parser(new Subnet4ConfigParser("subnet"));
|
||||
parser->build(subnet);
|
||||
subnets_.push_back(parser);
|
||||
@@ -702,6 +1122,7 @@ Dhcp4ConfigParser* createGlobalDhcp4ConfigParser(const std::string& config_id) {
|
||||
factories["rebind-timer"] = Uint32Parser::Factory;
|
||||
factories["interface"] = InterfaceListConfigParser::Factory;
|
||||
factories["subnet4"] = Subnets4ListConfigParser::Factory;
|
||||
factories["option-data"] = OptionDataListParser::Factory;
|
||||
factories["version"] = StringParser::Factory;
|
||||
|
||||
FactoryMap::iterator f = factories.find(config_id);
|
||||
@@ -739,7 +1160,7 @@ configureDhcp4Server(Dhcpv4Srv& , ConstElementPtr config_set) {
|
||||
}
|
||||
} catch (const isc::Exception& ex) {
|
||||
ConstElementPtr answer = isc::config::createAnswer(1,
|
||||
string("Configuration parsing failed:") + ex.what());
|
||||
string("Configuration parsing failed: ") + ex.what());
|
||||
return (answer);
|
||||
} catch (...) {
|
||||
// for things like bad_cast in boost::lexical_cast
|
||||
@@ -754,7 +1175,7 @@ configureDhcp4Server(Dhcpv4Srv& , ConstElementPtr config_set) {
|
||||
}
|
||||
catch (const isc::Exception& ex) {
|
||||
ConstElementPtr answer = isc::config::createAnswer(2,
|
||||
string("Configuration commit failed:") + ex.what());
|
||||
string("Configuration commit failed: ") + ex.what());
|
||||
return (answer);
|
||||
} catch (...) {
|
||||
// for things like bad_cast in boost::lexical_cast
|
||||
|
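For illustration (not part of this commit): the catch blocks above report
failures through isc::config::createAnswer(code, message), and the peer
recovers the code with parseAnswer(), exactly as the new unit tests do. A small
sketch using only that pair of calls; the printed layout (a "result" list of
code and message) is the usual BIND 10 answer format, stated here as an
assumption rather than a guarantee:

    #include <config/ccsession.h>
    #include <cc/data.h>
    #include <iostream>
    #include <string>

    int main() {
        using namespace isc::config;
        using namespace isc::data;

        // Error answer as produced by the configuration error paths above.
        ConstElementPtr answer =
            createAnswer(1, std::string("Configuration parsing failed: example"));
        std::cout << answer->str() << std::endl;  // expected: {"result": [1, "..."]}

        // Receiving side: recover the return code and the comment element.
        int rcode = 0;
        ConstElementPtr comment = parseAnswer(rcode, answer);
        std::cout << rcode << std::endl;          // 1
        return (0);
    }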
@@ -14,6 +14,7 @@
|
||||
|
||||
#include <exceptions/exceptions.h>
|
||||
#include <cc/data.h>
|
||||
#include <stdint.h>
|
||||
#include <string>
|
||||
|
||||
#ifndef DHCP4_CONFIG_PARSER_H
|
||||
|
@@ -34,6 +34,37 @@
|
||||
"item_default": 4000
|
||||
},
|
||||
|
||||
{ "item_name": "option-data",
|
||||
"item_type": "list",
|
||||
"item_optional": false,
|
||||
"item_default": [],
|
||||
"list_item_spec":
|
||||
{
|
||||
"item_name": "single-option-data",
|
||||
"item_type": "map",
|
||||
"item_optional": false,
|
||||
"item_default": {},
|
||||
"map_item_spec": [
|
||||
{
|
||||
"item_name": "name",
|
||||
"item_type": "string",
|
||||
"item_optional": false,
|
||||
"item_default": ""
|
||||
},
|
||||
|
||||
{ "item_name": "code",
|
||||
"item_type": "integer",
|
||||
"item_optional": false,
|
||||
"item_default": 0
|
||||
},
|
||||
{ "item_name": "data",
|
||||
"item_type": "string",
|
||||
"item_optional": false,
|
||||
"item_default": ""
|
||||
} ]
|
||||
}
|
||||
},
|
||||
|
||||
{ "item_name": "subnet4",
|
||||
"item_type": "list",
|
||||
"item_optional": false,
|
||||
@@ -80,9 +111,40 @@
|
||||
"item_optional": false,
|
||||
"item_default": ""
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
|
||||
{ "item_name": "option-data",
|
||||
"item_type": "list",
|
||||
"item_optional": false,
|
||||
"item_default": [],
|
||||
"list_item_spec":
|
||||
{
|
||||
"item_name": "single-option-data",
|
||||
"item_type": "map",
|
||||
"item_optional": false,
|
||||
"item_default": {},
|
||||
"map_item_spec": [
|
||||
{
|
||||
"item_name": "name",
|
||||
"item_type": "string",
|
||||
"item_optional": false,
|
||||
"item_default": ""
|
||||
},
|
||||
{
|
||||
"item_name": "code",
|
||||
"item_type": "integer",
|
||||
"item_optional": false,
|
||||
"item_default": 0
|
||||
},
|
||||
{
|
||||
"item_name": "data",
|
||||
"item_type": "string",
|
||||
"item_optional": false,
|
||||
"item_default": ""
|
||||
} ]
|
||||
}
|
||||
} ]
|
||||
}
|
||||
}
|
||||
],
|
||||
"commands": [
|
||||
|
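For illustration (not part of this commit): a configuration fragment matching
the new "option-data" spec entries above carries one map per option with
exactly the three items defined there; the hexadecimal "data" string is how an
option payload is expressed for now. The values below are the illustrative ones
used by the new unit tests:

    "option-data": [ {
        "name": "option_foo",
        "code": 56,
        "data": "AB CDEF0105"
    } ]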
@@ -26,29 +26,34 @@ to establish a session with the BIND 10 control channel.
|
||||
A debug message listing the command (and possible arguments) received
|
||||
from the BIND 10 control system by the IPv4 DHCP server.
|
||||
|
||||
% DHCP4_CONFIG_COMPLETE DHCPv4 server has completed configuration: %1
|
||||
This is an informational message announcing the successful processing of a
|
||||
new configuration. It is output during server startup, and when an updated
|
||||
configuration is committed by the administrator. Additional information
|
||||
may be provided.
|
||||
|
||||
% DHCP4_CONFIG_LOAD_FAIL failed to load configuration: %1
|
||||
This critical error message indicates that the initial DHCPv4
|
||||
configuration has failed. The server will start, but nothing will be
|
||||
served until the configuration has been corrected.
|
||||
|
||||
% DHCP4_CONFIG_UPDATE updated configuration received: %1
|
||||
A debug message indicating that the IPv4 DHCP server has received an
|
||||
updated configuration from the BIND 10 configuration system.
|
||||
% DHCP4_CONFIG_NEW_SUBNET A new subnet has been added to configuration: %1
|
||||
This is an informational message reporting that the configuration has
|
||||
been extended to include the specified IPv4 subnet.
|
||||
|
||||
% DHCP4_CONFIG_START DHCPv4 server is processing the following configuration: %1
|
||||
This is a debug message that is issued every time the server receives a
|
||||
configuration. That happens at start up and also when a server configuration
|
||||
change is committed by the administrator.
|
||||
|
||||
% DHCP4_CONFIG_NEW_SUBNET A new subnet has been added to configuration: %1
|
||||
This is an informational message reporting that the configuration has
|
||||
been extended to include the specified IPv4 subnet.
|
||||
% DHCP4_CONFIG_UPDATE updated configuration received: %1
|
||||
A debug message indicating that the IPv4 DHCP server has received an
|
||||
updated configuration from the BIND 10 configuration system.
|
||||
|
||||
% DHCP4_CONFIG_COMPLETE DHCPv4 server has completed configuration: %1
|
||||
This is an informational message announcing the successful processing of a
|
||||
new configuration. It is output during server startup, and when an updated
|
||||
configuration is committed by the administrator. Additional information
|
||||
may be provided.
|
||||
% DHCP4_CONFIG_OPTION_DUPLICATE multiple options with the code: %1 added to the subnet: %2
|
||||
This warning message is issued on an attempt to configure multiple options with the
|
||||
same option code for the particular subnet. Adding multiple options is uncommon
|
||||
for DHCPv4, but it is not prohibited.
|
||||
|
||||
% DHCP4_NOT_RUNNING IPv4 DHCP server is not running
|
||||
A warning message is issued when an attempt is made to shut down the
|
||||
@@ -70,7 +75,7 @@ may well be a valid DHCP packet, just a type not expected by the server
|
||||
|
||||
% DHCP4_PACKET_RECEIVE_FAIL error on attempt to receive packet: %1
|
||||
The IPv4 DHCP server tried to receive a packet but an error
|
||||
occured during this attempt. The reason for the error is included in
|
||||
occurred during this attempt. The reason for the error is included in
|
||||
the message.
|
||||
|
||||
% DHCP4_PACKET_SEND_FAIL failed to send DHCPv4 packet: %1
|
||||
|
@@ -66,13 +66,13 @@ dhcp4_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES)
|
||||
dhcp4_unittests_LDFLAGS = $(AM_LDFLAGS) $(GTEST_LDFLAGS)
|
||||
dhcp4_unittests_LDADD = $(GTEST_LDADD)
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/asiolink/libb10-asiolink.la
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/cc/libb10-cc.la
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/config/libb10-cfgclient.la
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/dhcp/libb10-dhcp++.la
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/dhcpsrv/libb10-dhcpsrv.la
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/exceptions/libb10-exceptions.la
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/log/libb10-log.la
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/asiolink/libb10-asiolink.la
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/config/libb10-cfgclient.la
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/cc/libb10-cc.la
|
||||
dhcp4_unittests_LDADD += $(top_builddir)/src/lib/util/libb10-util.la
|
||||
endif
|
||||
|
||||
noinst_PROGRAMS = $(TESTS)
|
||||
|
@@ -22,6 +22,7 @@
|
||||
#include <config/ccsession.h>
|
||||
#include <dhcpsrv/subnet.h>
|
||||
#include <dhcpsrv/cfgmgr.h>
|
||||
#include <boost/foreach.hpp>
|
||||
#include <iostream>
|
||||
#include <fstream>
|
||||
#include <sstream>
|
||||
@@ -73,9 +74,188 @@ public:
|
||||
}
|
||||
|
||||
~Dhcp4ParserTest() {
|
||||
resetConfiguration();
|
||||
delete srv_;
|
||||
};
|
||||
|
||||
/// @brief Create the simple configuration with single option.
|
||||
///
|
||||
/// This function allows to set one of the parameters that configure
|
||||
/// option value. These parameters are: "name", "code" and "data".
|
||||
///
|
||||
/// @param param_value string holding the option parameter value to be
|
||||
/// injected into the configuration string.
|
||||
/// @param parameter name of the parameter to be configured with
|
||||
/// param value.
|
||||
/// @return configuration string containing custom values of parameters
|
||||
/// describing an option.
|
||||
std::string createConfigWithOption(const std::string& param_value,
|
||||
const std::string& parameter) {
|
||||
std::map<std::string, std::string> params;
|
||||
if (parameter == "name") {
|
||||
params["name"] = param_value;
|
||||
params["code"] = "56";
|
||||
params["data"] = "AB CDEF0105";
|
||||
} else if (parameter == "code") {
|
||||
params["name"] = "option_foo";
|
||||
params["code"] = param_value;
|
||||
params["data"] = "AB CDEF0105";
|
||||
} else if (parameter == "data") {
|
||||
params["name"] = "option_foo";
|
||||
params["code"] = "56";
|
||||
params["data"] = param_value;
|
||||
}
|
||||
return (createConfigWithOption(params));
|
||||
}
|
||||
|
||||
/// @brief Create simple configuration with single option.
|
||||
///
|
||||
/// This function creates a configuration for a single option with
|
||||
/// custom values for all parameters that describe the option.
|
||||
///
|
||||
/// @param params map holding parameters and their values.
|
||||
/// @return configuration string containing custom values of parameters
|
||||
/// describing an option.
|
||||
std::string createConfigWithOption(const std::map<std::string, std::string>& params) {
|
||||
std::ostringstream stream;
|
||||
stream << "{ \"interface\": [ \"all\" ],"
|
||||
"\"rebind-timer\": 2000, "
|
||||
"\"renew-timer\": 1000, "
|
||||
"\"subnet4\": [ { "
|
||||
" \"pool\": [ \"192.0.2.1 - 192.0.2.100\" ],"
|
||||
" \"subnet\": \"192.0.2.0/24\", "
|
||||
" \"option-data\": [ {";
|
||||
bool first = true;
|
||||
typedef std::pair<std::string, std::string> ParamPair;
|
||||
BOOST_FOREACH(ParamPair param, params) {
|
||||
if (!first) {
|
||||
stream << ", ";
|
||||
} else {
|
||||
// cppcheck-suppress unreadVariable
|
||||
first = false;
|
||||
}
|
||||
if (param.first == "name") {
|
||||
stream << "\"name\": \"" << param.second << "\"";
|
||||
} else if (param.first == "code") {
|
||||
stream << "\"code\": " << param.second << "";
|
||||
} else if (param.first == "data") {
|
||||
stream << "\"data\": \"" << param.second << "\"";
|
||||
}
|
||||
}
|
||||
stream <<
|
||||
" } ]"
|
||||
" } ],"
|
||||
"\"valid-lifetime\": 4000 }";
|
||||
return (stream.str());
|
||||
}
|
||||
|
||||
/// @brief Test invalid option parameter value.
|
||||
///
|
||||
/// This test function constructs the simple configuration
|
||||
/// string and injects invalid option configuration into it.
|
||||
/// It expects that parser will fail with provided option code.
|
||||
///
|
||||
/// @param param_value string holding invalid option parameter value
|
||||
/// to be injected into configuration string.
|
||||
/// @param parameter name of the parameter to be configured with
|
||||
/// param_value (can be any of "name", "code", "data")
|
||||
void testInvalidOptionParam(const std::string& param_value,
|
||||
const std::string& parameter) {
|
||||
ConstElementPtr x;
|
||||
std::string config = createConfigWithOption(param_value, parameter);
|
||||
ElementPtr json = Element::fromJSON(config);
|
||||
EXPECT_NO_THROW(x = configureDhcp4Server(*srv_, json));
|
||||
ASSERT_TRUE(x);
|
||||
comment_ = parseAnswer(rcode_, x);
|
||||
ASSERT_EQ(1, rcode_);
|
||||
}
|
||||
|
||||
/// @brief Test option against given code and data.
///
/// @param option_desc option descriptor that carries the option to
/// be tested.
/// @param expected_code expected code of the option.
/// @param expected_data expected data in the option.
/// @param expected_data_len length of the reference data.
/// @param extra_data if true, extra data is allowed in an option
/// after the tested data.
void testOption(const Subnet::OptionDescriptor& option_desc,
uint16_t expected_code, const uint8_t* expected_data,
size_t expected_data_len,
bool extra_data = false) {
// Check if option descriptor contains valid option pointer.
ASSERT_TRUE(option_desc.option);
// Verify option type.
EXPECT_EQ(expected_code, option_desc.option->getType());
// We may have many different option types being created. Some of them
// have dedicated classes derived from Option class. In such a case, if
// we want to verify the option contents against expected_data we have
// to prepare a raw buffer with the contents of the option. The easiest
// way is to call pack(), which will prepare on-wire data.
util::OutputBuffer buf(option_desc.option->getData().size());
option_desc.option->pack(buf);
if (extra_data) {
// The length of the buffer must be at least equal to the size of the
// reference data, but it can sometimes be greater than that. This is
// because some options carry suboptions that increase the overall
// length.
ASSERT_GE(buf.getLength() - option_desc.option->getHeaderLen(),
expected_data_len);
} else {
ASSERT_EQ(buf.getLength() - option_desc.option->getHeaderLen(),
expected_data_len);
}
// Verify that the data is correct. Do not verify suboptions and a header.
const uint8_t* data = static_cast<const uint8_t*>(buf.getData());
EXPECT_EQ(0, memcmp(expected_data, data + option_desc.option->getHeaderLen(),
expected_data_len));
}

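A typical check, as used by the tests below, compares a descriptor fetched from the subnet against a reference byte array; a minimal usage sketch (names as in the tests that follow):

const uint8_t foo_expected[] = {
0xAB, 0xCD, 0xEF, 0x01, 0x05
};
// Verify that the first matching option carries code 56 and this payload.
testOption(*range.first, 56, foo_expected, sizeof(foo_expected));
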
/// @brief Reset configuration database.
///
/// This function resets the configuration database by
/// removing all subnets and option-data. The reset must
/// be performed after each test to make sure that the
/// contents of the database do not affect the results of
/// subsequent tests.
void resetConfiguration() {
ConstElementPtr status;

string config = "{ \"interface\": [ \"all\" ],"
"\"rebind-timer\": 2000, "
"\"renew-timer\": 1000, "
"\"valid-lifetime\": 4000, "
"\"subnet4\": [ ], "
"\"option-data\": [ ] }";

try {
ElementPtr json = Element::fromJSON(config);
status = configureDhcp4Server(*srv_, json);
} catch (const std::exception& ex) {
FAIL() << "Fatal error: unable to reset configuration database"
<< " after the test. The following configuration was used"
<< " to reset database: " << std::endl
<< config << std::endl
<< " and the following error message was returned:"
<< ex.what() << std::endl;
}

// status object must not be NULL
if (!status) {
FAIL() << "Fatal error: unable to reset configuration database"
<< " after the test. Configuration function returned"
<< " NULL pointer" << std::endl;
}

comment_ = parseAnswer(rcode_, status);
// returned value should be 0 (configuration success)
if (rcode_ != 0) {
FAIL() << "Fatal error: unable to reset configuration database"
<< " after the test. Configuration function returned"
<< " error code " << rcode_ << std::endl;
}
}

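Each test is expected to trigger this cleanup when it finishes, for example from the fixture's destructor or TearDown(); a minimal sketch (hypothetical placement, the fixture wiring is not shown in this hunk):

virtual void TearDown() {
// Make sure the next test starts from an empty configuration.
resetConfiguration();
}
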
Dhcpv4Srv* srv_;

int rcode_;
@@ -248,6 +428,302 @@ TEST_F(Dhcp4ParserTest, poolPrefixLen) {
EXPECT_EQ(4000, subnet->getValid());
}

// Goal of this test is to verify that global option
|
||||
// data is configured for the subnet if the subnet
|
||||
// configuration does not include options configuration.
|
||||
TEST_F(Dhcp4ParserTest, optionDataDefaults) {
|
||||
ConstElementPtr x;
|
||||
string config = "{ \"interface\": [ \"all\" ],"
|
||||
"\"rebind-timer\": 2000,"
|
||||
"\"renew-timer\": 1000,"
|
||||
"\"option-data\": [ {"
|
||||
" \"name\": \"option_foo\","
|
||||
" \"code\": 56,"
|
||||
" \"data\": \"AB CDEF0105\""
|
||||
" },"
|
||||
" {"
|
||||
" \"name\": \"option_foo2\","
|
||||
" \"code\": 23,"
|
||||
" \"data\": \"01\""
|
||||
" } ],"
|
||||
"\"subnet4\": [ { "
|
||||
" \"pool\": [ \"192.0.2.1 - 192.0.2.100\" ],"
|
||||
" \"subnet\": \"192.0.2.0/24\""
|
||||
" } ],"
|
||||
"\"valid-lifetime\": 4000 }";
|
||||
|
||||
ElementPtr json = Element::fromJSON(config);
|
||||
|
||||
EXPECT_NO_THROW(x = configureDhcp4Server(*srv_, json));
|
||||
ASSERT_TRUE(x);
|
||||
comment_ = parseAnswer(rcode_, x);
|
||||
ASSERT_EQ(0, rcode_);
|
||||
|
||||
Subnet4Ptr subnet = CfgMgr::instance().getSubnet4(IOAddress("192.0.2.200"));
|
||||
ASSERT_TRUE(subnet);
|
||||
const Subnet::OptionContainer& options = subnet->getOptions();
|
||||
ASSERT_EQ(2, options.size());
|
||||
|
||||
// Get the search index. Index #1 is to search using option code.
|
||||
const Subnet::OptionContainerTypeIndex& idx = options.get<1>();
|
||||
|
||||
// Get the options for specified index. Expecting one option to be
|
||||
// returned but in theory we may have multiple options with the same
|
||||
// code so we get the range.
|
||||
std::pair<Subnet::OptionContainerTypeIndex::const_iterator,
|
||||
Subnet::OptionContainerTypeIndex::const_iterator> range =
|
||||
idx.equal_range(56);
|
||||
// Expect single option with the code equal to 56.
|
||||
ASSERT_EQ(1, std::distance(range.first, range.second));
|
||||
const uint8_t foo_expected[] = {
|
||||
0xAB, 0xCD, 0xEF, 0x01, 0x05
|
||||
};
|
||||
// Check if option is valid in terms of code and carried data.
|
||||
testOption(*range.first, 56, foo_expected, sizeof(foo_expected));
|
||||
|
||||
range = idx.equal_range(23);
|
||||
ASSERT_EQ(1, std::distance(range.first, range.second));
|
||||
// Do another round of testing with second option.
|
||||
const uint8_t foo2_expected[] = {
|
||||
0x01
|
||||
};
|
||||
testOption(*range.first, 23, foo2_expected, sizeof(foo2_expected));
|
||||
}
|
||||
|
||||
// Goal of this test is to verify options configuration
|
||||
// for a single subnet. In particular this test checks
|
||||
// that local options configuration overrides global
|
||||
// option setting.
|
||||
TEST_F(Dhcp4ParserTest, optionDataInSingleSubnet) {
|
||||
ConstElementPtr x;
|
||||
string config = "{ \"interface\": [ \"all\" ],"
|
||||
"\"rebind-timer\": 2000, "
|
||||
"\"renew-timer\": 1000, "
|
||||
"\"option-data\": [ {"
|
||||
" \"name\": \"option_foo\","
|
||||
" \"code\": 56,"
|
||||
" \"data\": \"AB\""
|
||||
" } ],"
|
||||
"\"subnet4\": [ { "
|
||||
" \"pool\": [ \"192.0.2.1 - 192.0.2.100\" ],"
|
||||
" \"subnet\": \"192.0.2.0/24\", "
|
||||
" \"option-data\": [ {"
|
||||
" \"name\": \"option_foo\","
|
||||
" \"code\": 56,"
|
||||
" \"data\": \"AB CDEF0105\""
|
||||
" },"
|
||||
" {"
|
||||
" \"name\": \"option_foo2\","
|
||||
" \"code\": 23,"
|
||||
" \"data\": \"01\""
|
||||
" } ]"
|
||||
" } ],"
|
||||
"\"valid-lifetime\": 4000 }";
|
||||
|
||||
ElementPtr json = Element::fromJSON(config);
|
||||
|
||||
EXPECT_NO_THROW(x = configureDhcp4Server(*srv_, json));
|
||||
ASSERT_TRUE(x);
|
||||
comment_ = parseAnswer(rcode_, x);
|
||||
ASSERT_EQ(0, rcode_);
|
||||
|
||||
Subnet4Ptr subnet = CfgMgr::instance().getSubnet4(IOAddress("192.0.2.24"));
|
||||
ASSERT_TRUE(subnet);
|
||||
const Subnet::OptionContainer& options = subnet->getOptions();
|
||||
ASSERT_EQ(2, options.size());
|
||||
|
||||
// Get the search index. Index #1 is to search using option code.
|
||||
const Subnet::OptionContainerTypeIndex& idx = options.get<1>();
|
||||
|
||||
// Get the options for specified index. Expecting one option to be
|
||||
// returned but in theory we may have multiple options with the same
|
||||
// code so we get the range.
|
||||
std::pair<Subnet::OptionContainerTypeIndex::const_iterator,
|
||||
Subnet::OptionContainerTypeIndex::const_iterator> range =
|
||||
idx.equal_range(56);
|
||||
// Expect a single option with the code equal to 56.
|
||||
ASSERT_EQ(1, std::distance(range.first, range.second));
|
||||
const uint8_t foo_expected[] = {
|
||||
0xAB, 0xCD, 0xEF, 0x01, 0x05
|
||||
};
|
||||
// Check if option is valid in terms of code and carried data.
|
||||
testOption(*range.first, 56, foo_expected, sizeof(foo_expected));
|
||||
|
||||
range = idx.equal_range(23);
|
||||
ASSERT_EQ(1, std::distance(range.first, range.second));
|
||||
// Do another round of testing with second option.
|
||||
const uint8_t foo2_expected[] = {
|
||||
0x01
|
||||
};
|
||||
testOption(*range.first, 23, foo2_expected, sizeof(foo2_expected));
|
||||
}
|
||||
|
||||
// Goal of this test is to verify options configuration
|
||||
// for multiple subnets.
|
||||
TEST_F(Dhcp4ParserTest, optionDataInMultipleSubnets) {
|
||||
ConstElementPtr x;
|
||||
string config = "{ \"interface\": [ \"all\" ],"
|
||||
"\"rebind-timer\": 2000, "
|
||||
"\"renew-timer\": 1000, "
|
||||
"\"subnet4\": [ { "
|
||||
" \"pool\": [ \"192.0.2.1 - 192.0.2.100\" ],"
|
||||
" \"subnet\": \"192.0.2.0/24\", "
|
||||
" \"option-data\": [ {"
|
||||
" \"name\": \"option_foo\","
|
||||
" \"code\": 56,"
|
||||
" \"data\": \"0102030405060708090A\""
|
||||
" } ]"
|
||||
" },"
|
||||
" {"
|
||||
" \"pool\": [ \"192.0.3.101 - 192.0.3.150\" ],"
|
||||
" \"subnet\": \"192.0.3.0/24\", "
|
||||
" \"option-data\": [ {"
|
||||
" \"name\": \"option_foo2\","
|
||||
" \"code\": 23,"
|
||||
" \"data\": \"FF\""
|
||||
" } ]"
|
||||
" } ],"
|
||||
"\"valid-lifetime\": 4000 }";
|
||||
|
||||
ElementPtr json = Element::fromJSON(config);
|
||||
|
||||
EXPECT_NO_THROW(x = configureDhcp4Server(*srv_, json));
|
||||
ASSERT_TRUE(x);
|
||||
comment_ = parseAnswer(rcode_, x);
|
||||
ASSERT_EQ(0, rcode_);
|
||||
|
||||
Subnet4Ptr subnet1 = CfgMgr::instance().getSubnet4(IOAddress("192.0.2.100"));
|
||||
ASSERT_TRUE(subnet1);
|
||||
const Subnet::OptionContainer& options1 = subnet1->getOptions();
|
||||
ASSERT_EQ(1, options1.size());
|
||||
|
||||
// Get the search index. Index #1 is to search using option code.
|
||||
const Subnet::OptionContainerTypeIndex& idx1 = options1.get<1>();
|
||||
|
||||
// Get the options for specified index. Expecting one option to be
|
||||
// returned but in theory we may have multiple options with the same
|
||||
// code so we get the range.
|
||||
std::pair<Subnet::OptionContainerTypeIndex::const_iterator,
|
||||
Subnet::OptionContainerTypeIndex::const_iterator> range1 =
|
||||
idx1.equal_range(56);
|
||||
// Expect single option with the code equal to 56.
|
||||
ASSERT_EQ(1, std::distance(range1.first, range1.second));
|
||||
const uint8_t foo_expected[] = {
|
||||
0x01, 0x02, 0x03, 0x04, 0x05,
|
||||
0x06, 0x07, 0x08, 0x09, 0x0A
|
||||
};
|
||||
// Check if option is valid in terms of code and carried data.
|
||||
testOption(*range1.first, 56, foo_expected, sizeof(foo_expected));
|
||||
|
||||
// Test another subnet in the same way.
|
||||
Subnet4Ptr subnet2 = CfgMgr::instance().getSubnet4(IOAddress("192.0.3.102"));
|
||||
ASSERT_TRUE(subnet2);
|
||||
const Subnet::OptionContainer& options2 = subnet2->getOptions();
|
||||
ASSERT_EQ(1, options2.size());
|
||||
|
||||
const Subnet::OptionContainerTypeIndex& idx2 = options2.get<1>();
|
||||
std::pair<Subnet::OptionContainerTypeIndex::const_iterator,
|
||||
Subnet::OptionContainerTypeIndex::const_iterator> range2 =
|
||||
idx2.equal_range(23);
|
||||
ASSERT_EQ(1, std::distance(range2.first, range2.second));
|
||||
|
||||
const uint8_t foo2_expected[] = { 0xFF };
|
||||
testOption(*range2.first, 23, foo2_expected, sizeof(foo2_expected));
|
||||
}
|
||||
|
||||
// Verify that empty option name is rejected in the configuration.
TEST_F(Dhcp4ParserTest, optionNameEmpty) {
// Empty option names not allowed.
testInvalidOptionParam("", "name");
}

// Verify that empty option name with spaces is rejected
// in the configuration.
TEST_F(Dhcp4ParserTest, optionNameSpaces) {
// Spaces in option names not allowed.
testInvalidOptionParam("option foo", "name");
}

// Verify that negative option code is rejected in the configuration.
TEST_F(Dhcp4ParserTest, optionCodeNegative) {
// Check negative option code -4. This should fail too.
testInvalidOptionParam("-4", "code");
}

// Verify that out of bounds option code is rejected in the configuration.
TEST_F(Dhcp4ParserTest, optionCodeNonUint8) {
// The valid option codes are uint8_t values, so passing a value
// that exceeds the uint8_t maximum should result
// in failure.
testInvalidOptionParam("257", "code");
}

// Verify that zero option code is rejected in the configuration.
TEST_F(Dhcp4ParserTest, optionCodeZero) {
// Option code 0 is reserved and should not be accepted
// by configuration parser.
testInvalidOptionParam("0", "code");
}

// Verify that option data which contains non-hexadecimal characters
// is rejected by the configuration.
TEST_F(Dhcp4ParserTest, optionDataInvalidChar) {
// Option data containing a non-hexadecimal character should not
// be accepted by the configuration parser.
testInvalidOptionParam("01020R", "data");
}

// Verify that option data containing the '0x' prefix is rejected
// by the configuration.
TEST_F(Dhcp4ParserTest, optionDataUnexpectedPrefix) {
// Option data with the '0x' prefix should not be accepted
// by the configuration parser.
testInvalidOptionParam("0x0102", "data");
}

// Verify that option data consisting of an odd number of
// hexadecimal digits is rejected in the configuration.
TEST_F(Dhcp4ParserTest, optionDataOddLength) {
// Option data with an odd number of hexadecimal digits should not
// be accepted by the configuration parser.
testInvalidOptionParam("123", "data");
}

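The rejections above follow from how the hexadecimal "data" string has to be turned into option bytes: two hexadecimal digits per byte, with no prefix and no stray characters, while spaces between digit pairs (as in "AB CDEF0105") are tolerated. A minimal sketch of such a conversion, illustrative only and not the parser's actual code:

#include <cctype>
#include <cstdlib>
#include <stdexcept>
#include <stdint.h>
#include <string>
#include <vector>

std::vector<uint8_t> decodeHexData(const std::string& text) {
// Drop the whitespace that the configuration accepts between digit pairs.
std::string digits;
for (size_t i = 0; i < text.size(); ++i) {
if (!isspace(static_cast<unsigned char>(text[i]))) {
digits += text[i];
}
}
if (digits.size() % 2 != 0) {
throw std::invalid_argument("odd number of hexadecimal digits");
}
std::vector<uint8_t> data;
for (size_t i = 0; i < digits.size(); i += 2) {
if (!isxdigit(static_cast<unsigned char>(digits[i])) ||
!isxdigit(static_cast<unsigned char>(digits[i + 1]))) {
throw std::invalid_argument("non-hexadecimal character in option data");
}
data.push_back(static_cast<uint8_t>(
std::strtol(digits.substr(i, 2).c_str(), NULL, 16)));
}
return (data);
}
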
// Verify that either lower or upper case characters are allowed
|
||||
// to specify the option data.
|
||||
TEST_F(Dhcp4ParserTest, optionDataLowerCase) {
|
||||
ConstElementPtr x;
|
||||
std::string config = createConfigWithOption("0a0b0C0D", "data");
|
||||
ElementPtr json = Element::fromJSON(config);
|
||||
|
||||
EXPECT_NO_THROW(x = configureDhcp4Server(*srv_, json));
|
||||
ASSERT_TRUE(x);
|
||||
comment_ = parseAnswer(rcode_, x);
|
||||
ASSERT_EQ(0, rcode_);
|
||||
|
||||
Subnet4Ptr subnet = CfgMgr::instance().getSubnet4(IOAddress("192.0.2.5"));
|
||||
ASSERT_TRUE(subnet);
|
||||
const Subnet::OptionContainer& options = subnet->getOptions();
|
||||
ASSERT_EQ(1, options.size());
|
||||
|
||||
// Get the search index. Index #1 is to search using option code.
|
||||
const Subnet::OptionContainerTypeIndex& idx = options.get<1>();
|
||||
|
||||
// Get the options for specified index. Expecting one option to be
|
||||
// returned but in theory we may have multiple options with the same
|
||||
// code so we get the range.
|
||||
std::pair<Subnet::OptionContainerTypeIndex::const_iterator,
|
||||
Subnet::OptionContainerTypeIndex::const_iterator> range =
|
||||
idx.equal_range(56);
|
||||
// Expect a single option with the code equal to 56.
|
||||
ASSERT_EQ(1, std::distance(range.first, range.second));
|
||||
const uint8_t foo_expected[] = {
|
||||
0x0A, 0x0B, 0x0C, 0x0D
|
||||
};
|
||||
// Check if option is valid in terms of code and carried data.
|
||||
testOption(*range.first, 56, foo_expected, sizeof(foo_expected));
|
||||
}
|
||||
|
||||
/// This test checks if Uint32Parser can really parse the whole range
|
||||
/// and properly err of out of range values. As we can't call Uint32Parser
|
||||
/// directly, we are exploiting the fact that it is used to parse global
|
||||
|
@@ -59,14 +59,14 @@ if USE_CLANGPP
|
||||
b10_dhcp6_CXXFLAGS = -Wno-unused-parameter
|
||||
endif
|
||||
|
||||
b10_dhcp6_LDADD = $(top_builddir)/src/lib/exceptions/libb10-exceptions.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/util/libb10-util.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/asiolink/libb10-asiolink.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/log/libb10-log.la
|
||||
b10_dhcp6_LDADD = $(top_builddir)/src/lib/asiolink/libb10-asiolink.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/cc/libb10-cc.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/config/libb10-cfgclient.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/dhcp/libb10-dhcp++.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/dhcpsrv/libb10-dhcpsrv.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/config/libb10-cfgclient.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/cc/libb10-cc.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/exceptions/libb10-exceptions.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/log/libb10-log.la
|
||||
b10_dhcp6_LDADD += $(top_builddir)/src/lib/util/libb10-util.la
|
||||
|
||||
b10_dhcp6dir = $(pkgdatadir)
|
||||
b10_dhcp6_DATA = dhcp6.spec
|
||||
|
@@ -496,12 +496,12 @@ private:
|
||||
///
|
||||
/// This parser parses configuration entries that specify value of
|
||||
/// a single option. These entries include option name, option code
|
||||
/// and data carried by the option. If parsing is successful than an
|
||||
/// and data carried by the option. If parsing is successful then an
|
||||
/// instance of an option is created and added to the storage provided
|
||||
/// by the calling class.
|
||||
///
|
||||
/// @todo This class parses and validates the option name. However it is
|
||||
/// not used anywhere util support for option spaces is implemented
|
||||
/// not used anywhere until support for option spaces is implemented
|
||||
/// (see tickets #2319, #2314). When option spaces are implemented
|
||||
/// there will be a way to reference the particular option using
|
||||
/// its type (code) or option name.
|
||||
@@ -857,26 +857,21 @@ public:
|
||||
// a setStorage and build methods are invoked.
|
||||
|
||||
// Try uint32 type parser.
|
||||
if (buildParser<Uint32Parser, Uint32Storage >(parser, uint32_values_,
|
||||
param.second)) {
|
||||
// Storage set, build invoked on the parser, proceed with
|
||||
// next configuration element.
|
||||
continue;
|
||||
}
|
||||
// Try string type parser.
|
||||
if (buildParser<StringParser, StringStorage >(parser, string_values_,
|
||||
param.second)) {
|
||||
continue;
|
||||
}
|
||||
// Try pools parser.
|
||||
if (buildParser<PoolParser, PoolStorage >(parser, pools_,
|
||||
param.second)) {
|
||||
continue;
|
||||
}
|
||||
// Try option data parser.
|
||||
if (buildParser<OptionDataListParser, OptionStorage >(parser, options_,
|
||||
param.second)) {
|
||||
continue;
|
||||
if (!buildParser<Uint32Parser, Uint32Storage >(parser, uint32_values_,
|
||||
param.second) &&
|
||||
// Try string type parser.
|
||||
!buildParser<StringParser, StringStorage >(parser, string_values_,
|
||||
param.second) &&
|
||||
// Try pool parser.
|
||||
!buildParser<PoolParser, PoolStorage >(parser, pools_,
|
||||
param.second) &&
|
||||
// Try option data parser.
|
||||
!buildParser<OptionDataListParser, OptionStorage >(parser, options_,
|
||||
param.second)) {
|
||||
// Appropriate parsers are created in the createSubnet6ConfigParser
|
||||
// and they should be limited to those that we check here for. Thus,
|
||||
// if we fail to find a matching parser here it is a programming error.
|
||||
isc_throw(Dhcp6ConfigError, "failed to find suitable parser");
|
||||
}
|
||||
}
|
||||
// Ok, we now have subnet parsed
|
||||
|
@@ -47,9 +47,9 @@ This is an informational message reporting that the configuration has
been extended to include the specified subnet.

% DHCP6_CONFIG_OPTION_DUPLICATE multiple options with the code: %1 added to the subnet: %2
This warning message is issued on attempt to configure multiple options with the
This warning message is issued on an attempt to configure multiple options with the
same option code for the particular subnet. Adding multiple options is uncommon
for DHCPv6, yet it is not prohibited.
for DHCPv6, but it is not prohibited.

% DHCP6_CONFIG_START DHCPv6 server is processing the following configuration: %1
This is a debug message that is issued every time the server receives a
@@ -65,11 +65,18 @@ This informational message is printed every time DHCPv6 is started.
It indicates what database backend type is being used to store lease and
other information.

% DHCP6_LEASE_WITHOUT_DUID lease for address %1 does not have a DUID
This error message indicates a database consistency failure. The lease
database has an entry indicating that the given address is in use,
but the lease does not contain any client identification. This is most
likely due to a software error: please raise a bug report. As a temporary
workaround, manually remove the lease entry from the database.

% DHCP6_LEASE_ADVERT lease %1 advertised (client duid=%2, iaid=%3)
This debug message indicates that the server successfully advertised
a lease. It is up to the client to choose one server out of othe advertised
and continue allocation with that server. This is a normal behavior and
indicates successful operation.
a lease. It is up to the client to choose one server out of the
advertised servers and continue allocation with that server. This
is a normal behavior and indicates successful operation.

% DHCP6_LEASE_ADVERT_FAIL failed to advertise a lease for client duid=%1, iaid=%2
This message indicates that the server failed to advertise (in response to
@@ -79,19 +86,43 @@ such failure. Each specific failure is logged in a separate log entry.
% DHCP6_LEASE_ALLOC lease %1 has been allocated (client duid=%2, iaid=%3)
This debug message indicates that the server successfully granted (in
response to client's REQUEST message) a lease. This is a normal behavior
and incicates successful operation.
and indicates successful operation.

% DHCP6_LEASE_ALLOC_FAIL failed to grant a lease for client duid=%1, iaid=%2
This message indicates that the server failed to grant (in response to
received REQUEST) a lease for a given client. There may be many reasons for
such failure. Each specific failure is logged in a separate log entry.

% DHCP6_REQUIRED_OPTIONS_CHECK_FAIL %1 message received from %2 failed the following check: %3
This message indicates that received DHCPv6 packet is invalid. This may be due
to a number of reasons, e.g. the mandatory client-id option is missing,
the server-id forbidden in that particular type of message is present,
there is more than one instance of client-id or server-id present,
etc. The exact reason for rejecting the packet is included in the message.
% DHCP6_RELEASE address %1 belonging to client duid=%2, iaid=%3 was released properly.
This debug message indicates that an address was released properly. It
is a normal operation during client shutdown.

% DHCP6_RELEASE_FAIL failed to remove lease for address %1 for duid=%2, iaid=%3
This error message indicates that the software failed to remove a
lease from the lease database. It is probably due to an error during a
database operation: resolution will most likely require administrator
intervention (e.g. check if the DHCP process has sufficient privileges to
update the database). It may also be triggered if a lease was manually
removed from the database during RELEASE message processing.

% DHCP6_RELEASE_FAIL_WRONG_DUID client (duid=%1) tried to release address %2, but it belongs to client (duid=%3)
This warning message indicates that a client tried to release an address
that belongs to a different client. This should not happen in normal
circumstances and may indicate a misconfiguration of the client. However,
since the client releasing the address will stop using it anyway, there
is a good chance that the situation will correct itself.

% DHCP6_RELEASE_FAIL_WRONG_IAID client (duid=%1) tried to release address %2, but it used wrong IAID (expected %3, but got %4)
This warning message indicates that a client tried to release an address
that does belong to it, but the address was expected to be in a different
IA (identity association) container. This probably means that the client's
support for multiple addresses is flawed.

% DHCP6_RELEASE_MISSING_CLIENTID client (address=%1) sent RELEASE message without mandatory client-id
This warning message indicates that a client sent a RELEASE message without
the mandatory client-id option. This is most likely caused by a buggy client
(or a relay that malformed the forwarded message). This request will not be
processed and a response with an error status code will be sent back.

% DHCP6_NOT_RUNNING IPv6 DHCP server is not running
A warning message is issued when an attempt is made to shut down the
@@ -128,7 +159,7 @@ a received OFFER packet as UNKNOWN).

% DHCP6_PACKET_RECEIVE_FAIL error on attempt to receive packet: %1
The IPv6 DHCP server tried to receive a packet but an error
occured during this attempt. The reason for the error is included in
occurred during this attempt. The reason for the error is included in
the message.

% DHCP6_PACKET_SEND_FAIL failed to send DHCPv6 packet: %1
@@ -149,6 +180,13 @@ as a hint for possible requested address.
% DHCP6_QUERY_DATA received packet length %1, data length %2, data is %3
A debug message listing the data received from the client or relay.

% DHCP6_REQUIRED_OPTIONS_CHECK_FAIL %1 message received from %2 failed the following check: %3
This message indicates that received DHCPv6 packet is invalid. This may be due
to a number of reasons, e.g. the mandatory client-id option is missing,
the server-id forbidden in that particular type of message is present,
there is more than one instance of client-id or server-id present,
etc. The exact reason for rejecting the packet is included in the message.

% DHCP6_RESPONSE_DATA responding with packet type %1 data is %2
A debug message listing the data returned to the client.

@@ -216,3 +254,8 @@ recently and does not recognize its well-behaving clients. This is more
probable if you see many such messages. Clients will recover from this,
but they will most likely get different IP addresses and experience
a brief service interruption.

% DHCP6_UNKNOWN_RELEASE received RELEASE from unknown client (duid=%1, iaid=%2)
This warning message is printed when a client attempts to release a lease,
but no such lease is known by the server. See DHCP6_UNKNOWN_RENEW for
possible reasons for such behavior.

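Each of the identifiers above is emitted through the dhcp6 logger with its arguments supplied via arg(), following the pattern used in dhcp6_srv.cc later in this commit, for example:

LOG_WARN(dhcp6_logger, DHCP6_RELEASE_FAIL_WRONG_IAID)
.arg(duid->toText())
.arg(release_addr->getAddress().toText())
.arg(lease->iaid_)
.arg(ia->getIAID());
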
@@ -23,8 +23,8 @@
|
||||
#include <dhcp/option6_ia.h>
|
||||
#include <dhcp/option6_iaaddr.h>
|
||||
#include <dhcp/option6_iaaddr.h>
|
||||
#include <dhcp/option6_int_array.h>
|
||||
#include <dhcp/option_custom.h>
|
||||
#include <dhcp/option_int_array.h>
|
||||
#include <dhcp/pkt6.h>
|
||||
#include <dhcp6/dhcp6_log.h>
|
||||
#include <dhcp6/dhcp6_srv.h>
|
||||
@@ -331,8 +331,8 @@ void Dhcpv6Srv::appendRequestedOptions(const Pkt6Ptr& question, Pkt6Ptr& answer)
|
||||
|
||||
// Client requests some options using ORO option. Try to
|
||||
// get this option from client's message.
|
||||
boost::shared_ptr<Option6IntArray<uint16_t> > option_oro =
|
||||
boost::dynamic_pointer_cast<Option6IntArray<uint16_t> >(question->getOption(D6O_ORO));
|
||||
boost::shared_ptr<OptionIntArray<uint16_t> > option_oro =
|
||||
boost::dynamic_pointer_cast<OptionIntArray<uint16_t> >(question->getOption(D6O_ORO));
|
||||
// Option ORO not found. Don't do anything then.
|
||||
if (!option_oro) {
|
||||
return;
|
||||
@@ -436,6 +436,8 @@ void Dhcpv6Srv::assignLeases(const Pkt6Ptr& question, Pkt6Ptr& answer) {
|
||||
|
||||
// We need to allocate addresses for all IA_NA options in the client's
|
||||
// question (i.e. SOLICIT or REQUEST) message.
|
||||
// @todo add support for IA_TA
|
||||
// @todo add support for IA_PD
|
||||
|
||||
// We need to select a subnet the client is connected in.
|
||||
Subnet6Ptr subnet = selectSubnet(question);
|
||||
@@ -604,7 +606,7 @@ OptionPtr Dhcpv6Srv::renewIA_NA(const Subnet6Ptr& subnet, const DuidPtr& duid,
|
||||
boost::shared_ptr<Option6IA> ia_rsp(new Option6IA(D6O_IA_NA, ia->getIAID()));
|
||||
|
||||
// Insert status code NoAddrsAvail.
|
||||
ia_rsp->addOption(createStatusCode(STATUS_NoAddrsAvail,
|
||||
ia_rsp->addOption(createStatusCode(STATUS_NoBinding,
|
||||
"Sorry, no known leases for this duid/iaid."));
|
||||
|
||||
LOG_DEBUG(dhcp6_logger, DBG_DHCP6_DETAIL, DHCP6_UNKNOWN_RENEW)
|
||||
@@ -640,6 +642,8 @@ void Dhcpv6Srv::renewLeases(const Pkt6Ptr& renew, Pkt6Ptr& reply) {
|
||||
|
||||
// We need to renew addresses for all IA_NA options in the client's
|
||||
// RENEW message.
|
||||
// @todo add support for IA_TA
|
||||
// @todo add support for IA_PD
|
||||
|
||||
// We need to select a subnet the client is connected in.
|
||||
Subnet6Ptr subnet = selectSubnet(renew);
|
||||
@@ -688,11 +692,176 @@ void Dhcpv6Srv::renewLeases(const Pkt6Ptr& renew, Pkt6Ptr& reply) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
||||
}
|
||||
|
||||
void Dhcpv6Srv::releaseLeases(const Pkt6Ptr& release, Pkt6Ptr& reply) {
|
||||
|
||||
// We need to release addresses for all IA_NA options in the client's
|
||||
// RELEASE message.
|
||||
// @todo Add support for IA_TA
|
||||
// @todo Add support for IA_PD
|
||||
// @todo Consider supporting more than one address in a single IA_NA.
|
||||
// That was envisaged by RFC3315, but it never happened. The only
|
||||
// software that supports that is Dibbler, but its author seriously doubts
|
||||
// if anyone is really using it. Clients that want more than one address
|
||||
// just include more instances of IA_NA options.
|
||||
|
||||
// Let's find client's DUID. Client is supposed to include its client-id
|
||||
// option almost all the time (the only exception is an anonymous inf-request,
|
||||
// but that is mostly a theoretical case). Our allocation engine needs DUID
|
||||
// and will refuse to allocate anything to anonymous clients.
|
||||
OptionPtr opt_duid = release->getOption(D6O_CLIENTID);
|
||||
if (!opt_duid) {
|
||||
// This should not happen. We have checked this before.
|
||||
// see sanityCheck() called from processRelease()
|
||||
LOG_WARN(dhcp6_logger, DHCP6_RELEASE_MISSING_CLIENTID)
|
||||
.arg(release->getRemoteAddr().toText());
|
||||
|
||||
reply->addOption(createStatusCode(STATUS_UnspecFail,
|
||||
"You did not include mandatory client-id"));
|
||||
return;
|
||||
}
|
||||
DuidPtr duid(new DUID(opt_duid->getData()));
|
||||
|
||||
int general_status = STATUS_Success;
|
||||
for (Option::OptionCollection::iterator opt = release->options_.begin();
|
||||
opt != release->options_.end(); ++opt) {
|
||||
switch (opt->second->getType()) {
|
||||
case D6O_IA_NA: {
|
||||
OptionPtr answer_opt = releaseIA_NA(duid, release, general_status,
|
||||
boost::dynamic_pointer_cast<Option6IA>(opt->second));
|
||||
if (answer_opt) {
|
||||
reply->addOption(answer_opt);
|
||||
}
|
||||
break;
|
||||
}
|
||||
// @todo: add support for IA_PD
|
||||
// @todo: add support for IA_TA
|
||||
default:
|
||||
// remaining options are stateless and thus ignored in this context
|
||||
;
|
||||
}
|
||||
}
|
||||
|
||||
// To be pedantic, we should also include status code in the top-level
|
||||
// scope, not just in each IA_NA. See RFC3315, section 18.2.6.
|
||||
// This behavior will likely go away in RFC3315bis.
|
||||
reply->addOption(createStatusCode(general_status,
|
||||
"Summary status for all processed IA_NAs"));
|
||||
}
|
||||
|
||||
OptionPtr Dhcpv6Srv::releaseIA_NA(const DuidPtr& duid, Pkt6Ptr question,
|
||||
int& general_status,
|
||||
boost::shared_ptr<Option6IA> ia) {
|
||||
// Release can be done in one of two ways:
|
||||
// Approach 1: extract address from client's IA_NA and see if it belongs
|
||||
// to this particular client.
|
||||
// Approach 2: find a subnet for this client, get a lease for
|
||||
// this subnet/duid/iaid and check if its content matches what the
|
||||
// client is asking us to release.
|
||||
//
|
||||
// This method implements approach 1.
|
||||
|
||||
// That's our response
|
||||
boost::shared_ptr<Option6IA> ia_rsp(new Option6IA(D6O_IA_NA, ia->getIAID()));
|
||||
|
||||
boost::shared_ptr<Option6IAAddr> release_addr = boost::dynamic_pointer_cast<Option6IAAddr>
|
||||
(ia->getOption(D6O_IAADDR));
|
||||
if (!release_addr) {
|
||||
ia_rsp->addOption(createStatusCode(STATUS_NoBinding,
|
||||
"You did not include address in your RELEASE"));
|
||||
general_status = STATUS_NoBinding;
|
||||
return (ia_rsp);
|
||||
}
|
||||
|
||||
Lease6Ptr lease = LeaseMgrFactory::instance().getLease6(release_addr->getAddress());
|
||||
|
||||
if (!lease) {
|
||||
// client releasing a lease that we don't know about.
|
||||
|
||||
// Insert status code NoBinding.
|
||||
ia_rsp->addOption(createStatusCode(STATUS_NoBinding,
|
||||
"Sorry, no known leases for this duid/iaid, can't release."));
|
||||
general_status = STATUS_NoBinding;
|
||||
|
||||
LOG_INFO(dhcp6_logger, DHCP6_UNKNOWN_RELEASE)
|
||||
.arg(duid->toText())
|
||||
.arg(ia->getIAID());
|
||||
|
||||
return (ia_rsp);
|
||||
}
|
||||
|
||||
if (!lease->duid_) {
|
||||
// Something is gravely wrong here. We do have a lease, but it does not
|
||||
// have mandatory DUID information attached. Someone was messing with our
|
||||
// database.
|
||||
|
||||
LOG_ERROR(dhcp6_logger, DHCP6_LEASE_WITHOUT_DUID)
|
||||
.arg(release_addr->getAddress().toText());
|
||||
|
||||
general_status = STATUS_UnspecFail;
|
||||
ia_rsp->addOption(createStatusCode(STATUS_UnspecFail,
|
||||
"Database consistency check failed when trying to RELEASE"));
|
||||
return (ia_rsp);
|
||||
}
|
||||
|
||||
if (*duid != *(lease->duid_)) {
|
||||
// Sorry, it's not your address. You can't release it.
|
||||
|
||||
LOG_INFO(dhcp6_logger, DHCP6_RELEASE_FAIL_WRONG_DUID)
|
||||
.arg(duid->toText())
|
||||
.arg(release_addr->getAddress().toText())
|
||||
.arg(lease->duid_->toText());
|
||||
|
||||
general_status = STATUS_NoBinding;
|
||||
ia_rsp->addOption(createStatusCode(STATUS_NoBinding,
|
||||
"This address does not belong to you, you can't release it"));
|
||||
return (ia_rsp);
|
||||
}
|
||||
|
||||
if (ia->getIAID() != lease->iaid_) {
|
||||
// This address belongs to this client, but to a different IA
|
||||
LOG_WARN(dhcp6_logger, DHCP6_RELEASE_FAIL_WRONG_IAID)
|
||||
.arg(duid->toText())
|
||||
.arg(release_addr->getAddress().toText())
|
||||
.arg(lease->iaid_)
|
||||
.arg(ia->getIAID());
|
||||
ia_rsp->addOption(createStatusCode(STATUS_NoBinding,
|
||||
"This is your address, but you used wrong IAID"));
|
||||
general_status = STATUS_NoBinding;
|
||||
return (ia_rsp);
|
||||
}
|
||||
|
||||
// It is not necessary to check if the address matches as we used
|
||||
// getLease6(addr) method that is supposed to return a proper lease.
|
||||
|
||||
// Ok, we've passed all checks. Let's release this address.
|
||||
|
||||
if (!LeaseMgrFactory::instance().deleteLease(lease->addr_)) {
|
||||
ia_rsp->addOption(createStatusCode(STATUS_UnspecFail,
|
||||
"Server failed to release a lease"));
|
||||
|
||||
LOG_ERROR(dhcp6_logger, DHCP6_RELEASE_FAIL)
|
||||
.arg(lease->addr_.toText())
|
||||
.arg(duid->toText())
|
||||
.arg(lease->iaid_);
|
||||
general_status = STATUS_UnspecFail;
|
||||
|
||||
return (ia_rsp);
|
||||
} else {
|
||||
LOG_DEBUG(dhcp6_logger, DBG_DHCP6_DETAIL, DHCP6_RELEASE)
|
||||
.arg(lease->addr_.toText())
|
||||
.arg(duid->toText())
|
||||
.arg(lease->iaid_);
|
||||
|
||||
ia_rsp->addOption(createStatusCode(STATUS_Success,
|
||||
"Lease released. Thank you, please come again."));
|
||||
|
||||
return (ia_rsp);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
Pkt6Ptr Dhcpv6Srv::processSolicit(const Pkt6Ptr& solicit) {
|
||||
|
||||
sanityCheck(solicit, MANDATORY, FORBIDDEN);
|
||||
@@ -751,8 +920,16 @@ Pkt6Ptr Dhcpv6Srv::processConfirm(const Pkt6Ptr& confirm) {
|
||||
}
|
||||
|
||||
Pkt6Ptr Dhcpv6Srv::processRelease(const Pkt6Ptr& release) {
|
||||
/// @todo: Implement this
|
||||
|
||||
sanityCheck(release, MANDATORY, MANDATORY);
|
||||
|
||||
Pkt6Ptr reply(new Pkt6(DHCPV6_REPLY, release->getTransid()));
|
||||
|
||||
copyDefaultOptions(release, reply);
|
||||
appendDefaultOptions(release, reply);
|
||||
|
||||
releaseLeases(release, reply);
|
||||
|
||||
return reply;
|
||||
}
|
||||
|
||||
|
@@ -212,17 +212,39 @@ protected:
|
||||
|
||||
/// @brief Renews specific IA_NA option
|
||||
///
|
||||
/// Generates response to IA_NA. This typically includes finding a lease that
|
||||
/// corresponds to the received address. If no such lease is found, an IA_NA
|
||||
/// response is generated with an appropriate status code.
|
||||
/// Generates response to IA_NA in Renew. This typically includes finding a
|
||||
/// lease that corresponds to the received address. If no such lease is
|
||||
/// found, an IA_NA response is generated with an appropriate status code.
|
||||
///
|
||||
/// @param subnet subnet the sender belongs to
|
||||
/// @param duid client's duid
|
||||
/// @param question client's message
|
||||
/// @param ia IA_NA option that is being renewed
|
||||
/// @return IA_NA option (server's response)
|
||||
OptionPtr renewIA_NA(const Subnet6Ptr& subnet, const DuidPtr& duid,
|
||||
Pkt6Ptr question, boost::shared_ptr<Option6IA> ia);
|
||||
|
||||
/// @brief Releases specific IA_NA option
|
||||
///
|
||||
/// Generates response to IA_NA in Release message. This covers finding and
|
||||
/// removal of a lease that corresponds to the received address. If no such
|
||||
/// lease is found, an IA_NA response is generated with an appropriate
|
||||
/// status code.
|
||||
///
|
||||
/// As RFC 3315 requires that a single status code be sent for the whole message,
|
||||
/// this method may update the passed general_status: it is set to SUCCESS when
|
||||
/// message processing begins, but may be updated to some error code if the
|
||||
/// release process fails.
|
||||
///
|
||||
/// @param duid client's duid
|
||||
/// @param question client's message
|
||||
/// @param general_status a global status (it may be updated in case of errors)
|
||||
/// @param ia IA_NA option that is being renewed
|
||||
/// @return IA_NA option (server's response)
|
||||
OptionPtr releaseIA_NA(const DuidPtr& duid, Pkt6Ptr question,
|
||||
int& general_status,
|
||||
boost::shared_ptr<Option6IA> ia);
|
||||
|
||||
/// @brief Copies required options from client message to server answer.
|
||||
///
|
||||
/// Copies options that must appear in any server response (ADVERTISE, REPLY)
|
||||
@@ -271,6 +293,17 @@ protected:
|
||||
/// @param reply server's response
|
||||
void renewLeases(const Pkt6Ptr& renew, Pkt6Ptr& reply);
|
||||
|
||||
/// @brief Attempts to release received addresses
|
||||
///
|
||||
/// It iterates through received IA_NA options and attempts to release
|
||||
/// received addresses. If no such leases are found, or the lease fails
|
||||
/// proper checks (e.g. belongs to someone else), a proper status
|
||||
/// code is added to reply message. Released addresses are not added
|
||||
/// to REPLY packet, just its IA_NA containers.
|
||||
/// @param release client's message asking to release
|
||||
/// @param reply server's response
|
||||
void releaseLeases(const Pkt6Ptr& release, Pkt6Ptr& reply);
|
||||
|
||||
/// @brief Sets server-identifier.
|
||||
///
|
||||
/// This method attempts to set server-identifier DUID. It loads it
|
||||
|
@@ -63,14 +63,13 @@ dhcp6_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES)
|
||||
dhcp6_unittests_LDFLAGS = $(AM_LDFLAGS) $(GTEST_LDFLAGS)
|
||||
dhcp6_unittests_LDADD = $(GTEST_LDADD)
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/asiolink/libb10-asiolink.la
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/util/libb10-util.la
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/cc/libb10-cc.la
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/config/libb10-cfgclient.la
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/dhcp/libb10-dhcp++.la
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/dhcpsrv/libb10-dhcpsrv.la
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/log/libb10-log.la
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/exceptions/libb10-exceptions.la
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/config/libb10-cfgclient.la
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/cc/libb10-cc.la
|
||||
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/log/libb10-log.la
|
||||
dhcp6_unittests_LDADD += $(top_builddir)/src/lib/util/libb10-util.la
|
||||
endif
|
||||
|
||||
noinst_PROGRAMS = $(TESTS)
|
||||
|
@@ -81,7 +81,9 @@ public:
|
||||
return (createConfigWithOption(params));
|
||||
}
|
||||
|
||||
std::string createConfigWithOption(const std::map<std::string, std::string>& params) {
|
||||
std::string createConfigWithOption(const std::map<std::string,
|
||||
std::string>& params)
|
||||
{
|
||||
std::ostringstream stream;
|
||||
stream << "{ \"interface\": [ \"all\" ],"
|
||||
"\"preferred-lifetime\": 3000,"
|
||||
@@ -97,6 +99,7 @@ public:
|
||||
if (!first) {
|
||||
stream << ", ";
|
||||
} else {
|
||||
// cppcheck-suppress unreadVariable
|
||||
first = false;
|
||||
}
|
||||
if (param.first == "name") {
|
||||
@@ -144,14 +147,14 @@ public:
|
||||
<< ex.what() << std::endl;
|
||||
}
|
||||
|
||||
|
||||
// returned value should be 0 (configuration success)
|
||||
// status object must not be NULL
|
||||
if (!status) {
|
||||
FAIL() << "Fatal error: unable to reset configuration database"
|
||||
<< " after the test. Configuration function returned"
|
||||
<< " NULL pointer" << std::endl;
|
||||
}
|
||||
comment_ = parseAnswer(rcode_, status);
|
||||
// returned value should be 0 (configuration success)
|
||||
if (rcode_ != 0) {
|
||||
FAIL() << "Fatal error: unable to reset configuration database"
|
||||
<< " after the test. Configuration function returned"
|
||||
@@ -215,9 +218,10 @@ public:
|
||||
ASSERT_EQ(buf.getLength() - option_desc.option->getHeaderLen(),
|
||||
expected_data_len);
|
||||
}
|
||||
// Verify that the data is correct. However do not verify suboptions.
|
||||
// Verify that the data is correct. Do not verify suboptions and a header.
|
||||
const uint8_t* data = static_cast<const uint8_t*>(buf.getData());
|
||||
EXPECT_TRUE(memcmp(expected_data, data, expected_data_len));
|
||||
EXPECT_EQ(0, memcmp(expected_data, data + option_desc.option->getHeaderLen(),
|
||||
expected_data_len));
|
||||
}
|
||||
|
||||
Dhcpv6Srv srv_;
|
||||
|
@@ -23,7 +23,7 @@
|
||||
#include <dhcp/option6_addrlst.h>
|
||||
#include <dhcp/option6_ia.h>
|
||||
#include <dhcp/option6_iaaddr.h>
|
||||
#include <dhcp/option6_int_array.h>
|
||||
#include <dhcp/option_int_array.h>
|
||||
#include <dhcp6/config_parser.h>
|
||||
#include <dhcp6/dhcp6_srv.h>
|
||||
#include <dhcpsrv/cfgmgr.h>
|
||||
@@ -59,6 +59,7 @@ public:
|
||||
using Dhcpv6Srv::processSolicit;
|
||||
using Dhcpv6Srv::processRequest;
|
||||
using Dhcpv6Srv::processRenew;
|
||||
using Dhcpv6Srv::processRelease;
|
||||
using Dhcpv6Srv::createStatusCode;
|
||||
using Dhcpv6Srv::selectSubnet;
|
||||
using Dhcpv6Srv::sanityCheck;
|
||||
@@ -143,11 +144,14 @@ public:
|
||||
}
|
||||
|
||||
// Checks that server rejected IA_NA, i.e. that it has no addresses and
|
||||
// that expected status code really appears there.
|
||||
// that expected status code really appears there. In some limited cases
|
||||
// (reply to RELEASE) it may be used to verify positive case, where
|
||||
// IA_NA response is expected to not include address.
|
||||
//
|
||||
// Status code indicates type of error encountered (in theory it can also
|
||||
// indicate success, but servers typically don't send success status
|
||||
// as this is the default result and it saves bandwidth)
|
||||
void checkRejectedIA_NA(const boost::shared_ptr<Option6IA>& ia,
|
||||
void checkIA_NAStatusCode(const boost::shared_ptr<Option6IA>& ia,
|
||||
uint16_t expected_status_code) {
|
||||
// Make sure there is no address assigned.
|
||||
EXPECT_FALSE(ia->getOption(D6O_IAADDR));
|
||||
@@ -158,6 +162,12 @@ public:
|
||||
|
||||
boost::shared_ptr<OptionCustom> status =
|
||||
boost::dynamic_pointer_cast<OptionCustom>(ia->getOption(D6O_STATUS_CODE));
|
||||
|
||||
// It is ok to not include status success as this is the default behavior
|
||||
if (expected_status_code == STATUS_Success && !status) {
|
||||
return;
|
||||
}
|
||||
|
||||
EXPECT_TRUE(status);
|
||||
|
||||
if (status) {
|
||||
@@ -169,6 +179,26 @@ public:
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
void checkMsgStatusCode(const Pkt6Ptr& msg, uint16_t expected_status) {
|
||||
boost::shared_ptr<OptionCustom> status =
|
||||
boost::dynamic_pointer_cast<OptionCustom>(msg->getOption(D6O_STATUS_CODE));
|
||||
|
||||
// It is ok to not include status success as this is the default behavior
|
||||
if (expected_status == STATUS_Success && !status) {
|
||||
return;
|
||||
}
|
||||
|
||||
EXPECT_TRUE(status);
|
||||
if (status) {
|
||||
// We don't have dedicated class for status code, so let's just interpret
|
||||
// first 2 bytes as status. Remainder of the status code option content is
|
||||
// just a text explanation what went wrong.
|
||||
EXPECT_EQ(static_cast<uint16_t>(expected_status),
|
||||
status->readInteger<uint16_t>(0));
|
||||
}
|
||||
}
|
||||
|
||||
// Check that generated IAADDR option contains expected address.
|
||||
void checkIAAddr(const boost::shared_ptr<Option6IAAddr>& addr,
|
||||
const IOAddress& expected_addr,
|
||||
@@ -353,10 +383,9 @@ TEST_F(Dhcpv6SrvTest, advertiseOptions) {
|
||||
|
||||
ElementPtr json = Element::fromJSON(config);
|
||||
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW(srv.reset(new NakedDhcpv6Srv(0)));
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
EXPECT_NO_THROW(x = configureDhcp6Server(*srv, json));
|
||||
EXPECT_NO_THROW(x = configureDhcp6Server(srv, json));
|
||||
ASSERT_TRUE(x);
|
||||
comment_ = parseAnswer(rcode_, x);
|
||||
|
||||
@@ -369,7 +398,7 @@ TEST_F(Dhcpv6SrvTest, advertiseOptions) {
|
||||
sol->addOption(clientid);
|
||||
|
||||
// Pass it to the server and get an advertise
|
||||
boost::shared_ptr<Pkt6> adv = srv->processSolicit(sol);
|
||||
boost::shared_ptr<Pkt6> adv = srv.processSolicit(sol);
|
||||
|
||||
// check if we get response at all
|
||||
ASSERT_TRUE(adv);
|
||||
@@ -381,8 +410,8 @@ TEST_F(Dhcpv6SrvTest, advertiseOptions) {
|
||||
|
||||
// Let's now request option with code 1000.
|
||||
// We expect that server will include this option in its reply.
|
||||
boost::shared_ptr<Option6IntArray<uint16_t> >
|
||||
option_oro(new Option6IntArray<uint16_t>(D6O_ORO));
|
||||
boost::shared_ptr<OptionIntArray<uint16_t> >
|
||||
option_oro(new OptionIntArray<uint16_t>(Option::V6, D6O_ORO));
|
||||
// Create vector with two option codes.
|
||||
std::vector<uint16_t> codes(2);
|
||||
codes[0] = 1000;
|
||||
@@ -393,7 +422,7 @@ TEST_F(Dhcpv6SrvTest, advertiseOptions) {
|
||||
sol->addOption(option_oro);
|
||||
|
||||
// Need to process SOLICIT again after requesting new option.
|
||||
adv = srv->processSolicit(sol);
|
||||
adv = srv.processSolicit(sol);
|
||||
ASSERT_TRUE(adv);
|
||||
|
||||
OptionPtr tmp = adv->getOption(D6O_NAME_SERVERS);
|
||||
@@ -444,8 +473,7 @@ TEST_F(Dhcpv6SrvTest, advertiseOptions) {
|
||||
// - server-id
|
||||
// - IA that includes IAADDR
|
||||
TEST_F(Dhcpv6SrvTest, SolicitBasic) {
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW( srv.reset(new NakedDhcpv6Srv(0)) );
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
Pkt6Ptr sol = Pkt6Ptr(new Pkt6(DHCPV6_SOLICIT, 1234));
|
||||
sol->setRemoteAddr(IOAddress("fe80::abcd"));
|
||||
@@ -454,7 +482,7 @@ TEST_F(Dhcpv6SrvTest, SolicitBasic) {
|
||||
sol->addOption(clientid);
|
||||
|
||||
// Pass it to the server and get an advertise
|
||||
Pkt6Ptr reply = srv->processSolicit(sol);
|
||||
Pkt6Ptr reply = srv.processSolicit(sol);
|
||||
|
||||
// check if we get response at all
|
||||
checkResponse(reply, DHCPV6_ADVERTISE, 1234);
|
||||
@@ -467,7 +495,7 @@ TEST_F(Dhcpv6SrvTest, SolicitBasic) {
|
||||
checkIAAddr(addr, addr->getAddress(), subnet_->getPreferred(), subnet_->getValid());
|
||||
|
||||
// check DUIDs
|
||||
checkServerId(reply, srv->getServerID());
|
||||
checkServerId(reply, srv.getServerID());
|
||||
checkClientId(reply, clientid);
|
||||
}
|
||||
|
||||
@@ -487,8 +515,7 @@ TEST_F(Dhcpv6SrvTest, SolicitBasic) {
|
||||
// - server-id
|
||||
// - IA that includes IAADDR
|
||||
TEST_F(Dhcpv6SrvTest, SolicitHint) {
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW( srv.reset(new NakedDhcpv6Srv(0)) );
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
// Let's create a SOLICIT
|
||||
Pkt6Ptr sol = Pkt6Ptr(new Pkt6(DHCPV6_SOLICIT, 1234));
|
||||
@@ -505,7 +532,7 @@ TEST_F(Dhcpv6SrvTest, SolicitHint) {
|
||||
sol->addOption(clientid);
|
||||
|
||||
// Pass it to the server and get an advertise
|
||||
Pkt6Ptr reply = srv->processSolicit(sol);
|
||||
Pkt6Ptr reply = srv.processSolicit(sol);
|
||||
|
||||
// check if we get response at all
|
||||
checkResponse(reply, DHCPV6_ADVERTISE, 1234);
|
||||
@@ -521,7 +548,7 @@ TEST_F(Dhcpv6SrvTest, SolicitHint) {
|
||||
checkIAAddr(addr, hint, subnet_->getPreferred(), subnet_->getValid());
|
||||
|
||||
// check DUIDs
|
||||
checkServerId(reply, srv->getServerID());
|
||||
checkServerId(reply, srv.getServerID());
|
||||
checkClientId(reply, clientid);
|
||||
}
|
||||
|
||||
@@ -541,8 +568,7 @@ TEST_F(Dhcpv6SrvTest, SolicitHint) {
|
||||
// - server-id
|
||||
// - IA that includes IAADDR
|
||||
TEST_F(Dhcpv6SrvTest, SolicitInvalidHint) {
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW( srv.reset(new NakedDhcpv6Srv(0)) );
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
// Let's create a SOLICIT
|
||||
Pkt6Ptr sol = Pkt6Ptr(new Pkt6(DHCPV6_SOLICIT, 1234));
|
||||
@@ -557,7 +583,7 @@ TEST_F(Dhcpv6SrvTest, SolicitInvalidHint) {
|
||||
sol->addOption(clientid);
|
||||
|
||||
// Pass it to the server and get an advertise
|
||||
Pkt6Ptr reply = srv->processSolicit(sol);
|
||||
Pkt6Ptr reply = srv.processSolicit(sol);
|
||||
|
||||
// check if we get response at all
|
||||
checkResponse(reply, DHCPV6_ADVERTISE, 1234);
|
||||
@@ -571,10 +597,13 @@ TEST_F(Dhcpv6SrvTest, SolicitInvalidHint) {
|
||||
EXPECT_TRUE(subnet_->inPool(addr->getAddress()));
|
||||
|
||||
// check DUIDs
|
||||
checkServerId(reply, srv->getServerID());
|
||||
checkServerId(reply, srv.getServerID());
|
||||
checkClientId(reply, clientid);
|
||||
}
|
||||
|
||||
/// @todo: Add a test that client sends hint that is in pool, but currently
|
||||
/// being used by a different client.
|
||||
|
||||
// This test checks that the server is offering different addresses to different
|
||||
// clients in ADVERTISEs. Please note that ADVERTISE is not a guarantee that such
|
||||
// an address will be assigned. Had the pool been very small and contained only
|
||||
@@ -583,8 +612,7 @@ TEST_F(Dhcpv6SrvTest, SolicitInvalidHint) {
|
||||
// client. ADVERTISE is basically saying "if you send me a request, you will
|
||||
// probably get an address like this" (there are no guarantees).
|
||||
TEST_F(Dhcpv6SrvTest, ManySolicits) {
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW( srv.reset(new NakedDhcpv6Srv(0)) );
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
Pkt6Ptr sol1 = Pkt6Ptr(new Pkt6(DHCPV6_SOLICIT, 1234));
|
||||
Pkt6Ptr sol2 = Pkt6Ptr(new Pkt6(DHCPV6_SOLICIT, 2345));
|
||||
@@ -608,9 +636,9 @@ TEST_F(Dhcpv6SrvTest, ManySolicits) {
|
||||
sol3->addOption(clientid3);
|
||||
|
||||
// Pass it to the server and get an advertise
|
||||
Pkt6Ptr reply1 = srv->processSolicit(sol1);
|
||||
Pkt6Ptr reply2 = srv->processSolicit(sol2);
|
||||
Pkt6Ptr reply3 = srv->processSolicit(sol3);
|
||||
Pkt6Ptr reply1 = srv.processSolicit(sol1);
|
||||
Pkt6Ptr reply2 = srv.processSolicit(sol2);
|
||||
Pkt6Ptr reply3 = srv.processSolicit(sol3);
|
||||
|
||||
// check if we get response at all
|
||||
checkResponse(reply1, DHCPV6_ADVERTISE, 1234);
|
||||
@@ -631,9 +659,9 @@ TEST_F(Dhcpv6SrvTest, ManySolicits) {
|
||||
checkIAAddr(addr3, addr3->getAddress(), subnet_->getPreferred(), subnet_->getValid());
|
||||
|
||||
// check DUIDs
|
||||
checkServerId(reply1, srv->getServerID());
|
||||
checkServerId(reply2, srv->getServerID());
|
||||
checkServerId(reply3, srv->getServerID());
|
||||
checkServerId(reply1, srv.getServerID());
|
||||
checkServerId(reply2, srv.getServerID());
|
||||
checkServerId(reply3, srv.getServerID());
|
||||
checkClientId(reply1, clientid1);
|
||||
checkClientId(reply2, clientid2);
|
||||
checkClientId(reply3, clientid3);
|
||||
@@ -663,8 +691,7 @@ TEST_F(Dhcpv6SrvTest, ManySolicits) {
|
||||
// - server-id
|
||||
// - IA that includes IAADDR
|
||||
TEST_F(Dhcpv6SrvTest, RequestBasic) {
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW( srv.reset(new NakedDhcpv6Srv(0)) );
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
// Let's create a REQUEST
|
||||
Pkt6Ptr req = Pkt6Ptr(new Pkt6(DHCPV6_REQUEST, 1234));
|
||||
@@ -681,10 +708,10 @@ TEST_F(Dhcpv6SrvTest, RequestBasic) {
|
||||
req->addOption(clientid);
|
||||
|
||||
// server-id is mandatory in REQUEST
|
||||
req->addOption(srv->getServerID());
|
||||
req->addOption(srv.getServerID());
|
||||
|
||||
// Pass it to the server and hope for a REPLY
|
||||
Pkt6Ptr reply = srv->processRequest(req);
|
||||
Pkt6Ptr reply = srv.processRequest(req);
|
||||
|
||||
// check if we get response at all
|
||||
checkResponse(reply, DHCPV6_REPLY, 1234);
|
||||
@@ -700,7 +727,7 @@ TEST_F(Dhcpv6SrvTest, RequestBasic) {
|
||||
checkIAAddr(addr, hint, subnet_->getPreferred(), subnet_->getValid());
|
||||
|
||||
// check DUIDs
|
||||
checkServerId(reply, srv->getServerID());
|
||||
checkServerId(reply, srv.getServerID());
|
||||
checkClientId(reply, clientid);
|
||||
|
||||
// check that the lease is really in the database
|
||||
@@ -717,8 +744,7 @@ TEST_F(Dhcpv6SrvTest, RequestBasic) {
|
||||
// client. ADVERTISE is basically saying "if you send me a request, you will
|
||||
// probably get an address like this" (there are no guarantees).
|
||||
TEST_F(Dhcpv6SrvTest, ManyRequests) {
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW( srv.reset(new NakedDhcpv6Srv(0)) );
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
Pkt6Ptr req1 = Pkt6Ptr(new Pkt6(DHCPV6_REQUEST, 1234));
|
||||
Pkt6Ptr req2 = Pkt6Ptr(new Pkt6(DHCPV6_REQUEST, 2345));
|
||||
@@ -742,14 +768,14 @@ TEST_F(Dhcpv6SrvTest, ManyRequests) {
|
||||
req3->addOption(clientid3);
|
||||
|
||||
// server-id is mandatory in REQUEST
|
||||
req1->addOption(srv->getServerID());
|
||||
req2->addOption(srv->getServerID());
|
||||
req3->addOption(srv->getServerID());
|
||||
req1->addOption(srv.getServerID());
|
||||
req2->addOption(srv.getServerID());
|
||||
req3->addOption(srv.getServerID());
|
||||
|
||||
// Pass it to the server and get an advertise
|
||||
Pkt6Ptr reply1 = srv->processRequest(req1);
|
||||
Pkt6Ptr reply2 = srv->processRequest(req2);
|
||||
Pkt6Ptr reply3 = srv->processRequest(req3);
|
||||
Pkt6Ptr reply1 = srv.processRequest(req1);
|
||||
Pkt6Ptr reply2 = srv.processRequest(req2);
|
||||
Pkt6Ptr reply3 = srv.processRequest(req3);
|
||||
|
||||
// check if we get response at all
|
||||
checkResponse(reply1, DHCPV6_REPLY, 1234);
|
||||
@@ -770,9 +796,9 @@ TEST_F(Dhcpv6SrvTest, ManyRequests) {
|
||||
checkIAAddr(addr3, addr3->getAddress(), subnet_->getPreferred(), subnet_->getValid());
|
||||
|
||||
// check DUIDs
|
||||
checkServerId(reply1, srv->getServerID());
|
||||
checkServerId(reply2, srv->getServerID());
|
||||
checkServerId(reply3, srv->getServerID());
|
||||
checkServerId(reply1, srv.getServerID());
|
||||
checkServerId(reply2, srv.getServerID());
|
||||
checkServerId(reply3, srv.getServerID());
|
||||
checkClientId(reply1, clientid1);
|
||||
checkClientId(reply2, clientid2);
|
||||
checkClientId(reply3, clientid3);
|
||||
@@ -796,8 +822,7 @@ TEST_F(Dhcpv6SrvTest, ManyRequests) {
|
||||
// - returned REPLY message has IA that includes IAADDR
|
||||
// - lease is actually renewed in LeaseMgr
|
||||
TEST_F(Dhcpv6SrvTest, RenewBasic) {
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW( srv.reset(new NakedDhcpv6Srv(0)) );
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
const IOAddress addr("2001:db8:1:1::cafe:babe");
|
||||
const uint32_t iaid = 234;
|
||||
@@ -838,10 +863,10 @@ TEST_F(Dhcpv6SrvTest, RenewBasic) {
|
||||
req->addOption(clientid);
|
||||
|
||||
// Server-id is mandatory in RENEW
|
||||
req->addOption(srv->getServerID());
|
||||
req->addOption(srv.getServerID());
|
||||
|
||||
// Pass it to the server and hope for a REPLY
|
||||
Pkt6Ptr reply = srv->processRenew(req);
|
||||
Pkt6Ptr reply = srv.processRenew(req);
|
||||
|
||||
// Check if we get response at all
|
||||
checkResponse(reply, DHCPV6_REPLY, 1234);
|
||||
@@ -857,7 +882,7 @@ TEST_F(Dhcpv6SrvTest, RenewBasic) {
|
||||
checkIAAddr(addr_opt, addr, subnet_->getPreferred(), subnet_->getValid());
|
||||
|
||||
// Check DUIDs
|
||||
checkServerId(reply, srv->getServerID());
|
||||
checkServerId(reply, srv.getServerID());
|
||||
checkClientId(reply, clientid);
|
||||
|
||||
// Check that the lease is really in the database
|
||||
@@ -892,9 +917,7 @@ TEST_F(Dhcpv6SrvTest, RenewBasic) {
|
||||
// - returned REPLY message has IA that includes STATUS-CODE
|
||||
// - No lease in LeaseMgr
|
||||
TEST_F(Dhcpv6SrvTest, RenewReject) {
|
||||
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW( srv.reset(new NakedDhcpv6Srv(0)) );
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
const IOAddress addr("2001:db8:1:1::dead");
|
||||
const uint32_t transid = 1234;
|
||||
@@ -922,12 +945,12 @@ TEST_F(Dhcpv6SrvTest, RenewReject) {
|
||||
req->addOption(clientid);
|
||||
|
||||
// Server-id is mandatory in RENEW
|
||||
req->addOption(srv->getServerID());
|
||||
req->addOption(srv.getServerID());
|
||||
|
||||
// Case 1: No lease known to server
|
||||
|
||||
// Pass it to the server and hope for a REPLY
|
||||
Pkt6Ptr reply = srv->processRenew(req);
|
||||
Pkt6Ptr reply = srv.processRenew(req);
|
||||
|
||||
// Check if we get response at all
|
||||
checkResponse(reply, DHCPV6_REPLY, transid);
|
||||
@@ -936,7 +959,7 @@ TEST_F(Dhcpv6SrvTest, RenewReject) {
|
||||
// Check that IA_NA was returned and that there's an address included
|
||||
ia = boost::dynamic_pointer_cast<Option6IA>(tmp);
|
||||
ASSERT_TRUE(ia);
|
||||
checkRejectedIA_NA(ia, STATUS_NoAddrsAvail);
|
||||
checkIA_NAStatusCode(ia, STATUS_NoBinding);
|
||||
|
||||
// Check that there is no lease added
|
||||
l = LeaseMgrFactory::instance().getLease6(addr);
|
||||
@@ -953,14 +976,14 @@ TEST_F(Dhcpv6SrvTest, RenewReject) {
|
||||
ASSERT_TRUE(LeaseMgrFactory::instance().addLease(lease));
|
||||
|
||||
// Pass it to the server and hope for a REPLY
|
||||
reply = srv->processRenew(req);
|
||||
reply = srv.processRenew(req);
|
||||
checkResponse(reply, DHCPV6_REPLY, transid);
|
||||
tmp = reply->getOption(D6O_IA_NA);
|
||||
ASSERT_TRUE(tmp);
|
||||
// Check that IA_NA was returned and that there's an address included
|
||||
ia = boost::dynamic_pointer_cast<Option6IA>(tmp);
|
||||
ASSERT_TRUE(ia);
|
||||
checkRejectedIA_NA(ia, STATUS_NoAddrsAvail);
|
||||
checkIA_NAStatusCode(ia, STATUS_NoBinding);
|
||||
|
||||
// There is a iaid mis-match, so server should respond that there is
|
||||
// no such address to renew.
|
||||
@@ -972,14 +995,14 @@ TEST_F(Dhcpv6SrvTest, RenewReject) {
|
||||
req->addOption(generateClientId(13)); // generate different DUID
|
||||
// (with length 13)
|
||||
|
||||
reply = srv->processRenew(req);
|
||||
reply = srv.processRenew(req);
|
||||
checkResponse(reply, DHCPV6_REPLY, transid);
|
||||
tmp = reply->getOption(D6O_IA_NA);
|
||||
ASSERT_TRUE(tmp);
|
||||
// Check that IA_NA was returned and that there's an address included
|
||||
ia = boost::dynamic_pointer_cast<Option6IA>(tmp);
|
||||
ASSERT_TRUE(ia);
|
||||
checkRejectedIA_NA(ia, STATUS_NoAddrsAvail);
|
||||
checkIA_NAStatusCode(ia, STATUS_NoBinding);
|
||||
|
||||
lease = LeaseMgrFactory::instance().getLease6(addr);
|
||||
ASSERT_TRUE(lease);
|
||||
@@ -989,10 +1012,198 @@ TEST_F(Dhcpv6SrvTest, RenewReject) {
|
||||
EXPECT_TRUE(LeaseMgrFactory::instance().deleteLease(addr));
|
||||
}
|
||||
|
||||
// This test verifies that incoming (positive) RELEASE can be handled properly,
// that a REPLY is generated, that the response has status code and that the
// lease is indeed removed from the database.
//
// expected:
// - returned REPLY message has copy of client-id
// - returned REPLY message has server-id
// - returned REPLY message has IA that does not include an IAADDR
// - lease is actually removed from LeaseMgr
TEST_F(Dhcpv6SrvTest, ReleaseBasic) {
NakedDhcpv6Srv srv(0);
|
||||
|
||||
const IOAddress addr("2001:db8:1:1::cafe:babe");
|
||||
const uint32_t iaid = 234;
|
||||
|
||||
// generateClientId() also sets duid_
|
||||
OptionPtr clientid = generateClientId();
|
||||
|
||||
// Check that the address we are about to use is indeed in pool
|
||||
ASSERT_TRUE(subnet_->inPool(addr));
|
||||
|
||||
// Note that preferred, valid, T1 and T2 timers and CLTT are set to invalid
|
||||
// value on purpose. They should be updated during RENEW.
|
||||
Lease6Ptr lease(new Lease6(Lease6::LEASE_IA_NA, addr, duid_, iaid,
|
||||
501, 502, 503, 504, subnet_->getID(), 0));
|
||||
lease->cltt_ = 1234;
|
||||
ASSERT_TRUE(LeaseMgrFactory::instance().addLease(lease));
|
||||
|
||||
// Check that the lease is really in the database
|
||||
Lease6Ptr l = LeaseMgrFactory::instance().getLease6(addr);
|
||||
ASSERT_TRUE(l);
|
||||
|
||||
// Let's create a RELEASE
|
||||
Pkt6Ptr req = Pkt6Ptr(new Pkt6(DHCPV6_RELEASE, 1234));
|
||||
req->setRemoteAddr(IOAddress("fe80::abcd"));
|
||||
boost::shared_ptr<Option6IA> ia = generateIA(iaid, 1500, 3000);
|
||||
|
||||
OptionPtr released_addr_opt(new Option6IAAddr(D6O_IAADDR, addr, 300, 500));
|
||||
ia->addOption(released_addr_opt);
|
||||
req->addOption(ia);
|
||||
req->addOption(clientid);
|
||||
|
||||
// Server-id is mandatory in RELEASE
|
||||
req->addOption(srv.getServerID());
|
||||
|
||||
// Pass it to the server and hope for a REPLY
|
||||
Pkt6Ptr reply = srv.processRelease(req);
|
||||
|
||||
// Check if we get response at all
|
||||
checkResponse(reply, DHCPV6_REPLY, 1234);
|
||||
|
||||
OptionPtr tmp = reply->getOption(D6O_IA_NA);
|
||||
ASSERT_TRUE(tmp);
|
||||
|
||||
// Check that IA_NA was returned and that there's an address included
|
||||
ia = boost::dynamic_pointer_cast<Option6IA>(tmp);
|
||||
checkIA_NAStatusCode(ia, STATUS_Success);
|
||||
checkMsgStatusCode(reply, STATUS_Success);
|
||||
|
||||
// There should be no address returned in RELEASE (see RFC3315, 18.2.6)
|
||||
EXPECT_FALSE(tmp->getOption(D6O_IAADDR));
|
||||
|
||||
// Check DUIDs
|
||||
checkServerId(reply, srv.getServerID());
|
||||
checkClientId(reply, clientid);
|
||||
|
||||
// Check that the lease is really gone in the database
|
||||
// get lease by address
|
||||
l = LeaseMgrFactory::instance().getLease6(addr);
|
||||
ASSERT_FALSE(l);
|
||||
|
||||
// get lease by subnetid/duid/iaid combination
|
||||
l = LeaseMgrFactory::instance().getLease6(*duid_, iaid, subnet_->getID());
|
||||
ASSERT_FALSE(l);
|
||||
}
|
||||
|
||||
// This test verifies that incoming (invalid) RELEASE can be handled properly.
//
// This test checks 3 scenarios:
// 1. there is no such lease at all
// 2. there is such a lease, but it is assigned to a different IAID
// 3. there is such a lease, but it belongs to a different client
//
// expected:
// - returned REPLY message has copy of client-id
// - returned REPLY message has server-id
// - returned REPLY message has IA that includes STATUS-CODE
// - No lease in LeaseMgr
TEST_F(Dhcpv6SrvTest, ReleaseReject) {

NakedDhcpv6Srv srv(0);
|
||||
|
||||
const IOAddress addr("2001:db8:1:1::dead");
|
||||
const uint32_t transid = 1234;
|
||||
const uint32_t valid_iaid = 234;
|
||||
const uint32_t bogus_iaid = 456;
|
||||
|
||||
// Quick sanity check that the address we're about to use is ok
|
||||
ASSERT_TRUE(subnet_->inPool(addr));
|
||||
|
||||
// GenerateClientId() also sets duid_
|
||||
OptionPtr clientid = generateClientId();
|
||||
|
||||
// Check that the lease is NOT in the database
|
||||
Lease6Ptr l = LeaseMgrFactory::instance().getLease6(addr);
|
||||
ASSERT_FALSE(l);
|
||||
|
||||
// Let's create a RELEASE
|
||||
Pkt6Ptr req = Pkt6Ptr(new Pkt6(DHCPV6_RELEASE, transid));
|
||||
req->setRemoteAddr(IOAddress("fe80::abcd"));
|
||||
boost::shared_ptr<Option6IA> ia = generateIA(bogus_iaid, 1500, 3000);
|
||||
|
||||
OptionPtr released_addr_opt(new Option6IAAddr(D6O_IAADDR, addr, 300, 500));
|
||||
ia->addOption(released_addr_opt);
|
||||
req->addOption(ia);
|
||||
req->addOption(clientid);
|
||||
|
||||
// Server-id is mandatory in RELEASE
|
||||
req->addOption(srv.getServerID());
|
||||
|
||||
// Case 1: No lease known to server
|
||||
SCOPED_TRACE("CASE 1: No lease known to server");
|
||||
|
||||
// Pass it to the server and hope for a REPLY
|
||||
Pkt6Ptr reply = srv.processRelease(req);
|
||||
|
||||
// Check if we get response at all
|
||||
checkResponse(reply, DHCPV6_REPLY, transid);
|
||||
OptionPtr tmp = reply->getOption(D6O_IA_NA);
|
||||
ASSERT_TRUE(tmp);
|
||||
// Check that IA_NA was returned and that there's an address included
|
||||
ia = boost::dynamic_pointer_cast<Option6IA>(tmp);
|
||||
ASSERT_TRUE(ia);
|
||||
checkIA_NAStatusCode(ia, STATUS_NoBinding);
|
||||
checkMsgStatusCode(reply, STATUS_NoBinding);
|
||||
|
||||
// Check that the lease is not there
|
||||
l = LeaseMgrFactory::instance().getLease6(addr);
|
||||
ASSERT_FALSE(l);
|
||||
|
||||
// CASE 2: Lease is known and belongs to this client, but to a different IAID
|
||||
SCOPED_TRACE("CASE 2: Lease is known and belongs to this client, but to a different IAID");
|
||||
|
||||
Lease6Ptr lease(new Lease6(Lease6::LEASE_IA_NA, addr, duid_, valid_iaid,
|
||||
501, 502, 503, 504, subnet_->getID(), 0));
|
||||
ASSERT_TRUE(LeaseMgrFactory::instance().addLease(lease));
|
||||
|
||||
// Pass it to the server and hope for a REPLY
|
||||
reply = srv.processRelease(req);
|
||||
checkResponse(reply, DHCPV6_REPLY, transid);
|
||||
tmp = reply->getOption(D6O_IA_NA);
|
||||
ASSERT_TRUE(tmp);
|
||||
// Check that IA_NA was returned and that there's an address included
|
||||
ia = boost::dynamic_pointer_cast<Option6IA>(tmp);
|
||||
ASSERT_TRUE(ia);
|
||||
checkIA_NAStatusCode(ia, STATUS_NoBinding);
|
||||
checkMsgStatusCode(reply, STATUS_NoBinding);
|
||||
|
||||
// Check that the lease is still there
|
||||
l = LeaseMgrFactory::instance().getLease6(addr);
|
||||
ASSERT_TRUE(l);
|
||||
|
||||
// CASE 3: Lease belongs to a client with different client-id
|
||||
SCOPED_TRACE("CASE 3: Lease belongs to a client with different client-id");
|
||||
|
||||
req->delOption(D6O_CLIENTID);
|
||||
ia = boost::dynamic_pointer_cast<Option6IA>(req->getOption(D6O_IA_NA));
|
||||
ia->setIAID(valid_iaid); // Now iaid in renew matches that in leasemgr
|
||||
req->addOption(generateClientId(13)); // generate different DUID
|
||||
// (with length 13)
|
||||
|
||||
reply = srv.processRelease(req);
|
||||
checkResponse(reply, DHCPV6_REPLY, transid);
|
||||
tmp = reply->getOption(D6O_IA_NA);
|
||||
ASSERT_TRUE(tmp);
|
||||
// Check that IA_NA was returned and that there's an address included
|
||||
ia = boost::dynamic_pointer_cast<Option6IA>(tmp);
|
||||
ASSERT_TRUE(ia);
|
||||
checkIA_NAStatusCode(ia, STATUS_NoBinding);
|
||||
checkMsgStatusCode(reply, STATUS_NoBinding);
|
||||
|
||||
// Check that the lease is still there
|
||||
l = LeaseMgrFactory::instance().getLease6(addr);
|
||||
ASSERT_TRUE(l);
|
||||
|
||||
// Finally, let's cleanup the database
|
||||
EXPECT_TRUE(LeaseMgrFactory::instance().deleteLease(addr));
|
||||
}
|
||||
|
||||
// This test verifies if the status code option is generated properly.
|
||||
TEST_F(Dhcpv6SrvTest, StatusCode) {
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW( srv.reset(new NakedDhcpv6Srv(0)) );
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
// a dummy content for client-id
|
||||
uint8_t expected[] = {
|
||||
@@ -1002,7 +1213,7 @@ TEST_F(Dhcpv6SrvTest, StatusCode) {
|
||||
0x41, 0x42, 0x43, 0x44, 0x45 // string value ABCDE
|
||||
};
|
||||
// Create the option.
|
||||
OptionPtr status = srv->createStatusCode(3, "ABCDE");
|
||||
OptionPtr status = srv.createStatusCode(3, "ABCDE");
|
||||
// Allocate an output buffer. We will store the option
|
||||
// in wire format here.
|
||||
OutputBuffer buf(sizeof(expected));
|
||||
@@ -1016,34 +1227,34 @@ TEST_F(Dhcpv6SrvTest, StatusCode) {
|
||||
|
||||
// This test verifies if the sanityCheck() really checks options presence.
|
||||
TEST_F(Dhcpv6SrvTest, sanityCheck) {
|
||||
boost::scoped_ptr<NakedDhcpv6Srv> srv;
|
||||
ASSERT_NO_THROW( srv.reset(new NakedDhcpv6Srv(0)) );
|
||||
NakedDhcpv6Srv srv(0);
|
||||
|
||||
Pkt6Ptr pkt = Pkt6Ptr(new Pkt6(DHCPV6_SOLICIT, 1234));
|
||||
|
||||
// check that the packets originating from local addresses can be
|
||||
// Set link-local sender address, so appropriate subnet can be
|
||||
// selected for this packet.
|
||||
pkt->setRemoteAddr(IOAddress("fe80::abcd"));
|
||||
|
||||
// client-id is optional for information-request, so
|
||||
EXPECT_NO_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::OPTIONAL, Dhcpv6Srv::OPTIONAL));
|
||||
EXPECT_NO_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::OPTIONAL, Dhcpv6Srv::OPTIONAL));
|
||||
|
||||
// empty packet, no client-id, no server-id
|
||||
EXPECT_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::FORBIDDEN),
|
||||
EXPECT_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::FORBIDDEN),
|
||||
RFCViolation);
|
||||
|
||||
// This doesn't make much sense, but let's check it for completeness
|
||||
EXPECT_NO_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::FORBIDDEN, Dhcpv6Srv::FORBIDDEN));
|
||||
EXPECT_NO_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::FORBIDDEN, Dhcpv6Srv::FORBIDDEN));
|
||||
|
||||
OptionPtr clientid = generateClientId();
|
||||
pkt->addOption(clientid);
|
||||
|
||||
// client-id is mandatory, server-id is forbidden (as in SOLICIT or REBIND)
|
||||
EXPECT_NO_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::FORBIDDEN));
|
||||
EXPECT_NO_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::FORBIDDEN));
|
||||
|
||||
pkt->addOption(srv->getServerID());
|
||||
pkt->addOption(srv.getServerID());
|
||||
|
||||
// both client-id and server-id are mandatory (as in REQUEST, RENEW, RELEASE, DECLINE)
|
||||
EXPECT_NO_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::MANDATORY));
|
||||
EXPECT_NO_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::MANDATORY));
|
||||
|
||||
// sane section ends here, let's do some negative tests as well
|
||||
|
||||
@@ -1051,13 +1262,13 @@ TEST_F(Dhcpv6SrvTest, sanityCheck) {
|
||||
pkt->addOption(clientid);
|
||||
|
||||
// with more than one client-id it should throw, no matter what
|
||||
EXPECT_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::OPTIONAL, Dhcpv6Srv::OPTIONAL),
|
||||
EXPECT_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::OPTIONAL, Dhcpv6Srv::OPTIONAL),
|
||||
RFCViolation);
|
||||
EXPECT_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::OPTIONAL),
|
||||
EXPECT_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::OPTIONAL),
|
||||
RFCViolation);
|
||||
EXPECT_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::OPTIONAL, Dhcpv6Srv::MANDATORY),
|
||||
EXPECT_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::OPTIONAL, Dhcpv6Srv::MANDATORY),
|
||||
RFCViolation);
|
||||
EXPECT_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::MANDATORY),
|
||||
EXPECT_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::MANDATORY),
|
||||
RFCViolation);
|
||||
|
||||
pkt->delOption(D6O_CLIENTID);
|
||||
@@ -1066,20 +1277,21 @@ TEST_F(Dhcpv6SrvTest, sanityCheck) {
|
||||
// again we have only one client-id
|
||||
|
||||
// let's try different type of insanity - several server-ids
|
||||
pkt->addOption(srv->getServerID());
|
||||
pkt->addOption(srv->getServerID());
|
||||
pkt->addOption(srv.getServerID());
|
||||
pkt->addOption(srv.getServerID());
|
||||
|
||||
// with more than one server-id it should throw, no matter what
|
||||
EXPECT_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::OPTIONAL, Dhcpv6Srv::OPTIONAL),
|
||||
EXPECT_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::OPTIONAL, Dhcpv6Srv::OPTIONAL),
|
||||
RFCViolation);
|
||||
EXPECT_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::OPTIONAL),
|
||||
EXPECT_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::OPTIONAL),
|
||||
RFCViolation);
|
||||
EXPECT_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::OPTIONAL, Dhcpv6Srv::MANDATORY),
|
||||
EXPECT_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::OPTIONAL, Dhcpv6Srv::MANDATORY),
|
||||
RFCViolation);
|
||||
EXPECT_THROW(srv->sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::MANDATORY),
|
||||
EXPECT_THROW(srv.sanityCheck(pkt, Dhcpv6Srv::MANDATORY, Dhcpv6Srv::MANDATORY),
|
||||
RFCViolation);
|
||||
|
||||
|
||||
}
|
||||
|
||||
/// @todo: Add more negative tests for processX(), e.g. extend sanityCheck() test
|
||||
/// to call processX() methods.
|
||||
|
||||
} // end of anonymous namespace
|
||||
|
1
src/bin/loadzone/.gitignore
vendored
@@ -1,4 +1,5 @@
|
||||
/b10-loadzone
|
||||
/b10-loadzone.py
|
||||
/loadzone.py
|
||||
/run_loadzone.sh
|
||||
/b10-loadzone.8
|
||||
|
@@ -1,12 +1,17 @@
|
||||
SUBDIRS = . tests/correct tests/error
|
||||
SUBDIRS = . tests
|
||||
bin_SCRIPTS = b10-loadzone
|
||||
noinst_SCRIPTS = run_loadzone.sh
|
||||
|
||||
CLEANFILES = b10-loadzone
|
||||
nodist_pylogmessage_PYTHON = $(PYTHON_LOGMSGPKG_DIR)/work/loadzone_messages.py
|
||||
pylogmessagedir = $(pyexecdir)/isc/log_messages/
|
||||
|
||||
CLEANFILES = b10-loadzone loadzone.pyc
|
||||
CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/loadzone_messages.py
|
||||
CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/loadzone_messages.pyc
|
||||
|
||||
man_MANS = b10-loadzone.8
|
||||
DISTCLEANFILES = $(man_MANS)
|
||||
EXTRA_DIST = $(man_MANS) b10-loadzone.xml
|
||||
EXTRA_DIST = $(man_MANS) b10-loadzone.xml loadzone_messages.mes
|
||||
|
||||
if GENERATE_DOCS
|
||||
|
||||
@@ -21,10 +26,13 @@ $(man_MANS):
|
||||
|
||||
endif
|
||||
|
||||
b10-loadzone: b10-loadzone.py
|
||||
$(SED) -e "s|@@PYTHONPATH@@|@pyexecdir@|" \
|
||||
-e "s|@@LOCALSTATEDIR@@|$(localstatedir)|" \
|
||||
-e "s|@@LIBEXECDIR@@|$(pkglibexecdir)|" b10-loadzone.py >$@
|
||||
# Define rule to build logging source files from message file
|
||||
$(PYTHON_LOGMSGPKG_DIR)/work/loadzone_messages.py : loadzone_messages.mes
|
||||
$(top_builddir)/src/lib/log/compiler/message \
|
||||
-d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/loadzone_messages.mes
|
||||
|
||||
b10-loadzone: loadzone.py $(PYTHON_LOGMSGPKG_DIR)/work/loadzone_messages.py
|
||||
$(SED) -e "s|@@PYTHONPATH@@|@pyexecdir@|" loadzone.py >$@
|
||||
chmod a+x $@
|
||||
|
||||
EXTRA_DIST += tests/normal/README
|
||||
@@ -48,6 +56,7 @@ EXTRA_DIST += tests/normal/sql1.example.com.signed
|
||||
EXTRA_DIST += tests/normal/sql2.example.com
|
||||
EXTRA_DIST += tests/normal/sql2.example.com.signed
|
||||
|
||||
pytest:
|
||||
$(SHELL) tests/correct/correct_test.sh
|
||||
$(SHELL) tests/error/error_test.sh
|
||||
CLEANDIRS = __pycache__
|
||||
|
||||
clean-local:
|
||||
rm -rf $(CLEANDIRS)
|
||||
|
@@ -1,16 +1,3 @@
|
||||
Support optional origin in $INCLUDE:
|
||||
$INCLUDE filename origin
|
||||
|
||||
Support optional comment in $INCLUDE:
|
||||
$INCLUDE filename origin comment
|
||||
|
||||
Support optional comment in $TTL (RFC 2308):
|
||||
$TTL number comment
|
||||
|
||||
Do not assume "." is origin if origin is not set and sees a @ or
|
||||
a label without a ".". It should probably fail. (Don't assume a
|
||||
mistake means it is a root level label.)
|
||||
|
||||
Add verbose option to show what it is adding, not necessarily
|
||||
in master file format, but in the context of the data source.
|
||||
|
||||
|
@@ -1,94 +0,0 @@
|
||||
#!@PYTHON@
|
||||
|
||||
# Copyright (C) 2010 Internet Systems Consortium.
|
||||
#
|
||||
# Permission to use, copy, modify, and distribute this software for any
|
||||
# purpose with or without fee is hereby granted, provided that the above
|
||||
# copyright notice and this permission notice appear in all copies.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
|
||||
# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
|
||||
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
|
||||
# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
|
||||
# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
|
||||
# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
|
||||
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
|
||||
# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
import sys; sys.path.append ('@@PYTHONPATH@@')
|
||||
import re, getopt
|
||||
import isc.datasrc
|
||||
import isc.util.process
|
||||
from isc.datasrc.master import MasterFile
|
||||
import time
|
||||
import os
|
||||
|
||||
isc.util.process.rename()
|
||||
|
||||
#########################################################################
|
||||
# usage: print usage note and exit
|
||||
#########################################################################
|
||||
def usage():
|
||||
print("Usage: %s [-d <database>] [-o <origin>] <file>" % sys.argv[0], \
|
||||
file=sys.stderr)
|
||||
exit(1)
|
||||
|
||||
#########################################################################
|
||||
# main
|
||||
#########################################################################
|
||||
def main():
|
||||
try:
|
||||
opts, args = getopt.getopt(sys.argv[1:], "d:o:h", \
|
||||
["dbfile", "origin", "help"])
|
||||
except getopt.GetoptError as e:
|
||||
print(str(e))
|
||||
usage()
|
||||
exit(2)
|
||||
|
||||
dbfile = '@@LOCALSTATEDIR@@/@PACKAGE@/zone.sqlite3'
|
||||
initial_origin = ''
|
||||
for o, a in opts:
|
||||
if o in ("-d", "--dbfile"):
|
||||
dbfile = a
|
||||
elif o in ("-o", "--origin"):
|
||||
if a[-1] != '.':
|
||||
a += '.'
|
||||
initial_origin = a
|
||||
elif o in ("-h", "--help"):
|
||||
usage()
|
||||
else:
|
||||
assert False, "unhandled option"
|
||||
|
||||
if len(args) != 1:
|
||||
usage()
|
||||
zonefile = args[0]
|
||||
verbose = os.isatty(sys.stdout.fileno())
|
||||
try:
|
||||
master = MasterFile(zonefile, initial_origin, verbose)
|
||||
except Exception as e:
|
||||
sys.stderr.write("Error reading zone file: %s\n" % str(e))
|
||||
exit(1)
|
||||
|
||||
try:
|
||||
zone = master.zonename()
|
||||
if verbose:
|
||||
sys.stdout.write("Using SQLite3 database file %s\n" % dbfile)
|
||||
sys.stdout.write("Zone name is %s\n" % zone)
|
||||
sys.stdout.write("Loading file \"%s\"\n" % zonefile)
|
||||
except Exception as e:
|
||||
sys.stdout.write("\n")
|
||||
sys.stderr.write("Error reading zone file: %s\n" % str(e))
|
||||
exit(1)
|
||||
|
||||
try:
|
||||
isc.datasrc.sqlite3_ds.load(dbfile, zone, master.zonedata)
|
||||
if verbose:
|
||||
master.closeverbose()
|
||||
sys.stdout.write("\nDone.\n")
|
||||
except Exception as e:
|
||||
sys.stdout.write("\n")
|
||||
sys.stderr.write("Error loading database: %s\n"% str(e))
|
||||
exit(1)
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
@@ -2,7 +2,7 @@
|
||||
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
|
||||
[<!ENTITY mdash "—">]>
|
||||
<!--
|
||||
- Copyright (C) 2010 Internet Systems Consortium, Inc. ("ISC")
|
||||
- Copyright (C) 2012 Internet Systems Consortium, Inc. ("ISC")
|
||||
-
|
||||
- Permission to use, copy, modify, and/or distribute this software for any
|
||||
- purpose with or without fee is hereby granted, provided that the above
|
||||
@@ -20,7 +20,7 @@
|
||||
<refentry>
|
||||
|
||||
<refentryinfo>
|
||||
<date>March 26, 2012</date>
|
||||
<date>December 15, 2012</date>
|
||||
</refentryinfo>
|
||||
|
||||
<refmeta>
|
||||
@@ -36,7 +36,7 @@
|
||||
|
||||
<docinfo>
|
||||
<copyright>
|
||||
<year>2010</year>
|
||||
<year>2012</year>
|
||||
<holder>Internet Systems Consortium, Inc. ("ISC")</holder>
|
||||
</copyright>
|
||||
</docinfo>
|
||||
@@ -44,9 +44,13 @@
|
||||
<refsynopsisdiv>
|
||||
<cmdsynopsis>
|
||||
<command>b10-loadzone</command>
|
||||
<arg><option>-d <replaceable class="parameter">database</replaceable></option></arg>
|
||||
<arg><option>-o <replaceable class="parameter">origin</replaceable></option></arg>
|
||||
<arg choice="req">filename</arg>
|
||||
<arg><option>-c <replaceable class="parameter">datasrc_config</replaceable></option></arg>
|
||||
<arg><option>-d <replaceable class="parameter">debug_level</replaceable></option></arg>
|
||||
<arg><option>-i <replaceable class="parameter">report_interval</replaceable></option></arg>
|
||||
<arg><option>-t <replaceable class="parameter">datasrc_type</replaceable></option></arg>
|
||||
<arg><option>-C <replaceable class="parameter">zone_class</replaceable></option></arg>
|
||||
<arg choice="req">zone name</arg>
|
||||
<arg choice="req">zone file</arg>
|
||||
</cmdsynopsis>
|
||||
</refsynopsisdiv>
|
||||
|
||||
@@ -63,23 +67,41 @@
<para>
Some control entries (aka directives) are supported.
$ORIGIN is followed by a domain name, and sets the the origin
$ORIGIN is followed by a domain name, and sets the origin
that will be used for relative domain names in subsequent records.
$INCLUDE is followed by a filename to load.
<!-- TODO: and optionally a
domain name used to set the relative domain name origin. -->
The previous origin is restored after the file is included.
<!-- the current domain name is also restored -->
$TTL is followed by a time-to-live value which is used
by any following records that don't specify a TTL.
</para>

<para>
If the specified zone does not exist in the specified data
source, <command>b10-loadzone</command> will first create a
new empty zone in the data source, then fill it with the RRs
given in the specified master zone file. In this case, if
loading fails for some reason, the creation of the new zone
is also canceled.
<note><simpara>
Due to an implementation limitation, the current version
does not make the zone creation and subsequent loading an
atomic operation; an empty zone will be visible and used by
other applications (e.g., the <command>b10-auth</command>
authoritative server) while loading. If this is an issue,
make sure the initial loading of a new zone is done before
starting other BIND 10 applications.
</simpara></note>
</para>

<para>
When re-loading an existing zone, the prior version is completely
removed. While the new version of the zone is being loaded, the old
version remains accessible to queries. After the new version is
completely loaded, the old version is swapped out and replaced
with the new one in a single operation.
with the new one in a single operation. If loading fails for
some reason, the loaded RRs will be effectively deleted, and the
old version will still remain accessible for other applications.
</para>
</refsect1>
|
||||
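A minimal sketch (not part of the committed manual) of the create-then-load behaviour described above, expressed with the same isc.datasrc calls that the new loadzone.py.in later in this commit uses; the zone name, zone file and database path are placeholder values.

    from isc.dns import Name
    from isc.datasrc import DataSourceClient, ZoneLoader

    config = '{"database_file": "/usr/local/var/bind10/zone.sqlite3"}'  # placeholder path
    zone_name = Name("example.org")                                     # placeholder zone

    client = DataSourceClient("sqlite3", config)
    created = client.create_zone(zone_name)      # True only if the zone did not exist yet
    loader = ZoneLoader(client, zone_name, "example.org.zone")
    while not loader.load_incremental(10000):    # load in chunks; returns True when done
        pass                                     # progress reporting would go here
    # On success the new version replaces the old one in a single swap; on failure the
    # partially loaded RRs are discarded and, if 'created' is True, the new empty zone
    # has to be removed again (the tool does this with an sqlite3-specific workaround).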
@@ -88,21 +110,82 @@
|
||||
<title>ARGUMENTS</title>

<variablelist>

<varlistentry>
<term>-d <replaceable class="parameter">database</replaceable> </term>
<term>-c <replaceable class="parameter">datasrc_config</replaceable></term>
<listitem><para>
Defines the filename for the database.
The default is
<filename>/usr/local/var/bind10-devel/zone.sqlite3</filename>.
<!-- TODO: fix filename -->
Specifies the configuration of the data source in JSON
format. The configuration contents depend on the type of
the data source, and are the same as what would be
specified for the BIND 10 servers (see the data source
configuration section of the BIND 10 guide). For example,
for an SQLite3 data source, it would look like
'{"database_file": "path-to-sqlite3-db-file"}'.
<note>
<simpara>For an SQLite3 data source with the default DB file,
this option can be omitted; in other cases, including
any other types of data sources when supported,
this option is currently mandatory in practice.
In a future version it will be possible to retrieve the
configuration from the BIND 10 server configuration (if
it exists).
</simpara></note>
</para></listitem>
</varlistentry>

<varlistentry>
<term>-o <replaceable class="parameter">origin</replaceable></term>
<term>-d <replaceable class="parameter">debug_level</replaceable> </term>
<listitem><para>
Defines the default origin for the zone file records.
Enables debug level logging at the specified
level. By default, only log messages at the severity of
informational or higher levels will be produced.
</para></listitem>
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term>-i <replaceable class="parameter">report_interval</replaceable></term>
|
||||
<listitem><para>
|
||||
Specifies the interval of status update by the number of RRs
|
||||
loaded in the interval.
|
||||
The <command>b10-loadzone</command> tool periodically
|
||||
reports the progress of loading with the total number of
|
||||
loaded RRs and elapsed time. This option specifies the
|
||||
interval of the reports. If set to 0, status reports will
|
||||
be suppressed. The default is 10,000.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term>-t <replaceable class="parameter">datasrc_type</replaceable></term>
|
||||
<listitem><para>
|
||||
Specifies the type of data source to store the zone.
|
||||
Currently, only the "sqlite3" type is supported (which is
|
||||
the default of this option), which means the SQLite3 data
|
||||
source.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term>-C <replaceable class="parameter">zone_class</replaceable></term>
|
||||
<listitem><para>
|
||||
Specifies the RR class of the zone.
|
||||
Currently, only class IN is supported (which is the default
|
||||
of this option) due to limitation of the underlying data
|
||||
source implementation.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><replaceable class="parameter">zone name</replaceable></term>
|
||||
<listitem><para>
|
||||
The name of the zone to create or update. This must be a valid DNS
|
||||
domain name.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
<varlistentry>
|
||||
<term><replaceable class="parameter">zone file</replaceable></term>
|
||||
<listitem><para>
|
||||
A path to the master zone file to be loaded.
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
||||
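A minimal sketch (using only the Python standard library) of composing the JSON string that the -c option described above expects for an SQLite3 data source; the database path is a placeholder:

    import json

    db_file = "/usr/local/var/bind10/zone.sqlite3"            # placeholder path
    datasrc_config = json.dumps({"database_file": db_file})
    # -> '{"database_file": "/usr/local/var/bind10/zone.sqlite3"}'
    # passed on the command line as:
    #   b10-loadzone -c '{"database_file": "..."}' <zone name> <zone file>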
@@ -131,8 +214,31 @@
|
||||
<refsect1>
<title>AUTHORS</title>
<para>
The <command>b10-loadzone</command> tool was initial written
by Evan Hunt of ISC.
A prior version of the <command>b10-loadzone</command> tool was
written by Evan Hunt of ISC.
The new version that this manual refers to was rewritten from
scratch by the BIND 10 development team around December 2012.
</para>
</refsect1>

<refsect1>
<title>BUGS</title>
<para>
As of the initial implementation, the underlying library that
this tool uses does not fully validate the loaded zone; for
example, loading will succeed even if it doesn't have the SOA or
NS record at its origin name. Such checks will be implemented
in a near future version, but until then,
<command>b10-loadzone</command> checks for the existence of the
SOA and NS records by itself. However, <command>b10-loadzone</command>
only warns about it, and does not cancel the load itself.
If this warning message is produced, it's the user's
responsibility to fix the errors and reload the zone. When the
library is updated with post-load checks, they will be more
sophisticated and such a zone won't be successfully loaded.
</para>
<para>
There are some other issues noted in the DESCRIPTION section.
</para>
</refsect1>
|
||||
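The post-load check described in the BUGS section amounts to looking up the SOA and NS records at the zone origin. A minimal sketch with the isc.datasrc API, mirroring the _post_load_checks() method added later in this commit; the helper name and its arguments are illustrative only:

    from isc.dns import Name, RRType
    from isc.datasrc import DataSourceClient

    def has_apex_record(datasrc_type, datasrc_config, zone_name, rrtype):
        # Look up 'rrtype' at the zone origin and report whether it exists.
        client = DataSourceClient(datasrc_type, datasrc_config)
        _, finder = client.find_zone(zone_name)
        result = finder.find(zone_name, rrtype)[0]
        return result == finder.SUCCESS

    # e.g. warn if not has_apex_record("sqlite3", config, Name("example.org"), RRType.SOA())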
</refentry><!--
|
||||
|
342
src/bin/loadzone/loadzone.py.in
Executable file
@@ -0,0 +1,342 @@
|
||||
#!@PYTHON@
|
||||
|
||||
# Copyright (C) 2012 Internet Systems Consortium.
|
||||
#
|
||||
# Permission to use, copy, modify, and distribute this software for any
|
||||
# purpose with or without fee is hereby granted, provided that the above
|
||||
# copyright notice and this permission notice appear in all copies.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
|
||||
# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
|
||||
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
|
||||
# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
|
||||
# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
|
||||
# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
|
||||
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
|
||||
# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
import sys
|
||||
sys.path.append('@@PYTHONPATH@@')
|
||||
import time
|
||||
import signal
|
||||
from optparse import OptionParser
|
||||
from isc.dns import *
|
||||
from isc.datasrc import *
|
||||
import isc.util.process
|
||||
import isc.log
|
||||
from isc.log_messages.loadzone_messages import *
|
||||
|
||||
isc.util.process.rename()
|
||||
|
||||
# These are needed for logger settings
|
||||
import bind10_config
|
||||
import json
|
||||
from isc.config import module_spec_from_file
|
||||
from isc.config.ccsession import path_search
|
||||
|
||||
isc.log.init("b10-loadzone")
|
||||
logger = isc.log.Logger("loadzone")
|
||||
|
||||
# The default value for the interval of progress report in terms of the
|
||||
# number of RRs loaded in that interval. Arbitrary choice, but intended to
|
||||
# be reasonably small to handle emergency exit.
|
||||
LOAD_INTERVAL_DEFAULT = 10000
|
||||
|
||||
class BadArgument(Exception):
|
||||
'''An exception indicating an error in command line argument.
|
||||
|
||||
'''
|
||||
pass
|
||||
|
||||
class LoadFailure(Exception):
|
||||
'''An exception indicating failure in loading operation.
|
||||
|
||||
'''
|
||||
pass
|
||||
|
||||
def set_cmd_options(parser):
|
||||
'''Helper function to set command-line options.
|
||||
|
||||
'''
|
||||
parser.add_option("-c", "--datasrc-conf", dest="conf", action="store",
|
||||
help="""configuration of datasrc to load the zone in.
|
||||
Example: '{"database_file": "/path/to/dbfile/db.sqlite3"}'""",
|
||||
metavar='CONFIG')
|
||||
parser.add_option("-d", "--debug", dest="debug_level",
|
||||
type='int', action="store", default=None,
|
||||
help="enable debug logs with the specified level [0-99]")
|
||||
parser.add_option("-i", "--report-interval", dest="report_interval",
|
||||
type='int', action="store",
|
||||
default=LOAD_INTERVAL_DEFAULT,
|
||||
help="""report logs progress per specified number of RRs
|
||||
(specify 0 to suppress report) [default: %default]""")
|
||||
parser.add_option("-t", "--datasrc-type", dest="datasrc_type",
|
||||
action="store", default='sqlite3',
|
||||
help="""type of data source (e.g., 'sqlite3')\n
|
||||
[default: %default]""")
|
||||
parser.add_option("-C", "--class", dest="zone_class", action="store",
|
||||
default='IN',
|
||||
help="""RR class of the zone; currently must be 'IN'
|
||||
[default: %default]""")
|
||||
|
||||
class LoadZoneRunner:
|
||||
'''Main logic for the loadzone.
|
||||
|
||||
This is implemented as a class mainly for the convenience of tests.
|
||||
|
||||
'''
|
||||
def __init__(self, command_args):
|
||||
self.__command_args = command_args
|
||||
self.__loaded_rrs = 0
|
||||
self.__interrupted = False # will be set to True on receiving signal
|
||||
|
||||
# system-wide log configuration. We need to configure logging this
|
||||
# way so that the logging policy applies to underlying libraries, too.
|
||||
self.__log_spec = json.dumps(isc.config.module_spec_from_file(
|
||||
path_search('logging.spec', bind10_config.PLUGIN_PATHS)).
|
||||
get_full_spec())
|
||||
# "severity" and "debuglevel" are the tunable parameters, which will
|
||||
# be set in _config_log().
|
||||
self.__log_conf_base = {"loggers":
|
||||
[{"name": "*",
|
||||
"output_options":
|
||||
[{"output": "stderr",
|
||||
"destination": "console"}]}]}
|
||||
|
||||
# These are essentially private, and defined as "protected" for the
|
||||
# convenience of tests inspecting them
|
||||
self._zone_class = None
|
||||
self._zone_name = None
|
||||
self._zone_file = None
|
||||
self._datasrc_config = None
|
||||
self._datasrc_type = None
|
||||
self._log_severity = 'INFO'
|
||||
self._log_debuglevel = 0
|
||||
self._report_interval = LOAD_INTERVAL_DEFAULT
|
||||
|
||||
self._config_log()
|
||||
|
||||
def _config_log(self):
|
||||
'''Configure logging policy.
|
||||
|
||||
This is essentially private, but defined as "protected" for tests.
|
||||
|
||||
'''
|
||||
self.__log_conf_base['loggers'][0]['severity'] = self._log_severity
|
||||
self.__log_conf_base['loggers'][0]['debuglevel'] = self._log_debuglevel
|
||||
isc.log.log_config_update(json.dumps(self.__log_conf_base),
|
||||
self.__log_spec)
|
||||
|
||||
def _parse_args(self):
|
||||
'''Parse command line options and other arguments.
|
||||
|
||||
This is essentially private, but defined as "protected" for tests.
|
||||
|
||||
'''
|
||||
|
||||
usage_txt = \
|
||||
'usage: %prog [options] -c datasrc_config zonename zonefile'
|
||||
parser = OptionParser(usage=usage_txt)
|
||||
set_cmd_options(parser)
|
||||
(options, args) = parser.parse_args(args=self.__command_args)
|
||||
|
||||
# Configure logging policy as early as possible
|
||||
if options.debug_level is not None:
|
||||
self._log_severity = 'DEBUG'
|
||||
# optparse performs type check
|
||||
self._log_debuglevel = int(options.debug_level)
|
||||
if self._log_debuglevel < 0:
|
||||
raise BadArgument(
|
||||
'Invalid debug level (must be non negative): %d' %
|
||||
self._log_debuglevel)
|
||||
self._config_log()
|
||||
|
||||
self._datasrc_type = options.datasrc_type
|
||||
self._datasrc_config = options.conf
|
||||
if options.conf is None:
|
||||
self._datasrc_config = self._get_datasrc_config(self._datasrc_type)
|
||||
try:
|
||||
self._zone_class = RRClass(options.zone_class)
|
||||
except isc.dns.InvalidRRClass as ex:
|
||||
raise BadArgument('Invalid zone class: ' + str(ex))
|
||||
if self._zone_class != RRClass.IN():
|
||||
raise BadArgument("RR class is not supported: " +
|
||||
str(self._zone_class))
|
||||
|
||||
self._report_interval = int(options.report_interval)
|
||||
if self._report_interval < 0:
|
||||
raise BadArgument(
|
||||
'Invalid report interval (must be non negative): %d' %
|
||||
self._report_interval)
|
||||
|
||||
if len(args) != 2:
|
||||
raise BadArgument('Unexpected number of arguments: %d (must be 2)'
|
||||
% (len(args)))
|
||||
try:
|
||||
self._zone_name = Name(args[0])
|
||||
except Exception as ex: # too broad, but there's no better granularity
|
||||
raise BadArgument("Invalid zone name '" + args[0] + "': " +
|
||||
str(ex))
|
||||
self._zone_file = args[1]
|
||||
|
||||
def _get_datasrc_config(self, datasrc_type):
|
||||
'''Return the default data source configuration of the given type.
|
||||
|
||||
Right now, it only supports SQLite3, and hardcodes the syntax
|
||||
of the default configuration. It's a kind of workaround to balance
|
||||
convenience of users and minimizing hardcoding of data source
|
||||
specific logic in the entire tool. In future this should be
|
||||
more sophisticated.
|
||||
|
||||
This is essentially a private helper method for _parse_arg(),
|
||||
but defined as "protected" so tests can use it directly.
|
||||
|
||||
'''
|
||||
if datasrc_type != 'sqlite3':
|
||||
raise BadArgument('default config is not available for ' +
|
||||
datasrc_type)
|
||||
|
||||
default_db_file = bind10_config.DATA_PATH + '/zone.sqlite3'
|
||||
logger.info(LOADZONE_SQLITE3_USING_DEFAULT_CONFIG, default_db_file)
|
||||
return '{"database_file": "' + default_db_file + '"}'
|
||||
|
||||
def __cancel_create(self):
|
||||
'''sqlite3-only hack: delete the zone just created on load failure.
|
||||
|
||||
This should eventually be done via generic datasrc API, but right now
|
||||
we don't have that interface. Leaving the zone in this situation
|
||||
is too bad, so we handle it with a workaround.
|
||||
|
||||
'''
|
||||
if self._datasrc_type != 'sqlite3':
|
||||
return
|
||||
|
||||
import sqlite3 # we need the module only here
|
||||
import json
|
||||
|
||||
# If we are here, the following should basically succeed; since
|
||||
# this is considered a temporary workaround we don't bother to catch
|
||||
# and recover rare failure cases.
|
||||
dbfile = json.loads(self._datasrc_config)['database_file']
|
||||
with sqlite3.connect(dbfile) as conn:
|
||||
cur = conn.cursor()
|
||||
cur.execute("DELETE FROM zones WHERE name = ?",
|
||||
[self._zone_name.to_text()])
|
||||
|
||||
def _report_progress(self, loaded_rrs):
|
||||
'''Dump the current progress report to stdout.
|
||||
|
||||
This is essentially private, but defined as "protected" for tests.
|
||||
|
||||
'''
|
||||
elapsed = time.time() - self.__start_time
|
||||
sys.stdout.write("\r" + (80 * " "))
|
||||
sys.stdout.write("\r%d RRs loaded in %.2f seconds" %
|
||||
(loaded_rrs, elapsed))
|
||||
|
||||
def _do_load(self):
|
||||
'''Main part of the load logic.
|
||||
|
||||
This is essentially private, but defined as "protected" for tests.
|
||||
|
||||
'''
|
||||
created = False
|
||||
try:
|
||||
datasrc_client = DataSourceClient(self._datasrc_type,
|
||||
self._datasrc_config)
|
||||
created = datasrc_client.create_zone(self._zone_name)
|
||||
if created:
|
||||
logger.info(LOADZONE_ZONE_CREATED, self._zone_name,
|
||||
self._zone_class)
|
||||
else:
|
||||
logger.info(LOADZONE_ZONE_UPDATING, self._zone_name,
|
||||
self._zone_class)
|
||||
loader = ZoneLoader(datasrc_client, self._zone_name,
|
||||
self._zone_file)
|
||||
self.__start_time = time.time()
|
||||
if self._report_interval > 0:
|
||||
limit = self._report_interval
|
||||
else:
|
||||
# Even if progress report is suppressed, we still load
|
||||
# incrementally so we won't delay catching signals too long.
|
||||
limit = LOAD_INTERVAL_DEFAULT
|
||||
while (not self.__interrupted and
|
||||
not loader.load_incremental(limit)):
|
||||
self.__loaded_rrs += self._report_interval
|
||||
if self._report_interval > 0:
|
||||
self._report_progress(self.__loaded_rrs)
|
||||
if self.__interrupted:
|
||||
raise LoadFailure('loading interrupted by signal')
|
||||
|
||||
# On successful completion, add final '\n' to the progress
|
||||
# report output (on failure don't bother to make it prettier).
|
||||
if (self._report_interval > 0 and
|
||||
self.__loaded_rrs >= self._report_interval):
|
||||
sys.stdout.write('\n')
|
||||
except Exception as ex:
|
||||
# release any remaining lock held in the client/loader
|
||||
loader, datasrc_client = None, None
|
||||
if created:
|
||||
self.__cancel_create()
|
||||
logger.error(LOADZONE_CANCEL_CREATE_ZONE, self._zone_name,
|
||||
self._zone_class)
|
||||
raise LoadFailure(str(ex))
|
||||
|
||||
def _post_load_checks(self):
|
||||
'''Perform minimal validity checks on the loaded zone.
|
||||
|
||||
We do this ourselves because the underlying library currently
|
||||
doesn't do any checks. Once the library supports post-load validation
|
||||
this check should be removed.
|
||||
|
||||
'''
|
||||
datasrc_client = DataSourceClient(self._datasrc_type,
|
||||
self._datasrc_config)
|
||||
_, finder = datasrc_client.find_zone(self._zone_name) # should succeed
|
||||
result = finder.find(self._zone_name, RRType.SOA())[0]
|
||||
if result is not finder.SUCCESS:
|
||||
self._post_load_warning('zone has no SOA')
|
||||
result = finder.find(self._zone_name, RRType.NS())[0]
|
||||
if result is not finder.SUCCESS:
|
||||
self._post_load_warning('zone has no NS')
|
||||
|
||||
def _post_load_warning(self, msg):
|
||||
logger.warn(LOADZONE_POSTLOAD_ISSUE, self._zone_name,
|
||||
self._zone_class, msg)
|
||||
|
||||
def _set_signal_handlers(self):
|
||||
signal.signal(signal.SIGINT, self._interrupt_handler)
|
||||
signal.signal(signal.SIGTERM, self._interrupt_handler)
|
||||
|
||||
def _interrupt_handler(self, signal, frame):
|
||||
self.__interrupted = True
|
||||
|
||||
def run(self):
|
||||
'''Top-level method, simply calling other helpers'''
|
||||
|
||||
try:
|
||||
self._set_signal_handlers()
|
||||
self._parse_args()
|
||||
self._do_load()
|
||||
total_elapsed_txt = "%.2f" % (time.time() - self.__start_time)
|
||||
logger.info(LOADZONE_DONE, self.__loaded_rrs, self._zone_name,
|
||||
self._zone_class, total_elapsed_txt)
|
||||
self._post_load_checks()
|
||||
return 0
|
||||
except BadArgument as ex:
|
||||
logger.error(LOADZONE_ARGUMENT_ERROR, ex)
|
||||
except LoadFailure as ex:
|
||||
logger.error(LOADZONE_LOAD_ERROR, self._zone_name,
|
||||
self._zone_class, ex)
|
||||
except Exception as ex:
|
||||
logger.error(LOADZONE_UNEXPECTED_FAILURE, ex)
|
||||
return 1
|
||||
|
||||
if '__main__' == __name__:
|
||||
runner = LoadZoneRunner(sys.argv[1:])
|
||||
ret = runner.run()
|
||||
sys.exit(ret)
|
||||
|
||||
## Local Variables:
|
||||
## mode: python
|
||||
## End:
|
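A minimal usage sketch for the LoadZoneRunner class defined above, e.g. as a test might drive it directly; the importable module name and the argument values are assumptions for illustration:

    from loadzone import LoadZoneRunner   # assumes the script is importable as 'loadzone'

    args = ['-c', '{"database_file": "/tmp/zone.sqlite3"}',
            'example.org', 'example.org.zone']
    runner = LoadZoneRunner(args)
    exit_code = runner.run()               # 0 on success, 1 on any failure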
81
src/bin/loadzone/loadzone_messages.mes
Normal file
@@ -0,0 +1,81 @@
|
||||
# Copyright (C) 2012 Internet Systems Consortium, Inc. ("ISC")
|
||||
#
|
||||
# Permission to use, copy, modify, and/or distribute this software for any
|
||||
# purpose with or without fee is hereby granted, provided that the above
|
||||
# copyright notice and this permission notice appear in all copies.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH
|
||||
# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
|
||||
# AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT,
|
||||
# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
|
||||
# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
|
||||
# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
|
||||
# PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
# When you add a message to this file, it is a good idea to run
|
||||
# <topsrcdir>/tools/reorder_message_file.py to make sure the
|
||||
# messages are in the correct order.
|
||||
|
||||
% LOADZONE_ARGUMENT_ERROR Error in command line arguments: %1
A semantic error in the command line arguments or options to b10-loadzone
was detected. b10-loadzone does effectively nothing and immediately
terminates.

% LOADZONE_CANCEL_CREATE_ZONE Creation of new zone %1/%2 was canceled
b10-loadzone has created a new zone in the data source (see
LOADZONE_ZONE_CREATED), but the loading operation has subsequently
failed. The newly created zone has been removed from the data source,
so that the data source will go back to the original state.

% LOADZONE_DONE Loaded (at least) %1 RRs into zone %2/%3 in %4 seconds
b10-loadzone has successfully loaded the specified zone. If there was
an old version of the zone in the data source, it is now deleted.
It also prints (a lower bound of) the number of RRs that have been loaded
and the time spent for the loading. Due to a limitation of the
current implementation of the underlying library, however, it cannot show the
exact number of loaded RRs; they are counted for every N-th RR where N
is the value of the -i command line option. So, for smaller zones that
don't even contain N RRs, the reported value will be 0. This will be
improved in a future version.

% LOADZONE_LOAD_ERROR Failed to load zone %1/%2: %3
Loading a zone by b10-loadzone failed for some reason in the middle of
the loading. This is most likely due to an error in the specified
arguments to b10-loadzone (such as a non-existent zone file) or an error
in the zone file. When this happens, the RRs loaded so far are
effectively deleted from the zone, and the old version (if one exists)
will still remain valid for operations.

% LOADZONE_POSTLOAD_ISSUE New version of zone %1/%2 has an issue: %3
b10-loadzone detected a problem after a successful load of the zone:
either or both of the SOA and NS records are missing at the zone origin.
In the current implementation the load will not be canceled for such
problems. The operator will need to fix the issues and reload the
zone; otherwise applications (such as b10-auth) that use this data
source will not work as expected.

% LOADZONE_SQLITE3_USING_DEFAULT_CONFIG Using default configuration with SQLite3 DB file %1
The SQLite3 data source is specified as the data source type without a
data source configuration. b10-loadzone uses the default
configuration with the default DB file for the BIND 10 system.

% LOADZONE_UNEXPECTED_FAILURE Unexpected exception: %1
b10-loadzone encountered an unexpected failure and terminated itself.
This is generally a bug in b10-loadzone itself or the underlying
data source library, so it's advisable to submit a bug report if
this message is logged. The incomplete loading attempt should
have been cleanly canceled in this case, too.

% LOADZONE_ZONE_CREATED Zone %1/%2 does not exist in the data source, newly created
The zone specified to b10-loadzone does not exist in the
specified data source. b10-loadzone has created a new empty zone
in the data source.

% LOADZONE_ZONE_UPDATING Started updating zone %1/%2 with removing old data (this can take a while)
b10-loadzone started loading a new version of the zone as specified,
beginning with removing the current contents of the zone (in a
transaction, so the removal won't take effect until and unless the entire
load is completed successfully). If the old version of the zone is large,
this can take time, such as a few minutes or more, without any visible
feedback. This is not a problem as long as the b10-loadzone process
is still working at a moderate load.
|
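Each % entry above is compiled into a Python message-ID constant (see the message compiler rule added to Makefile.am earlier in this commit). A minimal sketch of how such a constant is used with the logger, following the calls in loadzone.py.in; the error text is a placeholder:

    import isc.log
    from isc.log_messages.loadzone_messages import LOADZONE_ARGUMENT_ERROR

    isc.log.init("b10-loadzone")
    logger = isc.log.Logger("loadzone")
    logger.error(LOADZONE_ARGUMENT_ERROR, "zone name is missing")  # fills %1 in the message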
@@ -18,7 +18,7 @@
|
||||
PYTHON_EXEC=${PYTHON_EXEC:-@PYTHON@}
|
||||
export PYTHON_EXEC
|
||||
|
||||
PYTHONPATH=@abs_top_builddir@/src/lib/python/isc/log_messages:@abs_top_builddir@/src/lib/python
|
||||
PYTHONPATH=@abs_top_builddir@/src/lib/python/isc/log_messages:@abs_top_builddir@/src/lib/python:@abs_top_srcdir@/src/lib/python:@abs_top_builddir@/src/lib/dns/python/.libs
|
||||
export PYTHONPATH
|
||||
|
||||
# If necessary (rare cases), explicitly specify paths to dynamic libraries
|
||||
@@ -32,5 +32,13 @@ fi
|
||||
BIND10_MSGQ_SOCKET_FILE=@abs_top_builddir@/msgq_socket
|
||||
export BIND10_MSGQ_SOCKET_FILE
|
||||
|
||||
# For bind10_config
|
||||
B10_FROM_SOURCE=@abs_top_srcdir@
|
||||
export B10_FROM_SOURCE
|
||||
|
||||
# For data source loadable modules
|
||||
B10_FROM_BUILD=@abs_top_builddir@
|
||||
export B10_FROM_BUILD
|
||||
|
||||
LOADZONE_PATH=@abs_top_builddir@/src/bin/loadzone
|
||||
exec ${LOADZONE_PATH}/b10-loadzone "$@"
|
||||
|
37
src/bin/loadzone/tests/Makefile.am
Normal file
@@ -0,0 +1,37 @@
|
||||
SUBDIRS = . correct
|
||||
|
||||
PYCOVERAGE_RUN=@PYCOVERAGE_RUN@
|
||||
PYTESTS = loadzone_test.py
|
||||
|
||||
EXTRA_DIST = $(PYTESTS)
|
||||
EXTRA_DIST += testdata/example.org.zone
|
||||
EXTRA_DIST += testdata/broken-example.org.zone
|
||||
EXTRA_DIST += testdata/example-nosoa.org.zone
|
||||
EXTRA_DIST += testdata/example-nons.org.zone
|
||||
|
||||
# If necessary (rare cases), explicitly specify paths to dynamic libraries
|
||||
# required by loadable python modules.
|
||||
LIBRARY_PATH_PLACEHOLDER =
|
||||
if SET_ENV_LIBRARY_PATH
|
||||
LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/util/io/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$(abs_top_builddir)/src/lib/acl/.libs:$$$(ENV_LIBRARY_PATH)
|
||||
endif
|
||||
|
||||
# test using command-line arguments, so use check-local target instead of TESTS
|
||||
# We need to define B10_FROM_BUILD for datasrc loadable modules
|
||||
check-local:
|
||||
if ENABLE_PYTHON_COVERAGE
|
||||
touch $(abs_top_srcdir)/.coverage
|
||||
rm -f .coverage
|
||||
${LN_S} $(abs_top_srcdir)/.coverage .coverage
|
||||
endif
|
||||
for pytest in $(PYTESTS) ; do \
|
||||
echo Running test: $$pytest ; \
|
||||
B10_FROM_SOURCE=$(abs_top_srcdir) \
|
||||
B10_FROM_BUILD=$(abs_top_builddir) \
|
||||
$(LIBRARY_PATH_PLACEHOLDER) \
|
||||
TESTDATA_PATH=$(abs_top_srcdir)/src/lib/testutils/testdata \
|
||||
LOCAL_TESTDATA_PATH=$(srcdir)/testdata \
|
||||
TESTDATA_WRITE_PATH=$(builddir) \
|
||||
PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/bin/loadzone:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/util/io/.libs \
|
||||
$(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \
|
||||
done
|
@@ -26,5 +26,8 @@ endif
|
||||
# TODO: maybe use TESTS?
|
||||
# test using command-line arguments, so use check-local target instead of TESTS
|
||||
check-local:
|
||||
echo Running test: correct_test.sh
|
||||
echo Running test: correct_test.sh
|
||||
B10_FROM_SOURCE=$(abs_top_srcdir) \
|
||||
B10_FROM_BUILD=$(abs_top_builddir) \
|
||||
PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/bin/loadzone:$(abs_top_builddir)/src/lib/dns/python/.libs \
|
||||
$(LIBRARY_PATH_PLACEHOLDER) $(SHELL) $(abs_builddir)/correct_test.sh
|
||||
|
@@ -18,7 +18,7 @@
|
||||
PYTHON_EXEC=${PYTHON_EXEC:-@PYTHON@}
|
||||
export PYTHON_EXEC
|
||||
|
||||
PYTHONPATH=@abs_top_builddir@/src/lib/python/isc/log_messages:@abs_top_srcdir@/src/lib/python:@abs_top_builddir@/src/lib/python
|
||||
PYTHONPATH=@abs_top_builddir@/src/lib/python/isc/log_messages:@abs_top_srcdir@/src/lib/python:@abs_top_builddir@/src/lib/python:$PYTHONPATH
|
||||
export PYTHONPATH
|
||||
|
||||
LOADZONE_PATH=@abs_top_builddir@/src/bin/loadzone
|
||||
@@ -28,28 +28,28 @@ TEST_OUTPUT_PATH=@abs_top_builddir@/src/bin/loadzone//tests/correct
|
||||
status=0
|
||||
echo "Loadzone include. from include.db file"
|
||||
cd ${TEST_FILE_PATH}
|
||||
${LOADZONE_PATH}/b10-loadzone -d ${TEST_OUTPUT_PATH}/zone.sqlite3 include.db >> /dev/null
|
||||
${LOADZONE_PATH}/b10-loadzone -c '{"database_file": "'${TEST_OUTPUT_PATH}/zone.sqlite3'"}' include. include.db >> /dev/null
|
||||
|
||||
echo "loadzone ttl1. from ttl1.db file"
|
||||
${LOADZONE_PATH}/b10-loadzone -d ${TEST_OUTPUT_PATH}/zone.sqlite3 ttl1.db >> /dev/null
|
||||
${LOADZONE_PATH}/b10-loadzone -c '{"database_file": "'${TEST_OUTPUT_PATH}/zone.sqlite3'"}' ttl1. ttl1.db >> /dev/null
|
||||
|
||||
echo "loadzone ttl2. from ttl2.db file"
|
||||
${LOADZONE_PATH}/b10-loadzone -d ${TEST_OUTPUT_PATH}/zone.sqlite3 ttl2.db >> /dev/null
|
||||
${LOADZONE_PATH}/b10-loadzone -c '{"database_file": "'${TEST_OUTPUT_PATH}/zone.sqlite3'"}' ttl2. ttl2.db >> /dev/null
|
||||
|
||||
echo "loadzone mix1. from mix1.db"
|
||||
${LOADZONE_PATH}/b10-loadzone -d ${TEST_OUTPUT_PATH}/zone.sqlite3 mix1.db >> /dev/null
|
||||
${LOADZONE_PATH}/b10-loadzone -c '{"database_file": "'${TEST_OUTPUT_PATH}/zone.sqlite3'"}' mix1. mix1.db >> /dev/null
|
||||
|
||||
echo "loadzone mix2. from mix2.db"
|
||||
${LOADZONE_PATH}/b10-loadzone -d ${TEST_OUTPUT_PATH}/zone.sqlite3 mix2.db >> /dev/null
|
||||
${LOADZONE_PATH}/b10-loadzone -c '{"database_file": "'${TEST_OUTPUT_PATH}/zone.sqlite3'"}' mix2. mix2.db >> /dev/null
|
||||
|
||||
echo "loadzone ttlext. from ttlext.db"
|
||||
${LOADZONE_PATH}/b10-loadzone -d ${TEST_OUTPUT_PATH}/zone.sqlite3 ttlext.db >> /dev/null
|
||||
${LOADZONE_PATH}/b10-loadzone -c '{"database_file": "'${TEST_OUTPUT_PATH}/zone.sqlite3'"}' ttlext. ttlext.db >> /dev/null
|
||||
|
||||
echo "loadzone example.com. from example.db"
|
||||
${LOADZONE_PATH}/b10-loadzone -d ${TEST_OUTPUT_PATH}/zone.sqlite3 example.db >> /dev/null
|
||||
${LOADZONE_PATH}/b10-loadzone -c '{"database_file": "'${TEST_OUTPUT_PATH}/zone.sqlite3'"}' example.com. example.db >> /dev/null
|
||||
|
||||
echo "loadzone comment.example.com. from comment.db"
|
||||
${LOADZONE_PATH}/b10-loadzone -d ${TEST_OUTPUT_PATH}/zone.sqlite3 comment.db >> /dev/null
|
||||
${LOADZONE_PATH}/b10-loadzone -c '{"database_file": "'${TEST_OUTPUT_PATH}/zone.sqlite3'"}' comment.example.com. comment.db >> /dev/null
|
||||
|
||||
echo "I:test master file \$INCLUDE semantics"
|
||||
echo "I:test master file BIND 8 compatibility TTL and \$TTL semantics"
|
||||
|
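The shell invocations above map one-to-one onto the new command line. A minimal Python sketch of the same call via subprocess, with placeholder paths standing in for the autoconf-substituted ones:

    import subprocess

    loadzone = "/path/to/builddir/src/bin/loadzone/b10-loadzone"         # placeholder
    config = '{"database_file": "/path/to/tests/correct/zone.sqlite3"}'  # placeholder
    subprocess.check_call([loadzone, "-c", config, "include.", "include.db"])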
@@ -2,11 +2,17 @@
$ORIGIN example.com.
$TTL 60
@ IN SOA ns1.example.com. hostmaster.example.com. (1 43200 900 1814400 7200)
IN 20 NS ns1
NS ns2
; these need #2390
; IN 20 NS ns1
; NS ns2
IN 20 NS ns1.example.com.
NS ns2.example.com.
ns1 IN 30 A 192.168.1.102
70 NS ns3
IN NS ns4
; these need #2390
; 70 NS ns3
; IN NS ns4
70 NS ns3.example.com.
IN NS ns4.example.com.
10 IN MX 10 mail.example.com.
ns2 80 A 1.1.1.1
ns3 IN A 2.2.2.2
@@ -1,13 +1,17 @@
|
||||
$ORIGIN include. ; initialize origin
|
||||
$TTL 300
|
||||
@ IN SOA ns hostmaster (
|
||||
; this needs #2500
|
||||
;@ IN SOA ns hostmaster (
|
||||
@ IN SOA ns.include. hostmaster.include. (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns
|
||||
; this needs #2390
|
||||
; NS ns
|
||||
NS ns.include.
|
||||
|
||||
ns A 127.0.0.1
|
||||
|
||||
|
@@ -80,6 +80,6 @@ ns5.example.com. 90 IN A 4.4.4.4
comment.example.com. 60 IN SOA ns1.example.com. hostmaster.example.com. 1 43200 900 1814400 7200
comment.example.com. 60 IN NS ns1.example.com.
comment.example.com. 60 IN TXT "Simple text"
comment.example.com. 60 IN TXT "; No comment"
comment.example.com. 60 IN TXT "\; No comment"
comment.example.com. 60 IN TXT "Also no comment here"
comment.example.com. 60 IN TXT "A combination ; see?"
comment.example.com. 60 IN TXT "A combination \; see?"
@@ -1,12 +1,16 @@
|
||||
$ORIGIN mix1.
|
||||
@ IN SOA ns hostmaster (
|
||||
; this needs #2500
|
||||
;@ IN SOA ns hostmaster (
|
||||
@ IN SOA ns.mix1. hostmaster.mix1. (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3
|
||||
)
|
||||
NS ns
|
||||
; this needs #2390
|
||||
; NS ns
|
||||
NS ns.mix1.
|
||||
ns A 10.53.0.1
|
||||
a TXT "soa minttl 3"
|
||||
b 2 TXT "explicit ttl 2"
|
||||
|
@@ -1,12 +1,16 @@
|
||||
$ORIGIN mix2.
|
||||
@ 1 IN SOA ns hostmaster (
|
||||
; this needs #2500
|
||||
;@ 1 IN SOA ns hostmaster (
|
||||
@ 1 IN SOA ns.mix2. hostmaster.mix2. (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3
|
||||
)
|
||||
NS ns
|
||||
; this needs #2390
|
||||
; NS ns
|
||||
NS ns.mix2.
|
||||
ns A 10.53.0.1
|
||||
a TXT "inherited ttl 1"
|
||||
$INCLUDE mix2sub1.txt
|
||||
|
@@ -1,3 +1,3 @@
f TXT "default ttl 3"
f TXT "default ttl 3"
$TTL 5
g TXT "default ttl 5"
g TXT "default ttl 5"
@@ -1,12 +1,16 @@
|
||||
$ORIGIN ttl1.
|
||||
@ IN SOA ns hostmaster (
|
||||
; this needs #2500
|
||||
;@ IN SOA ns hostmaster (
|
||||
@ IN SOA ns.ttl1. hostmaster.ttl1. (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3
|
||||
)
|
||||
NS ns
|
||||
; this needs #2390
|
||||
; NS ns
|
||||
NS ns.ttl1.
|
||||
ns A 10.53.0.1
|
||||
a TXT "soa minttl 3"
|
||||
b 2 TXT "explicit ttl 2"
|
||||
|
@@ -1,12 +1,16 @@
|
||||
$ORIGIN ttl2.
|
||||
@ 1 IN SOA ns hostmaster (
|
||||
; this needs #2500
|
||||
;@ 1 IN SOA ns hostmaster (
|
||||
@ 1 IN SOA ns.ttl2. hostmaster.ttl2 (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3
|
||||
)
|
||||
NS ns
|
||||
; this needs #2390
|
||||
; NS ns
|
||||
NS ns.ttl2.
|
||||
ns A 10.53.0.1
|
||||
a TXT "inherited ttl 1"
|
||||
b 2 TXT "explicit ttl 2"
|
||||
|
@@ -1,12 +1,16 @@
|
||||
$ORIGIN ttlext.
|
||||
@ IN SOA ns hostmaster (
|
||||
; this needs #2500
|
||||
;@ IN SOA ns hostmaster (
|
||||
@ IN SOA ns.ttlext. hostmaster.ttlext. (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3
|
||||
)
|
||||
NS ns
|
||||
; this needs #2390
|
||||
; NS ns
|
||||
NS ns.ttlext.
|
||||
ns A 10.53.0.1
|
||||
a TXT "soa minttl 3"
|
||||
b 2S TXT "explicit ttl 2"
|
||||
|
1
src/bin/loadzone/tests/error/.gitignore
vendored
@@ -1 +0,0 @@
|
||||
/error_test.sh
|
@@ -1,28 +0,0 @@
|
||||
EXTRA_DIST = error.known
|
||||
EXTRA_DIST += formerr1.db
|
||||
EXTRA_DIST += formerr2.db
|
||||
EXTRA_DIST += formerr3.db
|
||||
EXTRA_DIST += formerr4.db
|
||||
EXTRA_DIST += formerr5.db
|
||||
EXTRA_DIST += include.txt
|
||||
EXTRA_DIST += keyerror1.db
|
||||
EXTRA_DIST += keyerror2.db
|
||||
EXTRA_DIST += keyerror3.db
|
||||
#EXTRA_DIST += nofilenane.db
|
||||
EXTRA_DIST += originerr1.db
|
||||
EXTRA_DIST += originerr2.db
|
||||
|
||||
noinst_SCRIPTS = error_test.sh
|
||||
|
||||
# If necessary (rare cases), explicitly specify paths to dynamic libraries
|
||||
# required by loadable python modules.
|
||||
LIBRARY_PATH_PLACEHOLDER =
|
||||
if SET_ENV_LIBRARY_PATH
|
||||
LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/util/io/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH)
|
||||
endif
|
||||
|
||||
# TODO: use TESTS ?
|
||||
# test using command-line arguments, so use check-local target instead of TESTS
|
||||
check-local:
|
||||
echo Running test: error_test.sh
|
||||
$(LIBRARY_PATH_PLACEHOLDER) $(SHELL) $(abs_builddir)/error_test.sh
|
@@ -1,11 +0,0 @@
|
||||
Error reading zone file: Cannot parse RR, No $ORIGIN: @ IN SOA ns hostmaster 1 3600 1800 1814400 3600
|
||||
Error reading zone file: $ORIGIN is not absolute in record: $ORIGIN com
|
||||
Error reading zone file: Cannot parse RR: $TL 300
|
||||
Error reading zone file: Cannot parse RR: $OIGIN com.
|
||||
Error loading database: Error while loading com.: Cannot parse RR: $INLUDE file.txt
|
||||
Error loading database: Error while loading com.: Invalid $include format
|
||||
Error loading database: Error while loading com.: Cannot parse RR, No $ORIGIN: include.txt sub
|
||||
Error reading zone file: Invalid TTL: ""
|
||||
Error reading zone file: Invalid TTL: "M"
|
||||
Error loading database: Error while loading com.: Cannot parse RR: b "no type error!"
|
||||
Error reading zone file: Could not open bogusfile
|
@@ -1,82 +0,0 @@
|
||||
#! /bin/sh
|
||||
|
||||
# Copyright (C) 2010 Internet Systems Consortium.
|
||||
#
|
||||
# Permission to use, copy, modify, and distribute this software for any
|
||||
# purpose with or without fee is hereby granted, provided that the above
|
||||
# copyright notice and this permission notice appear in all copies.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
|
||||
# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
|
||||
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
|
||||
# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
|
||||
# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
|
||||
# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
|
||||
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
|
||||
# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
PYTHON_EXEC=${PYTHON_EXEC:-@PYTHON@}
|
||||
export PYTHON_EXEC
|
||||
|
||||
PYTHONPATH=@abs_top_builddir@/src/lib/python/isc/log_messages:@abs_top_srcdir@/src/lib/python:@abs_top_builddir@/src/lib/python
|
||||
export PYTHONPATH
|
||||
|
||||
LOADZONE_PATH=@abs_top_builddir@/src/bin/loadzone
|
||||
TEST_OUTPUT_PATH=@abs_top_builddir@/src/bin/loadzone/tests/error
|
||||
TEST_FILE_PATH=@abs_top_srcdir@/src/bin/loadzone/tests/error
|
||||
|
||||
cd ${LOADZONE_PATH}/tests/error
|
||||
|
||||
export LOADZONE_PATH
|
||||
status=0
|
||||
|
||||
echo "PYTHON PATH: $PYTHONPATH"
|
||||
|
||||
echo "Test no \$ORIGIN error in zone file"
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 ${TEST_FILE_PATH}/originerr1.db 1> /dev/null 2> error.out
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 ${TEST_FILE_PATH}/originerr2.db 1> /dev/null 2>> error.out
|
||||
|
||||
echo "Test: key word TTL spell error"
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 ${TEST_FILE_PATH}/keyerror1.db 1> /dev/null 2>> error.out
|
||||
|
||||
echo "Test: key word ORIGIN spell error"
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 ${TEST_FILE_PATH}/keyerror2.db 1> /dev/null 2>> error.out
|
||||
|
||||
echo "Test: key INCLUDE spell error"
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 ${TEST_FILE_PATH}/keyerror3.db 1> /dev/null 2>> error.out
|
||||
|
||||
echo "Test: include formal error, miss filename"
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 ${TEST_FILE_PATH}/formerr1.db 1> /dev/null 2>>error.out
|
||||
|
||||
echo "Test: include form error, domain is not absolute"
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 ${TEST_FILE_PATH}/formerr2.db 1> /dev/null 2>> error.out
|
||||
|
||||
echo "Test: TTL form error, no ttl value"
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 ${TEST_FILE_PATH}/formerr3.db 1> /dev/null 2>> error.out
|
||||
|
||||
echo "Test: TTL form error, ttl value error"
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 ${TEST_FILE_PATH}/formerr4.db 1> /dev/null 2>> error.out
|
||||
|
||||
echo "Test: rr form error, no type"
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 ${TEST_FILE_PATH}/formerr5.db 1> /dev/null 2>> error.out
|
||||
|
||||
echo "Test: zone file is bogus"
|
||||
# since bogusfile doesn't exist anyway, we *don't* specify the directory
|
||||
${LOADZONE_PATH}/b10-loadzone -d zone.sqlite3 bogusfile 1> /dev/null 2>> error.out
|
||||
|
||||
diff error.out ${TEST_FILE_PATH}/error.known || status=1
|
||||
|
||||
echo "Clean tmp file."
|
||||
rm -f error.out
|
||||
rm -f zone.sqlite3
|
||||
|
||||
echo "I:exit status:$status"
|
||||
echo "-----------------------------------------------------------------------------"
|
||||
echo "Ran 11 test files"
|
||||
echo ""
|
||||
if [ "$status" -eq 1 ];then
|
||||
echo "ERROR"
|
||||
else
|
||||
echo "OK"
|
||||
fi
|
||||
exit $status
|
@@ -1,13 +0,0 @@
|
||||
$TTL 300
|
||||
$ORIGIN com.
|
||||
@ IN SOA ns hostmaster (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns
|
||||
ns A 127.0.0.1
|
||||
$INCLUDE
|
||||
a A 10.0.0.1
|
@@ -1,12 +0,0 @@
|
||||
$TTL 300
|
||||
com. IN SOA ns.com. hostmaster.com. (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns.example.com.
|
||||
ns.com. A 127.0.0.1
|
||||
$INCLUDE include.txt sub
|
||||
a.com. A 10.0.0.1
|
@@ -1,12 +0,0 @@
|
||||
$TTL
|
||||
$ORIGIN com.
|
||||
@ IN SOA ns hostmaster (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns
|
||||
ns A 127.0.0.1
|
||||
a A 10.0.0.1
|
@@ -1,12 +0,0 @@
|
||||
$TTL M
|
||||
$ORIGIN com.
|
||||
@ IN SOA ns hostmaster (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns
|
||||
ns A 127.0.0.1
|
||||
a A 10.0.0.1
|
@@ -1,13 +0,0 @@
|
||||
$TTL 2M
|
||||
$ORIGIN com.
|
||||
@ IN SOA ns hostmaster (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns
|
||||
ns A 127.0.0.1 ; ip value
|
||||
b "no type error!"
|
||||
a A 10.0.0.1
|
@@ -1 +0,0 @@
|
||||
a 300 A 127.0.0.1
|
@@ -1,12 +0,0 @@
|
||||
$TL 300
|
||||
@ORIGIN com.
|
||||
@ IN SOA ns hostmaster (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns
|
||||
ns A 127.0.0.1
|
||||
a A 10.0.0.1
|
@@ -1,12 +0,0 @@
|
||||
$TTL 300
|
||||
$OIGIN com.
|
||||
@ IN SOA ns hostmaster (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns
|
||||
ns A 127.0.0.1
|
||||
a A 10.0.0.1
|
@@ -1,13 +0,0 @@
|
||||
$TTL 300
|
||||
$ORIGIN com.
|
||||
@ IN SOA ns hostmaster (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns
|
||||
ns A 127.0.0.1
|
||||
$INLUDE file.txt
|
||||
a A 10.0.0.1
|
@@ -1,11 +0,0 @@
|
||||
$TTL 300
|
||||
@ IN SOA ns hostmaster (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns
|
||||
ns A 127.0.0.1
|
||||
a A 10.0.0.1
|
@@ -1,12 +0,0 @@
|
||||
$TTL 300
|
||||
$ORIGIN com
|
||||
@ IN SOA ns hostmaster (
|
||||
1 ; serial
|
||||
3600
|
||||
1800
|
||||
1814400
|
||||
3600
|
||||
)
|
||||
NS ns
|
||||
ns A 127.0.0.1
|
||||
a A 10.0.0.1
|
342
src/bin/loadzone/tests/loadzone_test.py
Executable file
@@ -0,0 +1,342 @@
|
||||
# Copyright (C) 2012 Internet Systems Consortium.
|
||||
#
|
||||
# Permission to use, copy, modify, and distribute this software for any
|
||||
# purpose with or without fee is hereby granted, provided that the above
|
||||
# copyright notice and this permission notice appear in all copies.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
|
||||
# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
|
||||
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
|
||||
# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
|
||||
# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
|
||||
# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
|
||||
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
|
||||
# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
'''Tests for the loadzone module'''
|
||||
|
||||
import unittest
|
||||
from loadzone import *
|
||||
from isc.dns import *
|
||||
from isc.datasrc import *
|
||||
import isc.log
|
||||
import bind10_config
|
||||
import os
|
||||
import shutil
|
||||
|
||||
# Some common test parameters
|
||||
TESTDATA_PATH = os.environ['TESTDATA_PATH'] + os.sep
|
||||
READ_ZONE_DB_FILE = TESTDATA_PATH + "rwtest.sqlite3" # original, to be copied
|
||||
LOCAL_TESTDATA_PATH = os.environ['LOCAL_TESTDATA_PATH'] + os.sep
|
||||
READ_ZONE_DB_FILE = TESTDATA_PATH + "rwtest.sqlite3" # original, to be copied
|
||||
NEW_ZONE_TXT_FILE = LOCAL_TESTDATA_PATH + "example.org.zone"
|
||||
ALT_NEW_ZONE_TXT_FILE = TESTDATA_PATH + "example.com.zone"
|
||||
TESTDATA_WRITE_PATH = os.environ['TESTDATA_WRITE_PATH'] + os.sep
|
||||
WRITE_ZONE_DB_FILE = TESTDATA_WRITE_PATH + "rwtest.sqlite3.copied"
|
||||
TEST_ZONE_NAME = Name('example.org')
|
||||
DATASRC_CONFIG = '{"database_file": "' + WRITE_ZONE_DB_FILE + '"}'
|
||||
|
||||
# before/after SOAs: different in mname and serial
|
||||
ORIG_SOA_TXT = 'example.org. 3600 IN SOA ns1.example.org. ' +\
|
||||
'admin.example.org. 1234 3600 1800 2419200 7200\n'
|
||||
NEW_SOA_TXT = 'example.org. 3600 IN SOA ns.example.org. ' +\
|
||||
'admin.example.org. 1235 3600 1800 2419200 7200\n'
|
||||
# This is the brandnew SOA for a newly created zone
|
||||
ALT_NEW_SOA_TXT = 'example.com. 3600 IN SOA ns.example.com. ' +\
|
||||
'admin.example.com. 1234 3600 1800 2419200 7200\n'
|
||||
|
||||
class TestLoadZoneRunner(unittest.TestCase):
|
||||
def setUp(self):
|
||||
shutil.copyfile(READ_ZONE_DB_FILE, WRITE_ZONE_DB_FILE)
|
||||
|
||||
# default command line arguments
|
||||
self.__args = ['-c', DATASRC_CONFIG, 'example.org', NEW_ZONE_TXT_FILE]
|
||||
self.__runner = LoadZoneRunner(self.__args)
|
||||
|
||||
def tearDown(self):
|
||||
# Delete the used DB file; if some of the tests fail
|
||||
# unexpectedly in the middle of updating the DB, a lock could stay
|
||||
# there and would affect the other tests that would otherwise succeed.
|
||||
os.unlink(WRITE_ZONE_DB_FILE)
|
||||
|
||||
def test_init(self):
|
||||
'''
|
||||
Checks initial class attributes
|
||||
'''
|
||||
self.assertIsNone(self.__runner._zone_class)
|
||||
self.assertIsNone(self.__runner._zone_name)
|
||||
self.assertIsNone(self.__runner._zone_file)
|
||||
self.assertIsNone(self.__runner._datasrc_config)
|
||||
self.assertIsNone(self.__runner._datasrc_type)
|
||||
self.assertEqual(10000, self.__runner._report_interval)
|
||||
self.assertEqual('INFO', self.__runner._log_severity)
|
||||
self.assertEqual(0, self.__runner._log_debuglevel)
|
||||
|
||||
def test_parse_args(self):
|
||||
self.__runner._parse_args()
|
||||
self.assertEqual(TEST_ZONE_NAME, self.__runner._zone_name)
|
||||
self.assertEqual(NEW_ZONE_TXT_FILE, self.__runner._zone_file)
|
||||
self.assertEqual(DATASRC_CONFIG, self.__runner._datasrc_config)
|
||||
self.assertEqual('sqlite3', self.__runner._datasrc_type) # default
|
||||
self.assertEqual(10000, self.__runner._report_interval) # default
|
||||
self.assertEqual(RRClass.IN(), self.__runner._zone_class) # default
|
||||
self.assertEqual('INFO', self.__runner._log_severity) # default
|
||||
self.assertEqual(0, self.__runner._log_debuglevel)
|
||||
|
||||
def test_set_loglevel(self):
|
||||
runner = LoadZoneRunner(['-d', '1'] + self.__args)
|
||||
runner._parse_args()
|
||||
self.assertEqual('DEBUG', runner._log_severity)
|
||||
self.assertEqual(1, runner._log_debuglevel)
|
||||
|
||||
def test_parse_bad_args(self):
|
||||
# There must be exactly 2 non-option arguments: zone name and zone file
|
||||
self.assertRaises(BadArgument, LoadZoneRunner([])._parse_args)
|
||||
self.assertRaises(BadArgument, LoadZoneRunner(['example']).
|
||||
_parse_args)
|
||||
self.assertRaises(BadArgument, LoadZoneRunner(self.__args + ['0']).
|
||||
_parse_args)
|
||||
|
||||
# Bad zone name
|
||||
args = ['example.org', 'example.zone'] # otherwise valid args
|
||||
self.assertRaises(BadArgument,
|
||||
LoadZoneRunner(['bad..name', 'example.zone'] + args).
|
||||
_parse_args)
|
||||
|
||||
# Bad class name
|
||||
self.assertRaises(BadArgument,
|
||||
LoadZoneRunner(['-C', 'badclass'] + args).
|
||||
_parse_args)
|
||||
# Unsupported class
|
||||
self.assertRaises(BadArgument,
|
||||
LoadZoneRunner(['-C', 'CH'] + args)._parse_args)
|
||||
|
||||
# bad debug level
|
||||
self.assertRaises(BadArgument,
|
||||
LoadZoneRunner(['-d', '-10'] + args)._parse_args)
|
||||
|
||||
# bad report interval
|
||||
self.assertRaises(BadArgument,
|
||||
LoadZoneRunner(['-i', '-5'] + args)._parse_args)
|
||||
|
||||
# -c cannot be omitted unless it's type sqlite3 (right now)
|
||||
self.assertRaises(BadArgument,
|
||||
LoadZoneRunner(['-t', 'memory'] + args)._parse_args)
|
||||
|
||||
def test_get_datasrc_config(self):
|
||||
# For sqlite3, we use the config with the well-known DB file.
|
||||
expected_conf = \
|
||||
'{"database_file": "' + bind10_config.DATA_PATH + '/zone.sqlite3"}'
|
||||
self.assertEqual(expected_conf,
|
||||
self.__runner._get_datasrc_config('sqlite3'))
|
||||
|
||||
# For other types, config must be given by hand for now
|
||||
self.assertRaises(BadArgument, self.__runner._get_datasrc_config,
|
||||
'memory')
|
||||
|
||||
def __common_load_setup(self):
|
||||
self.__runner._zone_class = RRClass.IN()
|
||||
self.__runner._zone_name = TEST_ZONE_NAME
|
||||
self.__runner._zone_file = NEW_ZONE_TXT_FILE
|
||||
self.__runner._datasrc_type = 'sqlite3'
|
||||
self.__runner._datasrc_config = DATASRC_CONFIG
|
||||
self.__runner._report_interval = 1
|
||||
self.__reports = []
|
||||
self.__runner._report_progress = lambda x: self.__reports.append(x)
|
||||
|
||||
def __check_zone_soa(self, soa_txt, zone_name=TEST_ZONE_NAME):
|
||||
"""Check that the given SOA RR exists and matches the expected string
|
||||
|
||||
If soa_txt is None, the zone is expected to be non-existent.
|
||||
Otherwise, if soa_txt is False, the zone should exist but SOA is
|
||||
expected to be missing.
|
||||
|
||||
"""
|
||||
|
||||
client = DataSourceClient('sqlite3', DATASRC_CONFIG)
|
||||
result, finder = client.find_zone(zone_name)
|
||||
if soa_txt is None:
|
||||
self.assertEqual(client.NOTFOUND, result)
|
||||
return
|
||||
self.assertEqual(client.SUCCESS, result)
|
||||
result, rrset, _ = finder.find(zone_name, RRType.SOA())
|
||||
if soa_txt:
|
||||
self.assertEqual(finder.SUCCESS, result)
|
||||
self.assertEqual(soa_txt, rrset.to_text())
|
||||
else:
|
||||
self.assertEqual(finder.NXRRSET, result)
|
||||
|
||||
def test_load_update(self):
|
||||
'''successful case to loading new contents to an existing zone.'''
|
||||
self.__common_load_setup()
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
self.__runner._do_load()
|
||||
# In this test setup every loaded RR will be reported, and there will
|
||||
# be 3 RRs
|
||||
self.assertEqual([1, 2, 3], self.__reports)
|
||||
self.__check_zone_soa(NEW_SOA_TXT)
|
||||
|
||||
def test_load_update_skipped_report(self):
|
||||
'''successful loading, with reports for every 2 RRs'''
|
||||
self.__common_load_setup()
|
||||
self.__runner._report_interval = 2
|
||||
self.__runner._do_load()
|
||||
self.assertEqual([2], self.__reports)
|
||||
|
||||
def test_load_update_no_report(self):
|
||||
'''successful loading, without progress reports'''
|
||||
self.__common_load_setup()
|
||||
self.__runner._report_interval = 0
|
||||
self.__runner._do_load()
|
||||
self.assertEqual([], self.__reports) # no report
|
||||
self.__check_zone_soa(NEW_SOA_TXT) # but load is completed
|
||||
|
||||
def test_create_and_load(self):
|
||||
'''successful case to loading contents to a new zone (created).'''
|
||||
self.__common_load_setup()
|
||||
self.__runner._zone_name = Name('example.com')
|
||||
self.__runner._zone_file = ALT_NEW_ZONE_TXT_FILE
|
||||
self.__check_zone_soa(None, zone_name=Name('example.com'))
|
||||
self.__runner._do_load()
|
||||
self.__check_zone_soa(ALT_NEW_SOA_TXT, zone_name=Name('example.com'))
|
||||
|
||||
def test_load_fail_badconfig(self):
|
||||
'''Load attempt fails due to broken datasrc config.'''
|
||||
self.__common_load_setup()
|
||||
self.__runner._datasrc_config = "invalid config"
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
self.assertRaises(LoadFailure, self.__runner._do_load)
|
||||
self.__check_zone_soa(ORIG_SOA_TXT) # no change to the zone
|
||||
|
||||
def test_load_fail_badzone(self):
|
||||
'''Load attempt fails due to broken zone file.'''
|
||||
self.__common_load_setup()
|
||||
self.__runner._zone_file = \
|
||||
LOCAL_TESTDATA_PATH + '/broken-example.org.zone'
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
self.assertRaises(LoadFailure, self.__runner._do_load)
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
|
||||
def test_load_fail_noloader(self):
|
||||
'''Load attempt fails because loading isn't supported'''
|
||||
self.__common_load_setup()
|
||||
self.__runner._datasrc_type = 'memory'
|
||||
self.__runner._datasrc_config = '{"type": "memory"}'
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
self.assertRaises(LoadFailure, self.__runner._do_load)
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
|
||||
def test_load_fail_create_cancel(self):
|
||||
'''Load attempt fails and new creation of zone is canceled'''
|
||||
self.__common_load_setup()
|
||||
self.__runner._zone_name = Name('example.com')
|
||||
self.__runner._zone_file = 'no-such-file'
|
||||
self.__check_zone_soa(None, zone_name=Name('example.com'))
|
||||
self.assertRaises(LoadFailure, self.__runner._do_load)
|
||||
# _do_load() should have once created the zone but then canceled it.
|
||||
self.__check_zone_soa(None, zone_name=Name('example.com'))
|
||||
|
||||
def __common_post_load_setup(self, zone_file):
|
||||
'''Common setup procedure for post load tests.'''
|
||||
# replace the LoadZoneRunner's original _post_load_warning() for
|
||||
# inspection
|
||||
self.__warnings = []
|
||||
self.__runner._post_load_warning = \
|
||||
lambda msg: self.__warnings.append(msg)
|
||||
|
||||
# perform load and invoke checks
|
||||
self.__common_load_setup()
|
||||
self.__runner._zone_file = zone_file
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
self.__runner._do_load()
|
||||
self.__runner._post_load_checks()
|
||||
|
||||
def test_load_post_check_fail_soa(self):
|
||||
'''Load succeeds but warns about missing SOA, should cause warn'''
|
||||
self.__common_load_setup()
|
||||
self.__common_post_load_setup(LOCAL_TESTDATA_PATH +
|
||||
'/example-nosoa.org.zone')
|
||||
self.__check_zone_soa(False)
|
||||
self.assertEqual(1, len(self.__warnings))
|
||||
self.assertEqual('zone has no SOA', self.__warnings[0])
|
||||
|
||||
def test_load_post_check_fail_ns(self):
|
||||
'''Load succeeds but warns about missing NS, should cause warn'''
|
||||
self.__common_load_setup()
|
||||
self.__common_post_load_setup(LOCAL_TESTDATA_PATH +
|
||||
'/example-nons.org.zone')
|
||||
self.__check_zone_soa(NEW_SOA_TXT)
|
||||
self.assertEqual(1, len(self.__warnings))
|
||||
self.assertEqual('zone has no NS', self.__warnings[0])
|
||||
|
||||
def __interrupt_progress(self, loaded_rrs):
|
||||
'''A helper emulating a signal in the middle of loading.
|
||||
|
||||
On the second progress report, it internally invokes the signal
|
||||
handler to see if it stops the loading.
|
||||
|
||||
'''
|
||||
self.__reports.append(loaded_rrs)
|
||||
if len(self.__reports) == 2:
|
||||
self.__runner._interrupt_handler()
|
||||
|
||||
def test_load_interrupted(self):
|
||||
'''Load attempt fails due to signal interruption'''
|
||||
self.__common_load_setup()
|
||||
self.__runner._report_progress = lambda x: self.__interrupt_progress(x)
|
||||
# The interrupting _report_progress() will terminate the loading
|
||||
# in the middle. the number of reports is smaller, and the zone
|
||||
# won't be changed.
|
||||
self.assertRaises(LoadFailure, self.__runner._do_load)
|
||||
self.assertEqual([1, 2], self.__reports)
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
|
||||
def test_load_interrupted_create_cancel(self):
|
||||
'''Load attempt for a new zone fails due to signal interruption
|
||||
|
||||
It cancels the zone creation.
|
||||
|
||||
'''
|
||||
self.__common_load_setup()
|
||||
self.__runner._report_progress = lambda x: self.__interrupt_progress(x)
|
||||
self.__runner._zone_name = Name('example.com')
|
||||
self.__runner._zone_file = ALT_NEW_ZONE_TXT_FILE
|
||||
self.__check_zone_soa(None, zone_name=Name('example.com'))
|
||||
self.assertRaises(LoadFailure, self.__runner._do_load)
|
||||
self.assertEqual([1, 2], self.__reports)
|
||||
self.__check_zone_soa(None, zone_name=Name('example.com'))
|
||||
|
||||
def test_run_success(self):
|
||||
'''Check for the top-level method.
|
||||
|
||||
Detailed behavior is tested in other tests. We only check the
|
||||
return value of run(), and the zone is successfully loaded.
|
||||
|
||||
'''
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
self.assertEqual(0, self.__runner.run())
|
||||
self.__check_zone_soa(NEW_SOA_TXT)
|
||||
|
||||
def test_run_fail(self):
|
||||
'''Check for the top-level method, failure case.
|
||||
|
||||
Similar to the success test, but loading will fail, and return
|
||||
value should be 1.
|
||||
|
||||
'''
|
||||
runner = LoadZoneRunner(['-c', DATASRC_CONFIG, 'example.org',
|
||||
LOCAL_TESTDATA_PATH +
|
||||
'/broken-example.org.zone'])
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
self.assertEqual(1, runner.run())
|
||||
self.__check_zone_soa(ORIG_SOA_TXT)
|
||||
|
||||
if __name__== "__main__":
|
||||
isc.log.resetUnitTestRootLogger()
|
||||
# Disable the internal logging setup so the test output won't be too
|
||||
# verbose by default.
|
||||
LoadZoneRunner._config_log = lambda x: None
|
||||
|
||||
# Cancel signal handlers so we can stop tests when they hang
|
||||
LoadZoneRunner._set_signal_handlers = lambda x: None
|
||||
unittest.main()
|
11
src/bin/loadzone/tests/testdata/broken-example.org.zone
vendored
Normal file
@@ -0,0 +1,11 @@
|
||||
example.org. 3600 IN SOA (
|
||||
ns.example.org.
|
||||
admin.example.org.
|
||||
1235
|
||||
3600 ;1H
|
||||
1800 ;30M
|
||||
2419200
|
||||
7200)
|
||||
example.org. 3600 IN NS ns.example.org.
|
||||
ns.example.org. 3600 IN A 192.0.2.1
|
||||
bad..name.example.org. 3600 IN AAAA 2001:db8::1
|
10
src/bin/loadzone/tests/testdata/example-nons.org.zone
vendored
Normal file
@@ -0,0 +1,10 @@
|
||||
;; Intentionally missing NS for testing post-load checks
|
||||
example.org. 3600 IN SOA (
|
||||
ns.example.org.
|
||||
admin.example.org.
|
||||
1235
|
||||
3600 ;1H
|
||||
1800 ;30M
|
||||
2419200
|
||||
7200)
|
||||
ns.example.org. 3600 IN A 192.0.2.1
|
3
src/bin/loadzone/tests/testdata/example-nosoa.org.zone
vendored
Normal file
@@ -0,0 +1,3 @@
|
||||
;; Intentionally missing SOA for testing post-load checks
|
||||
example.org. 3600 IN NS ns.example.org.
|
||||
ns.example.org. 3600 IN A 192.0.2.1
|
10
src/bin/loadzone/tests/testdata/example.org.zone
vendored
Normal file
@@ -0,0 +1,10 @@
|
||||
example.org. 3600 IN SOA (
|
||||
ns.example.org.
|
||||
admin.example.org.
|
||||
1235
|
||||
3600 ;1H
|
||||
1800 ;30M
|
||||
2419200
|
||||
7200)
|
||||
example.org. 3600 IN NS ns.example.org.
|
||||
ns.example.org. 3600 IN A 192.0.2.1
|
@@ -5,10 +5,16 @@ pkglibexecdir = $(libexecdir)/@PACKAGE@
|
||||
pkglibexec_SCRIPTS = b10-msgq
|
||||
|
||||
CLEANFILES = b10-msgq msgq.pyc
|
||||
CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/msgq_messages.py
|
||||
CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/msgq_messages.pyc
|
||||
|
||||
man_MANS = b10-msgq.8
|
||||
DISTCLEANFILES = $(man_MANS)
|
||||
EXTRA_DIST = $(man_MANS) msgq.xml
|
||||
EXTRA_DIST = $(man_MANS) msgq.xml msgq_messages.mes
|
||||
|
||||
nodist_pylogmessage_PYTHON = $(PYTHON_LOGMSGPKG_DIR)/work/msgq_messages.py
|
||||
pylogmessagedir = $(pyexecdir)/isc/log_messages/
|
||||
BUILT_SOURCES = $(PYTHON_LOGMSGPKG_DIR)/work/msgq_messages.py
|
||||
|
||||
if GENERATE_DOCS
|
||||
|
||||
@@ -23,6 +29,11 @@ $(man_MANS):
|
||||
|
||||
endif
|
||||
|
||||
# Define rule to build logging source files from message file
|
||||
$(PYTHON_LOGMSGPKG_DIR)/work/msgq_messages.py : msgq_messages.mes
|
||||
$(top_builddir)/src/lib/log/compiler/message \
|
||||
-d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/msgq_messages.mes
|
||||
|
||||
# this is done here since configure.ac AC_OUTPUT doesn't expand exec_prefix
|
||||
b10-msgq: msgq.py
|
||||
$(SED) "s|@@PYTHONPATH@@|@pyexecdir@|" msgq.py >$@
|
||||
|
@@ -31,10 +31,16 @@ import select
|
||||
import random
|
||||
from optparse import OptionParser, OptionValueError
|
||||
import isc.util.process
|
||||
import isc.log
|
||||
from isc.log_messages.msgq_messages import *
|
||||
|
||||
import isc.cc
|
||||
|
||||
isc.util.process.rename()
|
||||
logger = isc.log.Logger("msgq")
|
||||
TRACE_START = logger.DBGLVL_START_SHUT
|
||||
TRACE_BASIC = logger.DBGLVL_TRACE_BASIC
|
||||
TRACE_DETAIL = logger.DBGLVL_TRACE_DETAIL
|
||||
|
||||
# This is the version that gets displayed to the user.
|
||||
# The VERSION string consists of the module name, the module version
|
||||
@@ -51,11 +57,11 @@ class SubscriptionManager:
|
||||
"""Add a subscription."""
|
||||
target = ( group, instance )
|
||||
if target in self.subscriptions:
|
||||
print("[b10-msgq] Appending to existing target")
|
||||
logger.debug(TRACE_BASIC, MSGQ_SUBS_APPEND_TARGET, group, instance)
|
||||
if socket not in self.subscriptions[target]:
|
||||
self.subscriptions[target].append(socket)
|
||||
else:
|
||||
print("[b10-msgq] Creating new target")
|
||||
logger.debug(TRACE_BASIC, MSGQ_SUBS_NEW_TARGET, group, instance)
|
||||
self.subscriptions[target] = [ socket ]
|
||||
|
||||
def unsubscribe(self, group, instance, socket):
|
||||
@@ -162,9 +168,7 @@ class MsgQ:
|
||||
|
||||
def setup_listener(self):
|
||||
"""Set up the listener socket. Internal function."""
|
||||
if self.verbose:
|
||||
sys.stdout.write("[b10-msgq] Setting up socket at %s\n" %
|
||||
self.socket_file)
|
||||
logger.debug(TRACE_BASIC, MSGQ_LISTENER_SETUP, self.socket_file)
|
||||
|
||||
self.listen_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
|
||||
|
||||
@@ -179,8 +183,7 @@ class MsgQ:
|
||||
if os.path.exists(self.socket_file):
|
||||
os.remove(self.socket_file)
|
||||
self.listen_socket.close()
|
||||
sys.stderr.write("[b10-msgq] failed to setup listener on %s: %s\n"
|
||||
% (self.socket_file, str(e)))
|
||||
logger.fatal(MSGQ_LISTENER_FAILED, self.socket_file, e)
|
||||
raise e
|
||||
|
||||
if self.poller:
|
||||
@@ -197,8 +200,7 @@ class MsgQ:
|
||||
self.setup_poller()
|
||||
self.setup_listener()
|
||||
|
||||
if self.verbose:
|
||||
sys.stdout.write("[b10-msgq] Listening\n")
|
||||
logger.debug(TRACE_START, MSGQ_LISTENER_STARTED);
|
||||
|
||||
self.runnable = True
|
||||
|
||||
@@ -226,7 +228,7 @@ class MsgQ:
|
||||
def process_socket(self, fd):
|
||||
"""Process a read on a socket."""
|
||||
if not fd in self.sockets:
|
||||
sys.stderr.write("[b10-msgq] Got read on Strange Socket fd %d\n" % fd)
|
||||
logger.error(MSGQ_READ_UNKNOWN_FD, fd)
|
||||
return
|
||||
sock = self.sockets[fd]
|
||||
# sys.stderr.write("[b10-msgq] Got read on fd %d\n" %fd)
|
||||
@@ -243,7 +245,7 @@ class MsgQ:
|
||||
del self.sockets[fd]
|
||||
if fd in self.sendbuffs:
|
||||
del self.sendbuffs[fd]
|
||||
sys.stderr.write("[b10-msgq] Closing socket fd %d\n" % fd)
|
||||
logger.debug(TRACE_BASIC, MSGQ_SOCK_CLOSE, fd)
|
||||
|
||||
def getbytes(self, fd, sock, length):
|
||||
"""Get exactly the requested bytes, or raise an exception if
|
||||
@@ -285,15 +287,15 @@ class MsgQ:
|
||||
try:
|
||||
routing, data = self.read_packet(fd, sock)
|
||||
except MsgQReceiveError as err:
|
||||
logger.error(MSGQ_RECV_ERR, fd, err)
|
||||
self.kill_socket(fd, sock)
|
||||
sys.stderr.write("[b10-msgq] Receive error: %s\n" % err)
|
||||
return
|
||||
|
||||
try:
|
||||
routingmsg = isc.cc.message.from_wire(routing)
|
||||
except DecodeError as err:
|
||||
self.kill_socket(fd, sock)
|
||||
sys.stderr.write("[b10-msgq] Routing decode error: %s\n" % err)
|
||||
logger.error(MSGQ_HDR_DECODE_ERR, fd, err)
|
||||
return
|
||||
|
||||
self.process_command(fd, sock, routingmsg, data)
|
||||
@@ -301,9 +303,7 @@ class MsgQ:
|
||||
def process_command(self, fd, sock, routing, data):
|
||||
"""Process a single command. This will split out into one of the
|
||||
other functions."""
|
||||
# TODO: A print statement got removed here (one that prints the
|
||||
# routing envelope). When we have logging with multiple levels,
|
||||
# we might want to re-add that on a high debug verbosity.
|
||||
logger.debug(TRACE_DETAIL, MSGQ_RECV_HDR, routing)
|
||||
cmd = routing["type"]
|
||||
if cmd == 'send':
|
||||
self.process_command_send(sock, routing, data)
|
||||
@@ -319,7 +319,7 @@ class MsgQ:
|
||||
elif cmd == 'stop':
|
||||
self.stop()
|
||||
else:
|
||||
sys.stderr.write("[b10-msgq] Invalid command: %s\n" % cmd)
|
||||
logger.error(MSGQ_INVALID_CMD, cmd)
|
||||
|
||||
def preparemsg(self, env, msg = None):
|
||||
if type(env) == dict:
|
||||
@@ -363,8 +363,8 @@ class MsgQ:
|
||||
elif e.errno in [ errno.EPIPE,
|
||||
errno.ECONNRESET,
|
||||
errno.ENOBUFS ]:
|
||||
print("[b10-msgq] " + errno.errorcode[e.errno] +
|
||||
" on send, dropping message and closing connection")
|
||||
logger.error(MSGQ_SEND_ERR, sock.fileno(),
|
||||
errno.errorcode[e.errno])
|
||||
self.kill_socket(sock.fileno(), sock)
|
||||
return None
|
||||
else:
|
||||
@@ -491,7 +491,7 @@ class MsgQ:
|
||||
if err.args[0] == errno.EINTR:
|
||||
events = []
|
||||
else:
|
||||
sys.stderr.write("[b10-msgq] Error with poll(): %s\n" % err)
|
||||
logger.fatal(MSGQ_POLL_ERR, err)
|
||||
break
|
||||
for (fd, event) in events:
|
||||
if fd == self.listen_socket.fileno():
|
||||
@@ -502,7 +502,7 @@ class MsgQ:
|
||||
elif event & select.POLLIN:
|
||||
self.process_socket(fd)
|
||||
else:
|
||||
print("[b10-msgq] Error: Unknown even in run_poller()")
|
||||
logger.error(MSGQ_POLL_UNKNOWN_EVENT, fd, event)
|
||||
|
||||
def run_kqueue(self):
|
||||
while self.running:
|
||||
@@ -563,18 +563,25 @@ if __name__ == "__main__":
|
||||
help="UNIX domain socket file the msgq daemon will use")
|
||||
(options, args) = parser.parse_args()
|
||||
|
||||
# Init logging, according to the parameters.
|
||||
# FIXME: Do proper logger configuration, this is just a hack
|
||||
# This is #2582
|
||||
sev = 'INFO'
|
||||
if options.verbose:
|
||||
sev = 'DEBUG'
|
||||
isc.log.init("b10-msgq", buffer=False, severity=sev, debuglevel=99)
|
||||
|
||||
signal.signal(signal.SIGTERM, signal_handler)
|
||||
|
||||
# Announce startup.
|
||||
if options.verbose:
|
||||
sys.stdout.write("[b10-msgq] %s\n" % VERSION)
|
||||
logger.debug(TRACE_START, MSGQ_START, VERSION)
|
||||
|
||||
msgq = MsgQ(options.msgq_socket_file, options.verbose)
|
||||
|
||||
try:
|
||||
msgq.setup()
|
||||
except Exception as e:
|
||||
sys.stderr.write("[b10-msgq] Error on startup: %s\n" % str(e))
|
||||
logger.fatal(MSGQ_START_FAIL, e)
|
||||
sys.exit(1)
|
||||
|
||||
try:
|
||||
|
@@ -111,7 +111,7 @@
|
||||
<listitem><para>
|
||||
The UNIX domain socket file this daemon will use.
|
||||
The default is
|
||||
<filename>/usr/local/var/bind10-devel/msg_socket</filename>.
|
||||
<filename>/usr/local/var/bind10/msg_socket</filename>.
|
||||
<!-- @localstatedir@/@PACKAGE_NAME@/msg_socket -->
|
||||
</para></listitem>
|
||||
</varlistentry>
|
||||
|
88
src/bin/msgq/msgq_messages.mes
Normal file
@@ -0,0 +1,88 @@
|
||||
# Copyright (C) 2012 Internet Systems Consortium, Inc. ("ISC")
|
||||
#
|
||||
# Permission to use, copy, modify, and/or distribute this software for any
|
||||
# purpose with or without fee is hereby granted, provided that the above
|
||||
# copyright notice and this permission notice appear in all copies.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH
|
||||
# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
|
||||
# AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT,
|
||||
# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
|
||||
# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
|
||||
# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
|
||||
# PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
# No namespace declaration - these constants go in the global namespace
|
||||
# of the msgq messages python module.
|
||||
|
||||
# When you add a message to this file, it is a good idea to run
|
||||
# <topsrcdir>/tools/reorder_message_file.py to make sure the
|
||||
# messages are in the correct order.
|
||||
|
||||
% MSGQ_HDR_DECODE_ERR Error decoding header received from socket %1: %2
|
||||
The socket with mentioned file descriptor sent a packet. However, it was not
|
||||
possible to decode the routing header of the packet. The packet is ignored.
|
||||
This may be caused by a programmer error (one of the components sending invalid
|
||||
data) or possibly by incompatible version of msgq and the component (but that's
|
||||
unlikely, as the protocol is not changed often).
|
||||
|
||||
% MSGQ_LISTENER_FAILED Failed to initialize listener on socket file '%1': %2
|
||||
The message queue daemon tried to listen on a file socket (the path is in the
|
||||
message), but it failed. The error from the operating system is logged.
|
||||
|
||||
% MSGQ_LISTENER_SETUP Starting to listen on socket file '%1'
|
||||
Debug message. The listener is trying to open a listening socket.
|
||||
|
||||
% MSGQ_LISTENER_STARTED Successfully started to listen
|
||||
Debug message. The message queue successfully opened a listening socket and
|
||||
waits for incoming connections.
|
||||
|
||||
% MSGQ_POLL_ERR Error while polling for events: %1
|
||||
A low-level error happened when waiting for events, the error is logged. The
|
||||
reason for this varies, but it usually means the system is short on some
|
||||
resources.
|
||||
|
||||
% MSGQ_POLL_UNKNOWN_EVENT Got an unknown event from the poller for fd %1: %2
|
||||
An unknown event got out from the poll() system call. This should generally not
|
||||
happen and it is either a programmer error or OS bug. The event is ignored. The
|
||||
number noted as the event is the raw encoded value, which might be useful to
|
||||
the authors when figuring the problem out.
|
||||
|
||||
% MSGQ_READ_UNKNOWN_FD Got read on strange socket %1
|
||||
The OS reported a file descriptor is ready to read. But the daemon doesn't know
|
||||
the mentioned file descriptor, which is either a programmer error or OS bug.
|
||||
The read event is ignored.
|
||||
|
||||
% MSGQ_RECV_ERR Error reading from socket %1: %2
|
||||
There was a low-level error when reading from a socket. The error is logged and
|
||||
the corresponding socket is dropped.
|
||||
|
||||
% MSGQ_RECV_HDR Received header: %1
|
||||
Debug message. This message includes the whole routing header of a packet.
|
||||
|
||||
% MSGQ_INVALID_CMD Received invalid command: %1
|
||||
An unknown command listed in the log has been received. It is ignored. This
|
||||
indicates either a programmer error (eg. a typo in the command name) or
|
||||
incompatible version of a module and message queue daemon.
|
||||
|
||||
% MSGQ_SEND_ERR Error while sending to socket %1: %2
|
||||
There was a low-level error when sending data to a socket. The error is logged
|
||||
and the corresponding socket is dropped.
|
||||
|
||||
% MSGQ_SOCK_CLOSE Closing socket fd %1
|
||||
Debug message. Closing the mentioned socket.
|
||||
|
||||
% MSGQ_START Msgq version %1 starting
|
||||
Debug message. The message queue is starting up.
|
||||
|
||||
% MSGQ_START_FAIL Error during startup: %1
|
||||
There was an error during early startup of the daemon. More concrete error is
|
||||
in the log. The daemon terminates as a result.
|
||||
|
||||
% MSGQ_SUBS_APPEND_TARGET Appending to existing target for subscription to group '%1' for instance '%2'
|
||||
Debug message. Creating a new subscription by appending it to already existing
|
||||
data structure.
|
||||
|
||||
% MSGQ_SUBS_NEW_TARGET Creating new target for subscription to group '%1' for instance '%2'
|
||||
Debug message. Creating a new subscription. Also creating a new data structure
|
||||
to hold it.
|
@@ -20,9 +20,25 @@ export PYTHON_EXEC
|
||||
|
||||
MYPATH_PATH=@abs_top_builddir@/src/bin/msgq
|
||||
|
||||
PYTHONPATH=@abs_top_srcdir@/src/lib/python
|
||||
PYTHONPATH=@abs_top_builddir@/src/lib/python/isc/log_messages:@abs_top_builddir@/src/lib/python:@abs_top_builddir@/src/lib/log/.libs
|
||||
export PYTHONPATH
|
||||
|
||||
# If necessary (rare cases), explicitly specify paths to dynamic libraries
|
||||
# required by loadable python modules.
|
||||
SET_ENV_LIBRARY_PATH=@SET_ENV_LIBRARY_PATH@
|
||||
if test $SET_ENV_LIBRARY_PATH = yes; then
|
||||
@ENV_LIBRARY_PATH@=@abs_top_builddir@/src/lib/dns/.libs:@abs_top_builddir@/src/lib/dns/python/.libs:@abs_top_builddir@/src/lib/cryptolink/.libs:@abs_top_builddir@/src/lib/cc/.libs:@abs_top_builddir@/src/lib/config/.libs:@abs_top_builddir@/src/lib/log/.libs:@abs_top_builddir@/src/lib/acl/.libs:@abs_top_builddir@/src/lib/util/.libs:@abs_top_builddir@/src/lib/util/io/.libs:@abs_top_builddir@/src/lib/exceptions/.libs:@abs_top_builddir@/src/lib/datasrc/.libs:$@ENV_LIBRARY_PATH@
|
||||
export @ENV_LIBRARY_PATH@
|
||||
fi
|
||||
|
||||
B10_FROM_SOURCE=@abs_top_srcdir@
|
||||
export B10_FROM_SOURCE
|
||||
# TODO: We need to do this feature based (ie. no general from_source)
|
||||
# But right now we need a second one because some spec files are
|
||||
# generated and hence end up under builddir
|
||||
B10_FROM_BUILD=@abs_top_builddir@
|
||||
export B10_FROM_BUILD
|
||||
|
||||
BIND10_MSGQ_SOCKET_FILE=@abs_top_builddir@/msgq_socket
|
||||
export BIND10_MSGQ_SOCKET_FILE
|
||||
|
||||
|
@@ -21,6 +21,8 @@ endif
|
||||
$(LIBRARY_PATH_PLACEHOLDER) \
|
||||
PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/bin/msgq \
|
||||
BIND10_TEST_SOCKET_FILE=$(builddir)/test_msgq_socket.sock \
|
||||
B10_FROM_SOURCE=$(abs_top_srcdir) \
|
||||
B10_FROM_BUILD=$(abs_top_builddir) \
|
||||
$(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \
|
||||
done
|
||||
|
||||
|
@@ -1,28 +0,0 @@
|
||||
#! /bin/sh
|
||||
|
||||
# Copyright (C) 2010 Internet Systems Consortium.
|
||||
#
|
||||
# Permission to use, copy, modify, and distribute this software for any
|
||||
# purpose with or without fee is hereby granted, provided that the above
|
||||
# copyright notice and this permission notice appear in all copies.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
|
||||
# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
|
||||
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
|
||||
# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
|
||||
# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
|
||||
# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
|
||||
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
|
||||
# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
PYTHON_EXEC=${PYTHON_EXEC:-@PYTHON@}
|
||||
export PYTHON_EXEC
|
||||
|
||||
MYPATH_PATH=@abs_top_srcdir@/src/bin/msgq/tests
|
||||
|
||||
PYTHONPATH=@abs_top_srcdir@/src/bin/msgq:@abs_top_srcdir@/src/lib/python
|
||||
|
||||
export PYTHONPATH
|
||||
|
||||
cd ${MYPATH_PATH}
|
||||
exec ${PYTHON_EXEC} -O msgq_test.py $*
|
@@ -10,6 +10,7 @@ import errno
|
||||
import threading
|
||||
import isc.cc
|
||||
import collections
|
||||
import isc.log
|
||||
|
||||
#
|
||||
# Currently only the subscription part and some sending is implemented...
|
||||
@@ -457,4 +458,6 @@ class SendNonblock(unittest.TestCase):
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
isc.log.init("b10-msgq")
|
||||
isc.log.resetUnitTestRootLogger()
|
||||
unittest.main()
|
||||
|
@@ -20,7 +20,7 @@
|
||||
<refentry>
|
||||
|
||||
<refentryinfo>
|
||||
<date>February 28, 2012</date>
|
||||
<date>August 16, 2012</date>
|
||||
</refentryinfo>
|
||||
|
||||
<refmeta>
|
||||
@@ -148,6 +148,8 @@ once that is merged you can for instance do 'config add Resolver/forward_address
|
||||
address or special keyword.
|
||||
The <varname>key</varname> is a TSIG key name.
|
||||
The default configuration accepts queries from 127.0.0.1 and ::1.
|
||||
The default action is REJECT for newly added
|
||||
<varname>query_acl</varname> items.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
|
@@ -103,7 +103,7 @@
|
||||
<refsect1>
|
||||
<title>FILES</title>
|
||||
<para>
|
||||
<filename>/usr/local/share/bind10-devel/stats-httpd.spec</filename>
|
||||
<filename>/usr/local/share/bind10/stats-httpd.spec</filename>
|
||||
<!--TODO: The filename should be computed from prefix-->
|
||||
— the spec file of <command>b10-stats-httpd</command>. This file
|
||||
contains configurable settings
|
||||
@@ -115,17 +115,17 @@
|
||||
how to configure the settings.
|
||||
</para>
|
||||
<para>
|
||||
<filename>/usr/local/share/bind10-devel/stats-httpd-xml.tpl</filename>
|
||||
<filename>/usr/local/share/bind10/stats-httpd-xml.tpl</filename>
|
||||
<!--TODO: The filename should be computed from prefix-->
|
||||
— the template file of XML document.
|
||||
</para>
|
||||
<para>
|
||||
<filename>/usr/local/share/bind10-devel/stats-httpd-xsd.tpl</filename>
|
||||
<filename>/usr/local/share/bind10/stats-httpd-xsd.tpl</filename>
|
||||
<!--TODO: The filename should be computed from prefix-->
|
||||
— the template file of XSD document.
|
||||
</para>
|
||||
<para>
|
||||
<filename>/usr/local/share/bind10-devel/stats-httpd-xsl.tpl</filename>
|
||||
<filename>/usr/local/share/bind10/stats-httpd-xsl.tpl</filename>
|
||||
<!--TODO: The filename should be computed from prefix-->
|
||||
— the template file of XSL document.
|
||||
</para>
|
||||
|
@@ -210,7 +210,7 @@
|
||||
|
||||
<refsect1>
|
||||
<title>FILES</title>
|
||||
<para><filename>/usr/local/share/bind10-devel/stats.spec</filename>
|
||||
<para><filename>/usr/local/share/bind10/stats.spec</filename>
|
||||
<!--TODO: The filename should be computed from prefix-->
|
||||
— This is a spec file for <command>b10-stats</command>. It
|
||||
contains commands for <command>b10-stats</command>. They can be
|
||||
|
@@ -24,7 +24,7 @@ The serial fields of the first and last SOAs of AXFR (including AXFR-style
|
||||
IXFR) are not the same. According to RFC 5936 these two SOAs must be the
|
||||
"same" (not only for the serial), but it is still not clear what the
|
||||
receiver should do if this condition does not hold. There was a discussion
|
||||
about this at the IETF dnsext wg:
|
||||
about this at the IETF dnsext working group:
|
||||
http://www.ietf.org/mail-archive/web/dnsext/current/msg07908.html
|
||||
and the general feeling seems that it would be better to reject the
|
||||
transfer if a mismatch is detected. On the other hand, also as noted
|
||||
@@ -61,10 +61,10 @@ There was an error opening a connection to the master. The error is
|
||||
shown in the log message.
|
||||
|
||||
% XFRIN_GOT_INCREMENTAL_RESP got incremental response for %1
|
||||
In an attempt of IXFR processing, the begenning SOA of the first difference
|
||||
In an attempt of IXFR processing, the beginning SOA of the first difference
|
||||
(following the initial SOA that specified the final SOA for all the
|
||||
differences) was found. This means a connection for xfrin tried IXFR
|
||||
and really aot a response for incremental updates.
|
||||
and really got a response for incremental updates.
|
||||
|
||||
% XFRIN_GOT_NONINCREMENTAL_RESP got nonincremental response for %1
|
||||
Non incremental transfer was detected at the "first data" of a transfer,
|
||||
@@ -149,16 +149,16 @@ daemon will now shut down.
|
||||
The AXFR transfer of the given zone was successful.
|
||||
The provided information contains the following values:
|
||||
|
||||
messages: Number of overhead DNS messages in the transfer
|
||||
messages: Number of overhead DNS messages in the transfer.
|
||||
|
||||
records: Number of Resource Records in the full transfer, excluding the
|
||||
final SOA record that marks the end of the AXFR.
|
||||
|
||||
bytes: Full size of the transfer data on the wire.
|
||||
|
||||
run time: Time (in seconds) the complete axfr took
|
||||
run time: Time (in seconds) the complete axfr took.
|
||||
|
||||
bytes/second: Transfer speed
|
||||
bytes/second: Transfer speed.
|
||||
|
||||
% XFRIN_TSIG_KEY_NOT_FOUND TSIG key not found in key ring: %1
|
||||
An attempt to start a transfer with TSIG was made, but the configured TSIG
|
||||
|
@@ -70,7 +70,7 @@ AXFR-style IXFR.
|
||||
|
||||
% XFROUT_IXFR_NO_ZONE IXFR client %1, %2: zone not found with journal
|
||||
The requested zone in IXFR was not found in the data source
|
||||
even though the xfrout daemon sucessfully found the SOA RR of the zone
|
||||
even though the xfrout daemon successfully found the SOA RR of the zone
|
||||
in the data source. This can happen if the administrator removed the
|
||||
zone from the data source within the small duration between these
|
||||
operations, but it's more likely to be a bug or broken data source.
|
||||
@@ -84,9 +84,6 @@ NOTAUTH.
|
||||
An IXFR request was received, but the client's SOA version is the same as
|
||||
or newer than that of the server. The xfrout server responds to the
|
||||
request with the answer section being just one SOA of that version.
|
||||
Note: as of this wrting the 'newer version' cannot be identified due to
|
||||
the lack of support for the serial number arithmetic. This will soon
|
||||
be implemented.
|
||||
|
||||
% XFROUT_MODULECC_SESSION_ERROR error encountered by configuration/command module: %1
|
||||
There was a problem in the lower level module handling configuration and
|
||||
@@ -206,7 +203,7 @@ xfrout daemon process is still running. This xfrout daemon (the one
|
||||
printing this message) will not start.
|
||||
|
||||
% XFROUT_XFR_TRANSFER_CHECK_ERROR %1 client %2: check for transfer of %3 failed: %4
|
||||
Pre-response check for an incomding XFR request failed unexpectedly.
|
||||
Pre-response check for an incoming XFR request failed unexpectedly.
|
||||
The most likely cause of this is that some low level error in the data
|
||||
source, but it may also be other general (more unlikely) errors such
|
||||
as memory shortage. Some detail of the error is also included in the
|
||||
|
@@ -53,6 +53,22 @@ The asynchronous I/O code encountered an error when trying to send data to
|
||||
the specified address on the given protocol. The number of the system
|
||||
error that caused the problem is given in the message.
|
||||
|
||||
% ASIODNS_UDP_ASYNC_SEND_FAIL Error sending UDP packet to %1: %2
|
||||
The low-level ASIO library reported an error when trying to send a UDP
|
||||
packet in asynchronous UDP mode. This can be any error reported by
|
||||
send_to(), and can indicate problems such as too high a load on the network,
|
||||
or a problem in the underlying library or system.
|
||||
This packet is dropped and will not be sent, but service should resume
|
||||
normally.
|
||||
If you see a single occurrence of this message, it probably does not
|
||||
indicate any significant problem, but if it is logged often, it is probably
|
||||
a good idea to inspect your network traffic.
|
||||
|
||||
% ASIODNS_UDP_SYNC_SEND_FAIL Error sending UDP packet to %1: %2
|
||||
The low-level ASIO library reported an error when trying to send a UDP
|
||||
packet in synchronous UDP mode. See ASIODNS_UDP_ASYNC_SEND_FAIL for
|
||||
more information.
|
||||
|
||||
% ASIODNS_UNKNOWN_ORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3)
|
||||
An internal consistency check on the origin of a message from the
|
||||
asynchronous I/O module failed. This may indicate an internal error;
|
||||
|
@@ -148,9 +148,15 @@ SyncUDPServer::handleRead(const asio::error_code& ec, const size_t length) {
|
||||
return;
|
||||
}
|
||||
|
||||
asio::error_code ec;
|
||||
socket_->send_to(asio::buffer(output_buffer_->getData(),
|
||||
output_buffer_->getLength()),
|
||||
sender_);
|
||||
sender_, 0, ec);
|
||||
if (ec) {
|
||||
LOG_ERROR(logger, ASIODNS_UDP_SYNC_SEND_FAIL).
|
||||
arg(sender_.address().to_string()).
|
||||
arg(ec.message());
|
||||
}
|
||||
}
|
||||
|
||||
// And schedule handling another socket.
|
||||
|
@@ -299,10 +299,16 @@ UDPServer::operator()(asio::error_code ec, size_t length) {
|
||||
// Begin an asynchronous send, and then yield. When the
|
||||
// send completes, we will resume immediately after this point
|
||||
// (though we have nothing further to do, so the coroutine
|
||||
// will simply exit at that time).
|
||||
// will simply exit at that time, after reporting an error if
|
||||
// there was one).
|
||||
CORO_YIELD data_->socket_->async_send_to(
|
||||
buffer(data_->respbuf_->getData(), data_->respbuf_->getLength()),
|
||||
*data_->sender_, *this);
|
||||
if (ec) {
|
||||
LOG_ERROR(logger, ASIODNS_UDP_ASYNC_SEND_FAIL).
|
||||
arg(data_->sender_->address().to_string()).
|
||||
arg(ec.message());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
Some files were not shown because too many files have changed in this diff.