diff --git a/ChangeLog b/ChangeLog index d88f759a72..547192ef03 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,4 +1,4 @@ -266. [func]* tomek +293. [func]* tomek b10-dhcp6: Implemented DHCPv6 echo server. It joins DHCPv6 multicast groups and listens to incoming DHCPv6 client messages. Received messages are then echoed back to clients. This @@ -8,6 +8,184 @@ and its address must be specified in interfaces.txt. (Trac #878, git 3b1a604abf5709bfda7271fa94213f7d823de69d) +292. [func] dvv + Implement the DLV rrtype according to RFC4431. + (Trac #1144, git d267c0511a07c41cd92e3b0b9ee9bf693743a7cf) + +291. [func] naokikambe + Statistics items are specified by each module's spec file. + Stats module can read these through the config manager. Stats + module and stats httpd report statistics data and statistics + schema by each module via both bindctl and HTTP/XML. + (Trac #928,#929,#930,#1175, git 054699635affd9c9ecbe7a108d880829f3ba229e) + +290. [func] jinmei + libdns++/pydnspp: added an option parameter to the "from wire" + methods of the Message class. One option is defined, + PRESERVE_ORDER, which specifies the parser to handle each RR + separately, preserving the order, and constructs RRsets in the + message sections so that each RRset contains only one RR. + (Trac #1258, git c874cb056e2a5e656165f3c160e1b34ccfe8b302) + +289. [func]* jinmei + b10-xfrout: ACLs for xfrout can now be configured per zone basis. + A per zone ACl is part of a more general zone configuration. A + quick example for configuring an ACL for zone "example.com" that + rejects any transfer request for that zone is as follows: + > config add Xfrout/zone_config + > config set Xfrout/zone_config[0]/origin "example.com" + > config add Xfrout/zone_config[0]/transfer_acl + > config set Xfrout/zone_config[0]/transfer_acl[0] {"action": "REJECT"} + The previous global ACL (query_acl) was renamed to transfer_acl, + which now works as the default ACL. Note: backward compatibility + is not provided, so an existing configuration using query_acl + needs to be updated by hand. + Note: the per zone configuration framework is a temporary + workaround. It will eventually be redesigned as a system wide + configuration. + (Trac #1165, git 698176eccd5d55759fe9448b2c249717c932ac31) + +288. [bug] stephen + Fixed problem whereby the order in which component files appeared in + rdataclass.cc was system dependent, leading to problems on some + systems where data types were used before the header file in which + they were declared was included. + (Trac #1202, git 4a605525cda67bea8c43ca8b3eae6e6749797450) + +287. [bug]* jinmei + Python script files for log messages (xxx_messages.py) should have + been installed under the "isc" package. This fix itself should + be a transparent change without affecting existing configurations + or other operational practices, but you may want to clean up the + python files from the common directly (such as "site-packages"). + (Trac #1101, git 0eb576518f81c3758c7dbaa2522bd8302b1836b3) + +286. [func] ocean + libdns++: Implement the HINFO rrtype support according to RFC1034, + and RFC1035. + (Trac #1112, git 12d62d54d33fbb1572a1aa3089b0d547d02924aa) + +285. [bug] jelte + sqlite3 data source: fixed a race condition on initial startup, + when the database has not been initialized yet, and multiple + processes are trying to do so, resulting in one of them failing. + (Trac #326, git 5de6f9658f745e05361242042afd518b444d7466) + +284. 
[bug] jerry + b10-zonemgr: zonemgr will not terminate on empty zones, it will + log a warning and try to do zone transfer for them. + (Trac #1153, git 0a39659638fc68f60b95b102968d7d0ad75443ea) + +283. [bug] zhanglikun + Make stats and boss processes wait for answer messages from each + other in block mode to avoid orphan answer messages, add an internal + command "getstats" to boss process for getting statistics data from + boss. + (Trac #519, git 67d8e93028e014f644868fede3570abb28e5fb43) + +282. [func] ocean + libdns++: Implement the NAPTR rrtype according to RFC2915, + RFC2168 and RFC3403. + (Trac #1130, git 01d8d0f13289ecdf9996d6d5d26ac0d43e30549c) + +bind10-devel-20110819 released on August 19, 2011 + +281. [func] jelte + Added a new type for configuration data: "named set". This allows for + similar configuration as the current "list" type, but with strings + instead of indices as identifiers. The intended use is for instance + /foo/zones/example.org/bar instead of /foo/zones[2]/bar. Currently + this new type is not in use yet. + (Trac #926, git 06aeefc4787c82db7f5443651f099c5af47bd4d6) + +280. [func] jerry + libdns++: Implement the MINFO rrtype according to RFC1035. + (Trac #1113, git 7a9a19d6431df02d48a7bc9de44f08d9450d3a37) + +279. [func] jerry + libdns++: Implement the AFSDB rrtype according to RFC1183. + (Trac #1114, git ce052cd92cd128ea3db5a8f154bd151956c2920c) + +278. [doc] jelte + Add logging configuration documentation to the guide. + (Trac #1011, git 2cc500af0929c1f268aeb6f8480bc428af70f4c4) + +277. [func] jerry + libdns++: Implement the SRV rrtype according to RFC2782. + (Trac #1128, git 5fd94aa027828c50e63ae1073d9d6708e0a9c223) + +276. [func] stephen + Although the top-level loggers are named after the program (e.g. + b10-auth, b10-resolver), allow the logger configuration to omit the + "b10-" prefix and use just the module name. + (Trac #1003, git a01cd4ac5a68a1749593600c0f338620511cae2d) + +275. [func] jinmei + Added support for TSIG key matching in ACLs. The xfrout ACL can + now refer to TSIG key names using the "key" attribute. For + example, the following specifies an ACL that allows zone transfer + if and only if the request is signed with a TSIG of a key name + "key.example": + > config set Xfrout/query_acl[0] {"action": "ACCEPT", \ + "key": "key.example"} + (Trac #1104, git 9b2e89cabb6191db86f88ee717f7abc4171fa979) + +274. [bug] naokikambe + add unittests for functions xml_handler, xsd_handler and xsl_handler + respectively to make sure their behaviors are correct, regardless of + whether type which xml.etree.ElementTree.tostring() after Python3.2 + returns is str or byte. + (Trac #1021, git 486bf91e0ecc5fbecfe637e1e75ebe373d42509b) + +273. [func] vorner + It is possible to specify ACL for the xfrout module. It is in the ACL + configuration key and has the usual ACL syntax. It currently supports + only the source address. Default ACL accepts everything. + (Trac #772, git 50070c824270d5da1db0b716db73b726d458e9f7) + +272. [func] jinmei + libdns++/pydnspp: TSIG signing now handles truncated DNS messages + (i.e. with TC bit on) with TSIG correctly. + (Trac #910, 8e00f359e81c3cb03c5075710ead0f87f87e3220) + +271. [func] stephen + Default logging for unit tests changed to severity DEBUG (level 99) + with the output routed to /dev/null. This can be altered by setting + the B10_LOGGER_XXX environment variables. + (Trac #1024, git 72a0beb8dfe85b303f546d09986461886fe7a3d8) + +270. [func] jinmei + Added python bindings for ACLs using the DNS request as the + context. 
They are accessible via the isc.acl.dns module. + (Trac #983, git c24553e21fe01121a42e2136d0a1230d75812b27) + +269. [bug] y-aharen + Modified IntervalTimerTest not to rely on the accuracy of the timer. + This fix addresses occasional failure of build tests. + (Trac #1016, git 090c4c5abac33b2b28d7bdcf3039005a014f9c5b) + +268. [func] stephen + Add environment variable to allow redirection of logging output during + unit tests. + (Trac #1071, git 05164f9d61006869233b498d248486b4307ea8b6) + +bind10-devel-20110705 released on July 05, 2011 + +267. [func] tomek + Added a dummy module for DHCP6. This module does not actually + do anything at this point, and BIND 10 has no option for + starting it yet. It is included as a base for further + development. + (Trac #990, git 4a590df96a1b1d373e87f1f56edaceccb95f267d) + +266. [func] Multiple developers + Convert various error messages, debugging and other output + to the new logging interface, including for b10-resolver, + the resolver library, the CC library, b10-auth, b10-cfgmgr, + b10-xfrin, and b10-xfrout. This includes a lot of new + documentation describing the new log messages. + (Trac #738, #739, #742, #746, #759, #761, #762) + 265. [func]* jinmei b10-resolver: Introduced ACL on incoming queries. By default the resolver accepts queries from ::1 and 127.0.0.1 and rejects all @@ -62,7 +240,7 @@ Now builds and runs with Python 3.2 (Trac #710, git dae1d2e24f993e1eef9ab429326652f40a006dfb) -257. [bug] y-aharen +257. [bug] y-aharen Fixed a bug an instance of IntervalTimerImpl may be destructed while deadline_timer is holding the handler. This fix addresses occasional failure of IntervalTimerTest.destructIntervalTimer. @@ -71,25 +249,25 @@ 256. [bug] jerry src/bin/xfrin: update xfrin to check TSIG before other part of incoming message. - (Trac955, git 261450e93af0b0406178e9ef121f81e721e0855c) + (Trac #955, git 261450e93af0b0406178e9ef121f81e721e0855c) 255. [func] zhang likun src/lib/cache: remove empty code in lib/cache and the corresponding suppression rule in src/cppcheck-suppress.lst. - (Trac639, git 4f714bac4547d0a025afd314c309ca5cb603e212) + (Trac #639, git 4f714bac4547d0a025afd314c309ca5cb603e212) 254. [bug] jinmei b10-xfrout: failed to send notifies over IPv6 correctly. - (Trac964, git 3255c92714737bb461fb67012376788530f16e40) + (Trac #964, git 3255c92714737bb461fb67012376788530f16e40) -253. [func] jelte +253. [func] jelte Add configuration options for logging through the virtual module Logging. - (Trac 736, git 9fa2a95177265905408c51d13c96e752b14a0824) + (Trac #736, git 9fa2a95177265905408c51d13c96e752b14a0824) -252. [func] stephen +252. [func] stephen Add syslog as destination for logging. - (Trac976, git 31a30f5485859fd3df2839fc309d836e3206546e) + (Trac #976, git 31a30f5485859fd3df2839fc309d836e3206546e) 251. [bug]* jinmei Make sure bindctl private files are non readable to anyone except @@ -98,38 +276,38 @@ group will have to be adjusted. Also note that this change is only effective for a fresh install; if these files already exist, their permissions must be adjusted by hand (if necessary). - (Trac870, git 461fc3cb6ebabc9f3fa5213749956467a14ebfd4) + (Trac #870, git 461fc3cb6ebabc9f3fa5213749956467a14ebfd4) -250. [bug] ocean +250. [bug] ocean src/lib/util/encode, in some conditions, the DecodeNormalizer's iterator may reach the end() and when later being dereferenced it will cause crash on some platform. - (Trac838, git 83e33ec80c0c6485d8b116b13045b3488071770f) + (Trac #838, git 83e33ec80c0c6485d8b116b13045b3488071770f) -249. 
[func] jerry +249. [func] jerry xfrout: add support for TSIG verification. - (Trac816, git 3b2040e2af2f8139c1c319a2cbc429035d93f217) + (Trac #816, git 3b2040e2af2f8139c1c319a2cbc429035d93f217) -248. [func] stephen +248. [func] stephen Add file and stderr as destinations for logging. - (Trac555, git 38b3546867425bd64dbc5920111a843a3330646b) + (Trac #555, git 38b3546867425bd64dbc5920111a843a3330646b) -247. [func] jelte +247. [func] jelte Upstream queries from the resolver now set EDNS0 buffer size. - (Trac834, git 48e10c2530fe52c9bde6197db07674a851aa0f5d) + (Trac #834, git 48e10c2530fe52c9bde6197db07674a851aa0f5d) -246. [func] stephen +246. [func] stephen Implement logging using log4cplus (http://log4cplus.sourceforge.net) - (Trac899, git 31d3f525dc01638aecae460cb4bc2040c9e4df10) + (Trac #899, git 31d3f525dc01638aecae460cb4bc2040c9e4df10) -245. [func] vorner +245. [func] vorner Authoritative server can now sign the answers using TSIG (configured in tsig_keys/keys, list of strings like "name::sha1-hmac"). It doesn't use them for ACL yet, only verifies them and signs if the request is signed. - (Trac875, git fe5e7003544e4e8f18efa7b466a65f336d8c8e4d) + (Trac #875, git fe5e7003544e4e8f18efa7b466a65f336d8c8e4d) -244. [func] stephen +244. [func] stephen In unit tests, allow the choice of whether unhandled exceptions are caught in the unit test program (and details printed) or allowed to propagate to the default exception handler. See the bind10-dev thread @@ -139,7 +317,7 @@ 243. [func]* feng Add optional hmac algorithm SHA224/384/812. - (Trac#782, git 77d792c9d7c1a3f95d3e6a8b721ac79002cd7db1) + (Trac #782, git 77d792c9d7c1a3f95d3e6a8b721ac79002cd7db1) bind10-devel-20110519 released on May 19, 2011 @@ -186,7 +364,7 @@ bind10-devel-20110519 released on May 19, 2011 stats module and stats-httpd module, and maybe with other statistical modules in future. "stats.spec" has own configuration and commands of stats module, if it requires. - (Trac#719, git a234b20dc6617392deb8a1e00eb0eed0ff353c0a) + (Trac #719, git a234b20dc6617392deb8a1e00eb0eed0ff353c0a) 236. [func] jelte C++ client side of configuration now uses BIND10 logging system. @@ -229,13 +407,13 @@ bind10-devel-20110519 released on May 19, 2011 instead of '%s,%d', which allows us to cope better with mismatched placeholders and allows reordering of them in case of translation. - (Trac901, git 4903410e45670b30d7283f5d69dc28c2069237d6) + (Trac #901, git 4903410e45670b30d7283f5d69dc28c2069237d6) 230. [bug] naokikambe Removed too repeated verbose messages in two cases of: - when auth sends statistics data to stats - when stats receives statistics data from other modules - (Trac#620, git 0ecb807011196eac01f281d40bc7c9d44565b364) + (Trac #620, git 0ecb807011196eac01f281d40bc7c9d44565b364) 229. [doc] jreed Add manual page for b10-host. diff --git a/README b/README index a6509da2d2..4b84a88939 100644 --- a/README +++ b/README @@ -8,10 +8,10 @@ for serving, maintaining, and developing DNS. BIND10-devel is new development leading up to the production BIND 10 release. It contains prototype code and experimental interfaces. Nevertheless it is ready to use now for testing the -new BIND 10 infrastructure ideas. The Year 2 milestones of the -five year plan are described here: +new BIND 10 infrastructure ideas. 
The Year 3 goals of the five +year plan are described here: - https://bind10.isc.org/wiki/Year2Milestones + http://bind10.isc.org/wiki/Year3Goals This release includes the bind10 master process, b10-msgq message bus, b10-auth authoritative DNS server (with SQLite3 and in-memory @@ -67,8 +67,8 @@ e.g., Operating-System specific tips: - FreeBSD - You may need to install a python binding for sqlite3 by hand. A - sample procedure is as follows: + You may need to install a python binding for sqlite3 by hand. + A sample procedure is as follows: - add the following to /etc/make.conf PYTHON_VERSION=3.1 - build and install the python binding from ports, assuming the top diff --git a/src/bin/stats/tests/http/__init__.py b/TODO similarity index 100% rename from src/bin/stats/tests/http/__init__.py rename to TODO diff --git a/configure.ac b/configure.ac index 348708fde1..193c2ec1a5 100644 --- a/configure.ac +++ b/configure.ac @@ -2,7 +2,7 @@ # Process this file with autoconf to produce a configure script. AC_PREREQ([2.59]) -AC_INIT(bind10-devel, 20110519, bind10-dev@isc.org) +AC_INIT(bind10-devel, 20110809, bind10-dev@isc.org) AC_CONFIG_SRCDIR(README) AM_INIT_AUTOMAKE AC_CONFIG_HEADERS([config.h]) @@ -12,6 +12,12 @@ AC_PROG_CXX # Libtool configuration # + +# libtool cannot handle spaces in paths, so exit early if there is one +if [ test `echo $PWD | grep -c ' '` != "0" ]; then + AC_MSG_ERROR([BIND 10 cannot be built in a directory that contains spaces, because of libtool limitations. Please change the directory name, or use a symbolic link that does not contain spaces.]) +fi + # On FreeBSD (and probably some others), clang++ does not meet an autoconf # assumption in identifying libtool configuration regarding shared library: # the configure script will execute "$CC -shared $CFLAGS/$CXXFLAGS -v" and @@ -139,6 +145,26 @@ else AC_SUBST(pkgpyexecdir) fi +# We need to store the default pyexecdir in a separate variable so that +# we can specify in Makefile.am the install directory of various BIND 10 +# python scripts and loadable modules; in Makefile.am we cannot replace +# $(pyexecdir) using itself, e.g, this doesn't work: +# pyexecdir = $(pyexecdir)/isc/some_module +# The separate variable makes this setup possible as follows: +# pyexecdir = $(PYTHON_SITEPKG_DIR)/isc/some_module +PYTHON_SITEPKG_DIR=${pyexecdir} +AC_SUBST(PYTHON_SITEPKG_DIR) + +# This will be commonly used in various Makefile.am's that need to generate +# python log messages. +PYTHON_LOGMSGPKG_DIR="\$(top_builddir)/src/lib/python/isc/log_messages" +AC_SUBST(PYTHON_LOGMSGPKG_DIR) + +# This is python package paths commonly used in python tests. See +# README of log_messages for why it's included. 
+COMMON_PYTHON_PATH="\$(abs_top_builddir)/src/lib/python/isc/log_messages:\$(abs_top_srcdir)/src/lib/python:\$(abs_top_builddir)/src/lib/python" +AC_SUBST(COMMON_PYTHON_PATH) + # Check for python development environments if test -x ${PYTHON}-config; then PYTHON_INCLUDES=`${PYTHON}-config --includes` @@ -260,6 +286,8 @@ B10_CXXFLAGS="-Wall -Wextra -Wwrite-strings -Woverloaded-virtual -Wno-sign-compa case "$host" in *-solaris*) MULTITHREADING_FLAG=-pthreads + # In Solaris, IN6ADDR_ANY_INIT and IN6ADDR_LOOPBACK_INIT need -Wno-missing-braces + B10_CXXFLAGS="$B10_CXXFLAGS -Wno-missing-braces" ;; *) MULTITHREADING_FLAG=-pthread @@ -409,7 +437,7 @@ AC_ARG_WITH([botan], AC_HELP_STRING([--with-botan=PATH], [specify exact directory of Botan library]), [botan_path="$withval"]) -if test "${botan_path}" == "no" ; then +if test "${botan_path}" = "no" ; then AC_MSG_ERROR([Need botan for libcryptolink]) fi if test "${botan_path}" != "yes" ; then @@ -482,7 +510,7 @@ AC_ARG_WITH([log4cplus], AC_HELP_STRING([--with-log4cplus=PATH], [specify exact directory of log4cplus library and headers]), [log4cplus_path="$withval"]) -if test "${log4cplus_path}" == "no" ; then +if test "${log4cplus_path}" = "no" ; then AC_MSG_ERROR([Need log4cplus]) elif test "${log4cplus_path}" != "yes" ; then LOG4CPLUS_INCLUDES="-I${log4cplus_path}/include" @@ -789,12 +817,6 @@ AC_CONFIG_FILES([Makefile src/bin/zonemgr/tests/Makefile src/bin/stats/Makefile src/bin/stats/tests/Makefile - src/bin/stats/tests/isc/Makefile - src/bin/stats/tests/isc/cc/Makefile - src/bin/stats/tests/isc/config/Makefile - src/bin/stats/tests/isc/util/Makefile - src/bin/stats/tests/testdata/Makefile - src/bin/stats/tests/http/Makefile src/bin/usermgr/Makefile src/bin/tests/Makefile src/lib/Makefile @@ -809,21 +831,30 @@ AC_CONFIG_FILES([Makefile src/lib/cc/tests/Makefile src/lib/python/Makefile src/lib/python/isc/Makefile + src/lib/python/isc/acl/Makefile + src/lib/python/isc/acl/tests/Makefile src/lib/python/isc/util/Makefile src/lib/python/isc/util/tests/Makefile src/lib/python/isc/datasrc/Makefile src/lib/python/isc/datasrc/tests/Makefile + src/lib/python/isc/dns/Makefile src/lib/python/isc/cc/Makefile src/lib/python/isc/cc/tests/Makefile src/lib/python/isc/config/Makefile src/lib/python/isc/config/tests/Makefile src/lib/python/isc/log/Makefile src/lib/python/isc/log/tests/Makefile + src/lib/python/isc/log_messages/Makefile + src/lib/python/isc/log_messages/work/Makefile src/lib/python/isc/net/Makefile src/lib/python/isc/net/tests/Makefile src/lib/python/isc/notify/Makefile src/lib/python/isc/notify/tests/Makefile src/lib/python/isc/testutils/Makefile + src/lib/python/isc/bind10/Makefile + src/lib/python/isc/bind10/tests/Makefile + src/lib/python/isc/xfrin/Makefile + src/lib/python/isc/xfrin/tests/Makefile src/lib/config/Makefile src/lib/config/tests/Makefile src/lib/config/tests/testdata/Makefile @@ -839,6 +870,7 @@ AC_CONFIG_FILES([Makefile src/lib/exceptions/tests/Makefile src/lib/datasrc/Makefile src/lib/datasrc/tests/Makefile + src/lib/datasrc/tests/testdata/Makefile src/lib/xfr/Makefile src/lib/log/Makefile src/lib/log/compiler/Makefile @@ -856,6 +888,7 @@ AC_CONFIG_FILES([Makefile src/lib/util/Makefile src/lib/util/io/Makefile src/lib/util/unittests/Makefile + src/lib/util/python/Makefile src/lib/util/pyunittests/Makefile src/lib/util/tests/Makefile src/lib/acl/Makefile @@ -889,7 +922,7 @@ AC_OUTPUT([doc/version.ent src/bin/zonemgr/run_b10-zonemgr.sh src/bin/stats/stats.py src/bin/stats/stats_httpd.py - src/bin/bind10/bind10.py + 
src/bin/bind10/bind10_src.py src/bin/bind10/run_bind10.sh src/bin/bind10/tests/bind10_test.py src/bin/bindctl/run_bindctl.sh @@ -913,17 +946,19 @@ AC_OUTPUT([doc/version.ent src/lib/python/isc/cc/tests/cc_test src/lib/python/isc/notify/tests/notify_out_test src/lib/python/isc/log/tests/log_console.py + src/lib/python/isc/log_messages/work/__init__.py src/lib/dns/gen-rdatacode.py src/lib/python/bind10_config.py - src/lib/dns/tests/testdata/gen-wiredata.py src/lib/cc/session_config.h.pre src/lib/cc/tests/session_unittests_config.h src/lib/log/tests/console_test.sh src/lib/log/tests/destination_test.sh + src/lib/log/tests/init_logger_test.sh src/lib/log/tests/local_file_test.sh src/lib/log/tests/severity_test.sh src/lib/log/tests/tempdir.h src/lib/util/python/mkpywrapper.py + src/lib/util/python/gen_wiredata.py src/lib/server_common/tests/data_path.h tests/system/conf.sh tests/system/glue/setup.sh @@ -948,12 +983,13 @@ AC_OUTPUT([doc/version.ent chmod +x src/bin/msgq/run_msgq.sh chmod +x src/bin/msgq/tests/msgq_test chmod +x src/lib/dns/gen-rdatacode.py - chmod +x src/lib/dns/tests/testdata/gen-wiredata.py - chmod +x src/lib/log/tests/local_file_test.sh chmod +x src/lib/log/tests/console_test.sh chmod +x src/lib/log/tests/destination_test.sh + chmod +x src/lib/log/tests/init_logger_test.sh + chmod +x src/lib/log/tests/local_file_test.sh chmod +x src/lib/log/tests/severity_test.sh chmod +x src/lib/util/python/mkpywrapper.py + chmod +x src/lib/util/python/gen_wiredata.py chmod +x src/lib/python/isc/log/tests/log_console.py chmod +x tests/system/conf.sh ]) diff --git a/doc/Doxyfile b/doc/Doxyfile index 8857c1688c..8be9098bd7 100644 --- a/doc/Doxyfile +++ b/doc/Doxyfile @@ -568,10 +568,10 @@ WARN_LOGFILE = # directories like "/usr/src/myproject". Separate the files or directories # with spaces. -INPUT = ../src/lib/cc ../src/lib/config \ - ../src/lib/cryptolink ../src/lib/dns ../src/lib/datasrc \ - ../src/bin/auth ../src/bin/resolver ../src/lib/bench \ - ../src/lib/log ../src/lib/asiolink/ ../src/lib/nsas \ +INPUT = ../src/lib/exceptions ../src/lib/cc \ + ../src/lib/config ../src/lib/cryptolink ../src/lib/dns ../src/lib/datasrc \ + ../src/bin/auth ../src/bin/resolver ../src/lib/bench ../src/lib/log \ + ../src/lib/log/compiler ../src/lib/asiolink/ ../src/lib/nsas \ ../src/lib/testutils ../src/lib/cache ../src/lib/server_common/ \ ../src/bin/sockcreator/ ../src/lib/util/ \ ../src/lib/resolve ../src/lib/acl ../src/bin/dhcp6 diff --git a/doc/guide/bind10-guide.html b/doc/guide/bind10-guide.html index 5754cf001e..1070a2e4a8 100644 --- a/doc/guide/bind10-guide.html +++ b/doc/guide/bind10-guide.html @@ -1,24 +1,24 @@ -BIND 10 Guide

BIND 10 Guide

Administrator Reference for BIND 10

This is the reference guide for BIND 10 version - 20110519.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by +BIND 10 Guide

BIND 10 Guide

Administrator Reference for BIND 10

This is the reference guide for BIND 10 version + 20110809.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by Internet Systems Consortium (ISC). It includes DNS libraries and modular components for controlling authoritative and recursive DNS servers.

- This is the reference guide for BIND 10 version 20110519. + This is the reference guide for BIND 10 version 20110809. The most up-to-date version of this document, along with - other documents for BIND 10, can be found at http://bind10.isc.org/docs.


Chapter 1. Introduction

+ other documents for BIND 10, can be found at http://bind10.isc.org/docs.


Chapter 1. Introduction

BIND is the popular implementation of a DNS server, developer interfaces, and DNS tools. BIND 10 is a rewrite of BIND 9. BIND 10 is written in C++ and Python and provides a modular environment for serving and maintaining DNS.

Note

This guide covers the experimental prototype of - BIND 10 version 20110519. + BIND 10 version 20110809.

Note

BIND 10 provides an EDNS0- and DNSSEC-capable authoritative DNS server and a caching recursive name server which also provides forwarding. -

Supported Platforms

+

Supported Platforms

BIND 10 builds have been tested on Debian GNU/Linux 5, Ubuntu 9.10, NetBSD 5, Solaris 10, FreeBSD 7 and 8, and CentOS Linux 5.3. @@ -28,13 +28,15 @@ It is planned for BIND 10 to build, install and run on Windows and standard Unix-type platforms. -

Required Software

+

Required Software

BIND 10 requires Python 3.1. Later versions may work, but Python 3.1 is the minimum version which will work.

BIND 10 uses the Botan crypto library for C++. It requires - at least Botan version 1.8. To build BIND 10, install the - Botan libraries and development include headers. + at least Botan version 1.8. +

+ BIND 10 uses the log4cplus C++ logging library. It requires + at least log4cplus version 1.0.3.

The authoritative server requires SQLite 3.3.9 or newer. The b10-xfrin, b10-xfrout, @@ -136,7 +138,10 @@ and, of course, DNS. These include detailed developer documentation and code examples. -

Chapter 2. Installation

Building Requirements

Note

+

Chapter 2. Installation

Building Requirements

+ In addition to the run-time requirements, building BIND 10 + from source code requires various development include headers. +

Note

Some operating systems have split their distribution packages into a run-time and a development package. You will need to install the development package versions, which include header files and @@ -147,6 +152,11 @@

+ To build BIND 10, also install the Botan (at least version + 1.8) and the log4cplus (at least version 1.0.3) + development include headers. +

+ The Python Library and Python _sqlite3 module are required to enable the Xfrout and Xfrin support.

Note

@@ -156,7 +166,7 @@ Building BIND 10 also requires a C++ compiler and standard development headers, make, and pkg-config. BIND 10 builds have been tested with GCC g++ 3.4.3, 4.1.2, - 4.1.3, 4.2.1, 4.3.2, and 4.4.1. + 4.1.3, 4.2.1, 4.3.2, and 4.4.1; Clang++ 2.8; and Sun C++ 5.10.

Quick start

Note

This quickly covers the standard steps for installing and deploying BIND 10 as an authoritative name server using @@ -192,14 +202,14 @@ the Git code revision control system or as a downloadable tar file. It may also be available in pre-compiled ready-to-use packages from operating system vendors. -

Download Tar File

+

Download Tar File

Downloading a release tar file is the recommended method to obtain the source code.

The BIND 10 releases are available as tar file downloads from ftp://ftp.isc.org/isc/bind10/. Periodic development snapshots may also be available. -

Retrieve from Git

+

Retrieve from Git

Downloading this "bleeding edge" code is recommended only for developers or advanced users. Using development code in a production environment is not recommended. @@ -233,7 +243,7 @@ autoheader, automake, and related commands. -

Configure before the build

+

Configure before the build

BIND 10 uses the GNU Build System to discover build environment details. To generate the makefiles using the defaults, simply run: @@ -242,7 +252,7 @@ Run ./configure with the --help switch to view the different options. The commonly-used options are: -

--prefix
Define the the installation location (the +

--prefix
Define the installation location (the default is /usr/local/).
--with-boost-include
Define the path to find the Boost headers.
--with-pythonpath
Define the path to Python 3.1 if it is not in the @@ -264,16 +274,16 @@

If the configure fails, it may be due to missing or old dependencies. -

Build

+

Build

After the configure step is complete, to build the executables from the C++ code and prepare the Python scripts, run:

$ make

-

Install

+

Install

To install the BIND 10 executables, support files, and documentation, run:

$ make install

-

Note

The install step may require superuser privileges.

Install Hierarchy

+

Note

The install step may require superuser privileges.

Install Hierarchy

The following is the layout of the complete BIND 10 installation:

  • bin/ — @@ -304,14 +314,14 @@ data source and configuration databases.

Chapter 3. Starting BIND10 with bind10

Table of Contents

Starting BIND 10

- BIND 10 provides the bind10 command which + BIND 10 provides the bind10 command which starts up the required processes. bind10 will also restart processes that exit unexpectedly. This is the only command needed to start the BIND 10 system.

After starting the b10-msgq communications channel, - bind10 connects to it, + bind10 connects to it, runs the configuration manager, and reads its own configuration. Then it starts the other modules.

@@ -334,7 +344,12 @@ To start the BIND 10 service, simply run bind10. Run it with the --verbose switch to get additional debugging or diagnostic output. -

Chapter 4. Command channel

+

Note

+ If the setproctitle Python module is detected at start up, + the process names for the Python-based daemons will be renamed + to better identify them instead of just python. + This is not needed on some operating systems. +

Chapter 4. Command channel

The BIND 10 components use the b10-msgq message routing daemon to communicate with other BIND 10 components. The b10-msgq implements what is called the @@ -490,12 +505,12 @@ shutdown the details and relays (over a b10-msgq command channel) the configuration on to the specified module.

-

Chapter 8. Authoritative Server

+

Chapter 8. Authoritative Server

The b10-auth is the authoritative DNS server. It supports EDNS0 and DNSSEC. It supports IPv6. Normally it is started by the bind10 master process. -

Server Configurations

+

Server Configurations

b10-auth is configured via the b10-cfgmgr configuration manager. The module name is Auth. @@ -515,7 +530,7 @@ This may be a temporary setting until then.

shutdown
Stop the authoritative DNS server.

-

Data Source Backends

Note

+

Data Source Backends

Note

For the development prototype release, b10-auth supports a SQLite3 data source backend and in-memory data source backend. @@ -529,7 +544,7 @@ This may be a temporary setting until then. The default is /usr/local/var/.) This data file location may be changed by defining the database_file configuration. -

Loading Master Zones Files

+

Loading Master Zones Files

RFC 1035 style DNS master zone files may be imported into a BIND 10 data source by using the b10-loadzone utility. @@ -569,7 +584,7 @@ This may be a temporary setting until then. provide secondary service.

Note

The current development release of BIND 10 only supports - AXFR. (IXFR is not supported.) + AXFR. (IXFR is not supported.) @@ -591,7 +606,7 @@ This may be a temporary setting until then. NOTIFY messages to slaves.

Note

The current development release of BIND 10 only supports - AXFR. (IXFR is not supported.) + AXFR. (IXFR is not supported.) Access control is not yet provided.

Chapter 11. Secondary Manager

The b10-zonemgr process is started by @@ -607,13 +622,13 @@ This may be a temporary setting until then.

Note

Access control (such as allowing notifies) is not yet provided. The primary/secondary service is not yet complete. -

Chapter 12. Recursive Name Server

Table of Contents

Forwarding

+

Chapter 12. Recursive Name Server

Table of Contents

Access Control
Forwarding

The b10-resolver process is started by bind10.

The main bind10 process can be configured - to select to run either the authoritative or resolver. + to select to run either the authoritative or resolver or both. By default, it starts the authoritative service. @@ -629,14 +644,52 @@ This may be a temporary setting until then. The master bind10 will stop and start the desired services.

- The resolver also needs to be configured to listen on an address - and port: + By default, the resolver listens on port 53 for 127.0.0.1 and ::1. + The following example shows how it can be configured to + listen on an additional address (and port):

-> config set Resolver/listen_on [{ "address": "127.0.0.1", "port": 53 }]
+> config add Resolver/listen_on
+> config set Resolver/listen_on[2]/address "192.168.1.1"
+> config set Resolver/listen_on[2]/port 53
 > config commit
 

-

Forwarding

+

(Replace the 2 + as needed; run config show + Resolver/listen_on if needed.)

Access Control

+ By default, the b10-resolver daemon only accepts + DNS queries from the localhost (127.0.0.1 and ::1). + The Resolver/query_acl configuration may + be used to reject, drop, or allow specific IPs or networks. + This configuration list is checked on a first-match basis. +

+ The configuration's action item may be + set to ACCEPT to allow the incoming query, + REJECT to respond with a DNS REFUSED return + code, or DROP to ignore the query without + any response (such as a blackhole). For more information, + see the respective debugging messages: RESOLVER_QUERY_ACCEPTED, + RESOLVER_QUERY_REJECTED, + and RESOLVER_QUERY_DROPPED. +

+ The required configuration's from item is set + to an IPv4 or IPv6 address, an address with a network mask, or to + the special lowercase keywords any6 (for + any IPv6 address) or any4 (for any IPv4 + address). +

+ For example, to allow the 192.168.1.0/24 + network to use your recursive name server, at the + bindctl prompt run: +

+> config add Resolver/query_acl
+> config set Resolver/query_acl[2]/action "ACCEPT"
+> config set Resolver/query_acl[2]/from "192.168.1.0/24"
+> config commit
+

(Replace the 2 + as needed; run config show + Resolver/query_acl if needed.)

Note

This prototype access control configuration + syntax may be changed.
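As a rough sketch of the first-match behaviour described above (illustrative only; evaluate_acl, its fallback default, and the use of the modern Python ipaddress module are assumptions for this sketch and are not part of b10-resolver):

    from ipaddress import ip_address, ip_network

    # Each entry mirrors one element of Resolver/query_acl.
    query_acl = [
        {"action": "ACCEPT", "from": "127.0.0.1"},
        {"action": "ACCEPT", "from": "::1"},
        {"action": "ACCEPT", "from": "192.168.1.0/24"},
        {"action": "REJECT", "from": "any4"},  # REJECT answers with DNS REFUSED
    ]

    def evaluate_acl(acl, client_ip, default="REJECT"):   # hypothetical helper
        addr = ip_address(client_ip)
        for entry in acl:                                  # first matching entry wins
            source = entry["from"]
            if source in ("any4", "any6"):
                matched = (addr.version == 4) == (source == "any4")
            else:
                matched = addr in ip_network(source, strict=False)
            if matched:
                return entry["action"]                     # ACCEPT, REJECT or DROP
        return default                                     # fallback chosen only for the sketch

    print(evaluate_acl(query_acl, "192.168.1.10"))  # -> ACCEPT
    print(evaluate_acl(query_acl, "10.0.0.1"))      # -> REJECT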

Forwarding

To enable forwarding, the upstream address and port must be configured to forward queries to, such as: @@ -664,68 +717,440 @@ This may be a temporary setting until then.

- This stats daemon provides commands to identify if it is running, - show specified or all statistics data, set values, remove data, - and reset data. + This stats daemon provides commands to identify if it is + running, show specified or all statistics data, show specified + or all statistics data schema, and set specified statistics + data. For example, using bindctl:

 > Stats show
 {
-    "auth.queries.tcp": 1749,
-    "auth.queries.udp": 867868,
-    "bind10.boot_time": "2011-01-20T16:59:03Z",
-    "report_time": "2011-01-20T17:04:06Z",
-    "stats.boot_time": "2011-01-20T16:59:05Z",
-    "stats.last_update_time": "2011-01-20T17:04:05Z",
-    "stats.lname": "4d3869d9_a@jreed.example.net",
-    "stats.start_time": "2011-01-20T16:59:05Z",
-    "stats.timestamp": 1295543046.823504
+    "Auth": {
+        "queries.tcp": 1749,
+        "queries.udp": 867868
+    },
+    "Boss": {
+        "boot_time": "2011-01-20T16:59:03Z"
+    },
+    "Stats": {
+        "boot_time": "2011-01-20T16:59:05Z",
+        "last_update_time": "2011-01-20T17:04:05Z",
+        "lname": "4d3869d9_a@jreed.example.net",
+        "report_time": "2011-01-20T17:04:06Z",
+        "timestamp": 1295543046.823504
+    }
 }
        

-

Chapter 14. Logging

- Each message written by BIND 10 to the configured logging destinations - comprises a number of components that identify the origin of the - message and, if the message indicates a problem, information about the - problem that may be useful in fixing it. -

- Consider the message below logged to a file: -

2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink]
-    ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53)

-

- Note: the layout of messages written to the system logging - file (syslog) may be slightly different. This message has - been split across two lines here for display reasons; in the - logging file, it will appear on one line.) -

- The log message comprises a number of components: +

Chapter 14. Logging

Logging configuration

-

2011-06-15 13:48:22.034

- The date and time at which the message was generated. -

ERROR

- The severity of the message. -

[b10-resolver.asiolink]

- The source of the message. This comprises two components: - the BIND 10 process generating the message (in this - case, b10-resolver) and the module - within the program from which the message originated - (which in the example is the asynchronous I/O link - module, asiolink). -

ASIODNS_OPENSOCK

+ The logging system in BIND 10 is configured through the + Logging module. All BIND 10 modules will look at the + configuration in Logging to see what should be logged and + to where. + + + +

Loggers

+ + Within BIND 10, a message is logged through a component + called a "logger". Different parts of BIND 10 log messages + through different loggers, and each logger can be configured + independently of one another. + +

+ + In the Logging module, you can specify the configuration + for zero or more loggers; any that are not specified will + take appropriate default values. +

+ + The three most important elements of a logger configuration + are the name (the component that is + generating the messages), the severity + (what to log), and the output_options + (where to log). + +

name (string)

+ Each logger in the system has a name, the name being that + of the component using it to log messages. For instance, + if you want to configure logging for the resolver module, + you add an entry for a logger named Resolver. This + configuration will then be used by the loggers in the + Resolver module, and all the libraries used by it. +

+ + If you want to specify logging for one specific library + within the module, you set the name to + module.library. For example, the + logger used by the nameserver address store component + has the full name of Resolver.nsas. If + there is no entry in Logging for a particular library, + it will use the configuration given for the module. + + + +

+ + + + To illustrate this, suppose you want the cache library + to log messages of severity DEBUG, and the rest of the + resolver code to log messages of severity INFO. To achieve + this you specify two loggers, one with the name + Resolver and severity INFO, and one with + the name Resolver.cache with severity + DEBUG. As there are no entries for other libraries (e.g. + the nsas), they will use the configuration for the module + (Resolver), so giving the desired behavior. + +

+ + One special case is that of a module name of * + (asterisk), which is interpreted as any + module. You can set global logging options by using this, + including setting the logging configuration for a library + that is used by multiple modules (e.g. *.config + specifies the configuration library code in whatever + module is using it). +

+ + If there are multiple logger specifications in the + configuration that might match a particular logger, the + specification with the more specific logger name takes + precedence. For example, if there are entries for + both * and Resolver, the + resolver module — and all libraries it uses — + will log messages according to the configuration in the + second entry (Resolver). All other modules + will use the configuration of the first entry + (*). If there was also a configuration + entry for Resolver.cache, the cache library + within the resolver would use that in preference to the + entry for Resolver. +

+ + One final note about the naming. When specifying the + module name within a logger, use the name of the module + as specified in bindctl, e.g. + Resolver for the resolver module, + Xfrout for the xfrout module, etc. When + the message is logged, the message will include the name + of the logger generating the message, but with the module + name replaced by the name of the process implementing + the module (so for example, a message generated by the + Auth.cache logger will appear in the output + with a logger name of b10-auth.cache). + +
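That most-specific-match rule can be pictured with a small sketch (illustrative only; find_logger_config is a hypothetical helper, and wildcard forms such as *.config are ignored here for brevity):

    # Configured names may be "*", a module ("Resolver") or module.library
    # ("Resolver.cache"); the most specific configured name wins.
    configured = {
        "*": {"severity": "WARN"},
        "Resolver": {"severity": "INFO"},
        "Resolver.cache": {"severity": "DEBUG", "debuglevel": 40},
    }

    def find_logger_config(logger_name, configs):
        candidates = ["*"]                        # least specific fallback
        module = logger_name.split(".")[0]
        candidates.append(module)                 # e.g. "Resolver"
        if "." in logger_name:
            candidates.append(logger_name)        # e.g. "Resolver.cache"
        for name in reversed(candidates):         # most specific candidate first
            if name in configs:
                return configs[name]
        return {}

    print(find_logger_config("Resolver.cache", configured))  # severity DEBUG
    print(find_logger_config("Resolver.nsas", configured))   # severity INFO
    print(find_logger_config("Xfrout", configured))          # severity WARN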

severity (string)

+ + This specifies the category of messages logged. + Each message is logged with an associated severity which + may be one of the following (in descending order of + severity): +

  • FATAL
  • ERROR
  • WARN
  • INFO
  • DEBUG

+ + When the severity of a logger is set to one of these + values, it will only log messages of that severity, and + the severities above it. The severity may also be set to + NONE, in which case all messages from that logger are + inhibited. + + + +
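Put another way, a message is emitted only when its severity is at least that of the logger, with DEBUG messages additionally gated by debuglevel. A toy sketch, assuming the ordering listed above and assuming each debug message carries its own level (not the actual BIND 10/log4cplus implementation):

    # Severities in descending order, as listed above; NONE suppresses everything.
    ORDER = {"FATAL": 4, "ERROR": 3, "WARN": 2, "INFO": 1, "DEBUG": 0}

    def should_log(logger_severity, msg_severity, logger_dbglevel=0, msg_dbglevel=0):
        if logger_severity == "NONE":
            return False
        if ORDER[msg_severity] < ORDER[logger_severity]:
            return False
        if msg_severity == "DEBUG":
            # debuglevel 0..99 applies only when the logger itself is at DEBUG
            return logger_severity == "DEBUG" and msg_dbglevel <= logger_dbglevel
        return True

    assert should_log("INFO", "ERROR")                        # ERROR passes an INFO logger
    assert not should_log("WARN", "INFO")                     # INFO is below WARN
    assert should_log("DEBUG", "DEBUG", 40, msg_dbglevel=30)  # within the debug level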

output_options (list)

+ + Each logger can have zero or more + output_options. These specify where log + messages are sent to. These are explained in detail below. + +

+ + The other options for a logger are: + +

debuglevel (integer)

+ + When a logger's severity is set to DEBUG, this value + specifies what debug messages should be printed. It ranges + from 0 (least verbose) to 99 (most verbose). +

+ + If severity for the logger is not DEBUG, this value is ignored. + +

additive (true or false)

+ + If this is true, the output_options from + the parent will be used. For example, if there are two + loggers configured: Resolver and + Resolver.cache, and additive + is true in the second, it will write the log messages + not only to the destinations specified for + Resolver.cache, but also to the destinations + as specified in the output_options in + the logger named Resolver. + + +
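The effect of additive can be sketched as collecting output_options up the chain of logger names (illustrative only; the destinations helper and the data layout below are hypothetical):

    loggers = {
        "Resolver":       {"output_options": ["file:/var/log/bind10.log"]},
        "Resolver.cache": {"output_options": ["file:/tmp/cache_debug.log"],
                           "additive": True},
    }

    def destinations(name, loggers):
        dests = list(loggers[name]["output_options"])
        parent = name.rsplit(".", 1)[0]
        if loggers[name].get("additive") and parent != name and parent in loggers:
            dests += destinations(parent, loggers)   # also write to the parent's outputs
        return dests

    print(destinations("Resolver.cache", loggers))
    # ['file:/tmp/cache_debug.log', 'file:/var/log/bind10.log']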

Output Options

+ + The main settings for an output option are the + destination and a value called + output, the meaning of which depends on + the destination that is set. + +

destination (string)

+ + The destination is the type of output. It can be one of: + +

  • console
  • file
  • syslog

output (string)

+ + Depending on what is set as the output destination, this + value is interpreted as follows: + +

destination is console
+ The value of output must be one of stdout + (messages printed to standard output) or + stderr (messages printed to standard + error). +
destination is file
+ The value of output is interpreted as a file name; + log messages will be appended to this file. +
destination is syslog
+ The value of output is interpreted as the + syslog facility (e.g. + local0) that should be used + for log messages. +

+ + The other options for output_options are: + +

flush (true or false)

+ Flush buffers after each log message. Doing this will + reduce performance but will ensure that if the program + terminates abnormally, all messages up to the point of + termination are output. +

maxsize (integer)

+ Only relevant when destination is file, this is the maximum + file size of output files in bytes. When the maximum + size is reached, the file is renamed and a new file opened. + (For example, a ".1" is appended to the name — + if a ".1" file exists, it is renamed ".2", + etc.)

+ If this is 0, no maximum file size is used. +

maxver (integer)

+ Maximum number of old log files to keep around when + rolling the output file. Only relevant when + destination is file. +
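The rollover described for maxsize and maxver amounts to the following renaming scheme (a sketch only; the real rotation is handled inside log4cplus, and the helper names here are hypothetical):

    import os

    def roll_over(logfile, maxver):
        """Rotate logfile -> logfile.1 -> logfile.2 ..., keeping at most maxver old files."""
        oldest = f"{logfile}.{maxver}"
        if os.path.exists(oldest):
            os.remove(oldest)                      # the oldest copy falls off the end
        for i in range(maxver - 1, 0, -1):         # shift .1 -> .2, .2 -> .3, ...
            src = f"{logfile}.{i}"
            if os.path.exists(src):
                os.rename(src, f"{logfile}.{i + 1}")
        os.rename(logfile, f"{logfile}.1")         # current file becomes ".1"
        open(logfile, "w").close()                 # and a new empty file is opened

    def write_line(logfile, line, maxsize, maxver):
        with open(logfile, "a") as f:
            f.write(line + "\n")
        if maxsize and os.path.getsize(logfile) >= maxsize:   # maxsize 0 disables rotation
            roll_over(logfile, maxver)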

Example session

+ + In this example, we want to set the global logging to + write to the file /var/log/bind10.log, + at severity WARN. We want the authoritative server to + log at DEBUG with debuglevel 40, to a different file + (/tmp/auth_debug.log). +

+ + Start bindctl. + +

+ +

["login success "]
+> config show Logging
+Logging/loggers	[]	list
+

+ +

+ + By default, no specific loggers are configured, in which + case the severity defaults to INFO and the output is + written to stderr. + +

+ + Let's first add a default logger: + +

+ +

> config add Logging/loggers
+> config show Logging
+Logging/loggers/	list	(modified)
+

+ +

+ + The loggers value line changed to indicate that it is no + longer an empty list: + +

+ +

> config show Logging/loggers
+Logging/loggers[0]/name	""	string	(default)
+Logging/loggers[0]/severity	"INFO"	string	(default)
+Logging/loggers[0]/debuglevel	0	integer	(default)
+Logging/loggers[0]/additive	false	boolean	(default)
+Logging/loggers[0]/output_options	[]	list	(default)
+

+ +

+ + The name is mandatory, so we must set it. We will also + change the severity. Let's start with the global + logger. +

+ +

> config set Logging/loggers[0]/name *
+> config set Logging/loggers[0]/severity WARN
+> config show Logging/loggers
+Logging/loggers[0]/name	"*"	string	(modified)
+Logging/loggers[0]/severity	"WARN"	string	(modified)
+Logging/loggers[0]/debuglevel	0	integer	(default)
+Logging/loggers[0]/additive	false	boolean	(default)
+Logging/loggers[0]/output_options	[]	list	(default)
+

+ +

+ + Of course, we need to specify where we want the log + messages to go, so we add an entry for an output option. + +

+ +

>  config add Logging/loggers[0]/output_options
+>  config show Logging/loggers[0]/output_options
+Logging/loggers[0]/output_options[0]/destination	"console"	string	(default)
+Logging/loggers[0]/output_options[0]/output	"stdout"	string	(default)
+Logging/loggers[0]/output_options[0]/flush	false	boolean	(default)
+Logging/loggers[0]/output_options[0]/maxsize	0	integer	(default)
+Logging/loggers[0]/output_options[0]/maxver	0	integer	(default)
+

+ + +

+ + These aren't the values we are looking for. + +

+ +

>  config set Logging/loggers[0]/output_options[0]/destination file
+>  config set Logging/loggers[0]/output_options[0]/output /var/log/bind10.log
+>  config set Logging/loggers[0]/output_options[0]/maxsize 30000
+>  config set Logging/loggers[0]/output_options[0]/maxver 8
+

+ +

+ + Which would make the entire configuration for this logger + look like: + +

+ +

>  config show all Logging/loggers
+Logging/loggers[0]/name	"*"	string	(modified)
+Logging/loggers[0]/severity	"WARN"	string	(modified)
+Logging/loggers[0]/debuglevel	0	integer	(default)
+Logging/loggers[0]/additive	false	boolean	(default)
+Logging/loggers[0]/output_options[0]/destination	"file"	string	(modified)
+Logging/loggers[0]/output_options[0]/output	"/var/log/bind10.log"	string	(modified)
+Logging/loggers[0]/output_options[0]/flush	false	boolean	(default)
+Logging/loggers[0]/output_options[0]/maxsize	30000	integer	(modified)
+Logging/loggers[0]/output_options[0]/maxver	8	integer	(modified)
+

+ +

+ + That looks OK, so let's commit it before we add the + configuration for the authoritative server's logger. + +

+ +

>  config commit

+ +

+ + Now that we have set it, and checked each value along + the way, adding a second entry is quite similar. + +

+ +

>  config add Logging/loggers
+>  config set Logging/loggers[1]/name Auth
+>  config set Logging/loggers[1]/severity DEBUG
+>  config set Logging/loggers[1]/debuglevel 40
+>  config add Logging/loggers[1]/output_options
+>  config set Logging/loggers[1]/output_options[0]/destination file
+>  config set Logging/loggers[1]/output_options[0]/output /tmp/auth_debug.log
+>  config commit
+

+ +

+ + And that's it. Once we have found whatever it was we + needed the debug messages for, we can simply remove the + second logger to let the authoritative server use the + same settings as the rest. + +

+ +

>  config remove Logging/loggers[1]
+>  config commit
+

+ +

+ + And every module will now be using the values from the + logger named *. + +

Logging Message Format

+ Each message written by BIND 10 to the configured logging + destinations comprises a number of components that identify + the origin of the message and, if the message indicates + a problem, information about the problem that may be + useful in fixing it. +

+ Consider the message below logged to a file: +

2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink]
+    ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53)

+

+ Note: the layout of messages written to the system logging + file (syslog) may be slightly different. This message has + been split across two lines here for display reasons; in the + logging file, it will appear on one line.

+ The log message comprises a number of components: + +

2011-06-15 13:48:22.034

+ The date and time at which the message was generated. +

ERROR

+ The severity of the message. +

[b10-resolver.asiolink]

+ The source of the message. This comprises two components: + the BIND 10 process generating the message (in this + case, b10-resolver) and the module + within the program from which the message originated + (which in the example is the asynchronous I/O link + module, asiolink). +

ASIODNS_OPENSOCK

The message identification. Every message in BIND 10 has a unique identification, which can be used as an index into the BIND 10 Messages Manual (http://bind10.isc.org/docs/bind10-messages.html) from which more information can be obtained. -

error 111 opening TCP socket to 127.0.0.1(53)

- A brief description of the cause of the problem. Within this text, - information relating to the condition that caused the message to - be logged will be included. In this example, error number 111 - (an operating system-specific error number) was encountered when - trying to open a TCP connection to port 53 on the local system - (address 127.0.0.1). The next step would be to find out the reason - for the failure by consulting your system's documentation to - identify what error number 111 means. -

- -

+

error 111 opening TCP socket to 127.0.0.1(53)

+ A brief description of the cause of the problem. + Within this text, information relating to the condition + that caused the message to be logged will be included. + In this example, error number 111 (an operating + system-specific error number) was encountered when + trying to open a TCP connection to port 53 on the + local system (address 127.0.0.1). The next step + would be to find out the reason for the failure by + consulting your system's documentation to identify + what error number 111 means. +
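For scripted post-processing, the components described above can be separated with a simple regular expression (a hypothetical helper, not part of BIND 10; as noted earlier, the exact layout may differ slightly depending on the logging destination):

    import re

    # timestamp severity [process.module] MESSAGE_ID free-form description
    LOG_LINE = re.compile(
        r"^(?P<when>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) "
        r"(?P<severity>[A-Z]+) "
        r"\[(?P<source>[^\]]+)\] "
        r"(?P<msgid>[A-Z0-9_]+) "
        r"(?P<description>.*)$"
    )

    line = ("2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink] "
            "ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53)")
    m = LOG_LINE.match(line)
    print(m.group("severity"), m.group("msgid"))   # ERROR ASIODNS_OPENSOCK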

+

diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 7d1a006545..00ffee60b5 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -5,6 +5,23 @@ %version; ]> + + + @@ -129,7 +146,7 @@ The processes started by the bind10 command have names starting with "b10-", including: - + @@ -224,7 +241,7 @@
Managing BIND 10 - + Once BIND 10 is running, a few commands are used to interact directly with the system: @@ -263,7 +280,7 @@ In addition, manual pages are also provided in the default installation. - + - + Starting BIND10 with <command>bind10</command> - BIND 10 provides the bind10 command which + BIND 10 provides the bind10 command which starts up the required processes. bind10 will also restart processes that exit unexpectedly. @@ -694,7 +711,7 @@ Debian and Ubuntu: After starting the b10-msgq communications channel, - bind10 connects to it, + bind10 connects to it, runs the configuration manager, and reads its own configuration. Then it starts the other modules. @@ -725,6 +742,16 @@ Debian and Ubuntu: get additional debugging or diagnostic output. + + + + If the setproctitle Python module is detected at start up, + the process names for the Python-based daemons will be renamed + to better identify them instead of just python. + This is not needed on some operating systems. + + +
@@ -752,7 +779,7 @@ Debian and Ubuntu: b10-msgq service. It listens on 127.0.0.1.
- + The configuration data item is: - + database_file - + This is an optional string to define the path to find the SQLite3 database file. @@ -1103,7 +1130,7 @@ This may be a temporary setting until then. shutdown - + Stop the authoritative DNS server. @@ -1159,7 +1186,7 @@ This may be a temporary setting until then. $INCLUDE - + Loads an additional zone file. This may be recursive. @@ -1167,7 +1194,7 @@ This may be a temporary setting until then. $ORIGIN - + Defines the relative domain name. @@ -1175,7 +1202,7 @@ This may be a temporary setting until then. $TTL - + Defines the time-to-live value used for following records that don't include a TTL. @@ -1240,7 +1267,7 @@ TODO The current development release of BIND 10 only supports - AXFR. (IXFR is not supported.) + AXFR. (IXFR is not supported.) @@ -1287,7 +1314,7 @@ what if a NOTIFY is sent? The current development release of BIND 10 only supports - AXFR. (IXFR is not supported.) + AXFR. (IXFR is not supported.) Access control is not yet provided. @@ -1343,7 +1370,7 @@ what is XfroutClient xfr_client?? The main bind10 process can be configured - to select to run either the authoritative or resolver. + to select to run either the authoritative or resolver or both. By default, it starts the authoritative service. @@ -1363,16 +1390,85 @@ what is XfroutClient xfr_client?? - The resolver also needs to be configured to listen on an address - and port: + By default, the resolver listens on port 53 for 127.0.0.1 and ::1. + The following example shows how it can be configured to + listen on an additional address (and port): -> config set Resolver/listen_on [{ "address": "127.0.0.1", "port": 53 }] +> config add Resolver/listen_on +> config set Resolver/listen_on[2]/address "192.168.1.1" +> config set Resolver/listen_on[2]/port 53 > config commit - + (Replace the 2 + as needed; run config show + Resolver/listen_on if needed.) + + +
+ Access Control + + + By default, the b10-resolver daemon only accepts + DNS queries from the localhost (127.0.0.1 and ::1). + The configuration may + be used to reject, drop, or allow specific IPs or networks. + This configuration list is first match. + + + + The configuration's item may be + set to ACCEPT to allow the incoming query, + REJECT to respond with a DNS REFUSED return + code, or DROP to ignore the query without + any response (such as a blackhole). For more information, + see the respective debugging messages: RESOLVER_QUERY_ACCEPTED, + RESOLVER_QUERY_REJECTED, + and RESOLVER_QUERY_DROPPED. + + + + The required configuration's item is set + to an IPv4 or IPv6 address, addresses with an network mask, or to + the special lowercase keywords any6 (for + any IPv6 address) or any4 (for any IPv4 + address). + + + + + + For example to allow the 192.168.1.0/24 + network to use your recursive name server, at the + bindctl prompt run: + + + +> config add Resolver/query_acl +> config set Resolver/query_acl[2]/action "ACCEPT" +> config set Resolver/query_acl[2]/from "192.168.1.0/24" +> config commit + + + (Replace the 2 + as needed; run config show + Resolver/query_acl if needed.) + + + This prototype access control configuration + syntax may be changed. + +
Forwarding @@ -1426,24 +1522,30 @@ then change those defaults with config set Resolver/forward_addresses[0]/address - This stats daemon provides commands to identify if it is running, - show specified or all statistics data, set values, remove data, - and reset data. + This stats daemon provides commands to identify if it is + running, show specified or all statistics data, show specified + or all statistics data schema, and set specified statistics + data. For example, using bindctl: > Stats show { - "auth.queries.tcp": 1749, - "auth.queries.udp": 867868, - "bind10.boot_time": "2011-01-20T16:59:03Z", - "report_time": "2011-01-20T17:04:06Z", - "stats.boot_time": "2011-01-20T16:59:05Z", - "stats.last_update_time": "2011-01-20T17:04:05Z", - "stats.lname": "4d3869d9_a@jreed.example.net", - "stats.start_time": "2011-01-20T16:59:05Z", - "stats.timestamp": 1295543046.823504 + "Auth": { + "queries.tcp": 1749, + "queries.udp": 867868 + }, + "Boss": { + "boot_time": "2011-01-20T16:59:03Z" + }, + "Stats": { + "boot_time": "2011-01-20T16:59:05Z", + "last_update_time": "2011-01-20T17:04:05Z", + "lname": "4d3869d9_a@jreed.example.net", + "report_time": "2011-01-20T17:04:06Z", + "timestamp": 1295543046.823504 + } } @@ -1453,61 +1555,679 @@ then change those defaults with config set Resolver/forward_addresses[0]/address Logging - +
+ Logging configuration - - Each message written by BIND 10 to the configured logging destinations - comprises a number of components that identify the origin of the - message and, if the message indicates a problem, information about the - problem that may be useful in fixing it. - + - - Consider the message below logged to a file: - 2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink] - ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53) - + The logging system in BIND 10 is configured through the + Logging module. All BIND 10 modules will look at the + configuration in Logging to see what should be logged and + to where. - - Note: the layout of messages written to the system logging - file (syslog) may be slightly different. This message has - been split across two lines here for display reasons; in the - logging file, it will appear on one line.) - + - - The log message comprises a number of components: + + +
+ Loggers + + + + Within BIND 10, a message is logged through a component + called a "logger". Different parts of BIND 10 log messages + through different loggers, and each logger can be configured + independently of one another. + + + + + + In the Logging module, you can specify the configuration + for zero or more loggers; any that are not specified will + take appropriate default values.. + + + + + + The three most important elements of a logger configuration + are the (the component that is + generating the messages), the + (what to log), and the + (where to log). + + + +
+ name (string)

      Each logger in the system has a name, the name being that
      of the component using it to log messages. For instance,
      if you want to configure logging for the resolver module,
      you add an entry for a logger named Resolver. This
      configuration will then be used by the loggers in the
      Resolver module, and all the libraries used by it.
+
+
      If you want to specify logging for one specific library
      within the module, you set the name to
      module.library. For example, the
      logger used by the nameserver address store component
      has the full name of Resolver.nsas. If
      there is no entry in Logging for a particular library,
      it will use the configuration given for the module.
+
+
+
      To illustrate this, suppose you want the cache library
      to log messages of severity DEBUG, and the rest of the
      resolver code to log messages of severity INFO. To achieve
      this you specify two loggers, one with the name
      Resolver and severity INFO, and one with
      the name Resolver.cache with severity
      DEBUG. As there are no entries for other libraries (e.g.
      the nsas), they will use the configuration for the module
      (Resolver), thus giving the desired behavior.
+
+
      One special case is that of a module name of *
      (asterisk), which is interpreted as any
      module. You can set global logging options by using this,
      including setting the logging configuration for a library
      that is used by multiple modules (e.g. *.config
      specifies the configuration library code in whatever
      module is using it).
+
+
      If there are multiple logger specifications in the
      configuration that might match a particular logger, the
      specification with the more specific logger name takes
      precedence. For example, if there are entries for
      both * and Resolver, the
      resolver module — and all libraries it uses —
      will log messages according to the configuration in the
      second entry (Resolver). All other modules
      will use the configuration of the first entry
      (*). If there was also a configuration
      entry for Resolver.cache, the cache library
      within the resolver would use that in preference to the
      entry for Resolver.
+
+
      One final note about the naming. When specifying the
      module name within a logger, use the name of the module
      as specified in bindctl, e.g.
      Resolver for the resolver module,
      Xfrout for the xfrout module, etc. When
      the message is logged, the message will include the name
      of the logger generating the message, but with the module
      name replaced by the name of the process implementing
      the module (so for example, a message generated by the
      Auth.cache logger will appear in the output
      with a logger name of b10-auth.cache).
+
+
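+
      As a sketch only (assuming the Logging/loggers list is currently
      empty, so the indices below start at 0; adjust them to your
      existing configuration), the two-logger scenario above could be
      entered in bindctl as:
+
> config add Logging/loggers
> config set Logging/loggers[0]/name Resolver
> config set Logging/loggers[0]/severity INFO
> config add Logging/loggers
> config set Logging/loggers[1]/name Resolver.cache
> config set Logging/loggers[1]/severity DEBUG
> config commit
+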
+ +
+ severity (string) + + + + This specifies the category of messages logged. + Each message is logged with an associated severity which + may be one of the following (in descending order of + severity): + + + + + FATAL + + + + ERROR + + + + WARN + + + + INFO + + + + DEBUG + + + + + + When the severity of a logger is set to one of these + values, it will only log messages of that severity, and + the severities above it. The severity may also be set to + NONE, in which case all messages from that logger are + inhibited. + + + + + +
+ +
+ output_options (list)
+
+
      Each logger can have zero or more
      output_options. These specify where log
      messages are sent. They are explained in detail below.
+
+
      The other options for a logger are:
+
+ +
+ debuglevel (integer) + + + + When a logger's severity is set to DEBUG, this value + specifies what debug messages should be printed. It ranges + from 0 (least verbose) to 99 (most verbose). + + + + + + + + If severity for the logger is not DEBUG, this value is ignored. + + + +
+ +
+ additive (true or false)
+
+
      If this is true, the output_options from
      the parent will be used. For example, if there are two
      loggers configured: Resolver and
      Resolver.cache, and additive
      is true in the second, it will write the log messages
      not only to the destinations specified for
      Resolver.cache, but also to the destinations
      as specified in the output_options in
      the logger named Resolver.
+
+
+ +
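+
      As a hypothetical sketch (the index 1 is illustrative and assumes a
      Resolver.cache logger already exists at that position), making that
      logger also write to the destinations of its parent Resolver logger
      would only require:
+
> config set Logging/loggers[1]/additive true
> config commit
+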
+ +
+ Output Options
+
+
      The main settings for an output option are the
      destination and a value called
      output, the meaning of which depends on
      the destination that is set.
+
+ destination (string) + + + + The destination is the type of output. It can be one of: + + + + + + + console + + + + file + + + + syslog + + + + +
+ +
+ output (string) + + + + Depending on what is set as the output destination, this + value is interpreted as follows: + + - - 2011-06-15 13:48:22.034 - - The date and time at which the message was generated. - - - - ERROR - - The severity of the message. - - + + is console + + + The value of output must be one of stdout + (messages printed to standard output) or + stderr (messages printed to standard + error). + + + - - [b10-resolver.asiolink] - - The source of the message. This comprises two components: - the BIND 10 process generating the message (in this - case, b10-resolver) and the module - within the program from which the message originated - (which in the example is the asynchronous I/O link - module, asiolink). - - + + is file + + + The value of output is interpreted as a file name; + log messages will be appended to this file. + + + - - ASIODNS_OPENSOCK - + + is syslog + + + The value of output is interpreted as the + syslog facility (e.g. + local0) that should be used + for log messages. + + + + + + + + + The other options for are: + + + +
+ flush (true or false)
+
      Flush buffers after each log message. Doing this will
      reduce performance but will ensure that if the program
      terminates abnormally, all messages up to the point of
      termination are output.
+
+ +
+ maxsize (integer)
+
      Only relevant when destination is file, this is the maximum
      file size of output files in bytes. When the maximum
      size is reached, the file is renamed and a new file opened.
      (For example, a ".1" is appended to the name —
      if a ".1" file exists, it is renamed ".2",
      etc.)
+
      If this is 0, no maximum file size is used.
+
+ +
+ maxver (integer)
+
      Maximum number of old log files to keep around when
      rolling the output file. Only relevant when
      destination is file.
+
+ +
+ +
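+
      As a brief illustrative sketch (the logger and output option
      indices 0 are hypothetical; adjust them to your configuration),
      combining the options above to send a logger's output to the
      syslog facility local0 instead of a file might look like:
+
> config set Logging/loggers[0]/output_options[0]/destination syslog
> config set Logging/loggers[0]/output_options[0]/output local0
> config commit
+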
+ +
+ Example session + + + + In this example we want to set the global logging to + write to the file /var/log/my_bind10.log, + at severity WARN. We want the authoritative server to + log at DEBUG with debuglevel 40, to a different file + (/tmp/debug_messages). + + + + + + Start bindctl. + + + + + + ["login success "] +> config show Logging +Logging/loggers [] list + + + + + + + By default, no specific loggers are configured, in which + case the severity defaults to INFO and the output is + written to stderr. + + + + + + Let's first add a default logger: + + + + + + + > config add Logging/loggers +> config show Logging +Logging/loggers/ list (modified) + + + + + + + The loggers value line changed to indicate that it is no + longer an empty list: + + + + + + > config show Logging/loggers +Logging/loggers[0]/name "" string (default) +Logging/loggers[0]/severity "INFO" string (default) +Logging/loggers[0]/debuglevel 0 integer (default) +Logging/loggers[0]/additive false boolean (default) +Logging/loggers[0]/output_options [] list (default) + + + + + + + The name is mandatory, so we must set it. We will also + change the severity as well. Let's start with the global + logger. + + + + + + > config set Logging/loggers[0]/name * +> config set Logging/loggers[0]/severity WARN +> config show Logging/loggers +Logging/loggers[0]/name "*" string (modified) +Logging/loggers[0]/severity "WARN" string (modified) +Logging/loggers[0]/debuglevel 0 integer (default) +Logging/loggers[0]/additive false boolean (default) +Logging/loggers[0]/output_options [] list (default) + + + + + + + Of course, we need to specify where we want the log + messages to go, so we add an entry for an output option. + + + + + + > config add Logging/loggers[0]/output_options +> config show Logging/loggers[0]/output_options +Logging/loggers[0]/output_options[0]/destination "console" string (default) +Logging/loggers[0]/output_options[0]/output "stdout" string (default) +Logging/loggers[0]/output_options[0]/flush false boolean (default) +Logging/loggers[0]/output_options[0]/maxsize 0 integer (default) +Logging/loggers[0]/output_options[0]/maxver 0 integer (default) + + + + + + + + These aren't the values we are looking for. + + + + + + > config set Logging/loggers[0]/output_options[0]/destination file +> config set Logging/loggers[0]/output_options[0]/output /var/log/bind10.log +> config set Logging/loggers[0]/output_options[0]/maxsize 30000 +> config set Logging/loggers[0]/output_options[0]/maxver 8 + + + + + + + Which would make the entire configuration for this logger + look like: + + + + + + > config show all Logging/loggers +Logging/loggers[0]/name "*" string (modified) +Logging/loggers[0]/severity "WARN" string (modified) +Logging/loggers[0]/debuglevel 0 integer (default) +Logging/loggers[0]/additive false boolean (default) +Logging/loggers[0]/output_options[0]/destination "file" string (modified) +Logging/loggers[0]/output_options[0]/output "/var/log/bind10.log" string (modified) +Logging/loggers[0]/output_options[0]/flush false boolean (default) +Logging/loggers[0]/output_options[0]/maxsize 30000 integer (modified) +Logging/loggers[0]/output_options[0]/maxver 8 integer (modified) + + + + + + + That looks OK, so let's commit it before we add the + configuration for the authoritative server's logger. + + + + + + > config commit + + + + + + Now that we have set it, and checked each value along + the way, adding a second entry is quite similar. 
+ + + + + + > config add Logging/loggers +> config set Logging/loggers[1]/name Auth +> config set Logging/loggers[1]/severity DEBUG +> config set Logging/loggers[1]/debuglevel 40 +> config add Logging/loggers[1]/output_options +> config set Logging/loggers[1]/output_options[0]/destination file +> config set Logging/loggers[1]/output_options[0]/output /tmp/auth_debug.log +> config commit + + + + + + + And that's it. Once we have found whatever it was we + needed the debug messages for, we can simply remove the + second logger to let the authoritative server use the + same settings as the rest. + + + + + + > config remove Logging/loggers[1] +> config commit + + + + + + + And every module will now be using the values from the + logger named *. + + + +
+ +
+ +
+ Logging Message Format + + + Each message written by BIND 10 to the configured logging + destinations comprises a number of components that identify + the origin of the message and, if the message indicates + a problem, information about the problem that may be + useful in fixing it. + + + + Consider the message below logged to a file: + 2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink] + ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53) + + + + Note: the layout of messages written to the system logging + file (syslog) may be slightly different. This message has + been split across two lines here for display reasons; in the + logging file, it will appear on one line.) + + + + The log message comprises a number of components: + + + + 2011-06-15 13:48:22.034 + + + The date and time at which the message was generated. + + + + + ERROR + + The severity of the message. + + + + + [b10-resolver.asiolink] + + The source of the message. This comprises two components: + the BIND 10 process generating the message (in this + case, b10-resolver) and the module + within the program from which the message originated + (which in the example is the asynchronous I/O link + module, asiolink). + + + + + ASIODNS_OPENSOCK + The message identification. Every message in BIND 10 has a unique identification, which can be used as an index into the () from which more information can be obtained. - - + + - - error 111 opening TCP socket to 127.0.0.1(53) - - A brief description of the cause of the problem. Within this text, - information relating to the condition that caused the message to - be logged will be included. In this example, error number 111 - (an operating system-specific error number) was encountered when - trying to open a TCP connection to port 53 on the local system - (address 127.0.0.1). The next step would be to find out the reason - for the failure by consulting your system's documentation to - identify what error number 111 means. - - - + + error 111 opening TCP socket to 127.0.0.1(53) + + A brief description of the cause of the problem. + Within this text, information relating to the condition + that caused the message to be logged will be included. + In this example, error number 111 (an operating + system-specific error number) was encountered when + trying to open a TCP connection to port 53 on the + local system (address 127.0.0.1). The next step + would be to find out the reason for the failure by + consulting your system's documentation to identify + what error number 111 means. + + + + + +
-
diff --git a/doc/guide/bind10-messages.html b/doc/guide/bind10-messages.html index b075e96eb3..237b7adf80 100644 --- a/doc/guide/bind10-messages.html +++ b/doc/guide/bind10-messages.html @@ -1,10 +1,10 @@ -BIND 10 Messages Manual

BIND 10 Messages Manual

This is the messages manual for BIND 10 version - 20110519.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by +BIND 10 Messages Manual

BIND 10 Messages Manual

This is the messages manual for BIND 10 version + 20110809.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by Internet Systems Consortium (ISC). It includes DNS libraries and modular components for controlling authoritative and recursive DNS servers.

- This is the messages manual for BIND 10 version 20110519. + This is the messages manual for BIND 10 version 20110809. The most up-to-date version of this document, along with other documents for BIND 10, can be found at http://bind10.isc.org/docs. @@ -26,38 +26,635 @@ For information on configuring and using BIND 10 logging, refer to the BIND 10 Guide.

Chapter 2. BIND 10 Messages

-

ASIODNS_FETCHCOMP upstream fetch to %1(%2) has now completed

-A debug message, this records the the upstream fetch (a query made by the +

ASIODNS_FETCH_COMPLETED upstream fetch to %1(%2) has now completed

+A debug message, this records that the upstream fetch (a query made by the resolver on behalf of its client) to the specified address has completed. -

ASIODNS_FETCHSTOP upstream fetch to %1(%2) has been stopped

+

ASIODNS_FETCH_STOPPED upstream fetch to %1(%2) has been stopped

An external component has requested the halting of an upstream fetch. This is an allowed operation, and the message should only appear if debug is enabled. -

ASIODNS_OPENSOCK error %1 opening %2 socket to %3(%4)

+

ASIODNS_OPEN_SOCKET error %1 opening %2 socket to %3(%4)

The asynchronous I/O code encountered an error when trying to open a socket of the specified protocol in order to send a message to the target address. -The the number of the system error that cause the problem is given in the +The number of the system error that caused the problem is given in the message. -

ASIODNS_RECVSOCK error %1 reading %2 data from %3(%4)

-The asynchronous I/O code encountered an error when trying read data from -the specified address on the given protocol. The the number of the system -error that cause the problem is given in the message. -

ASIODNS_RECVTMO receive timeout while waiting for data from %1(%2)

+

ASIODNS_READ_DATA error %1 reading %2 data from %3(%4)

+The asynchronous I/O code encountered an error when trying to read data from +the specified address on the given protocol. The number of the system +error that caused the problem is given in the message. +

ASIODNS_READ_TIMEOUT receive timeout while waiting for data from %1(%2)

An upstream fetch from the specified address timed out. This may happen for any number of reasons and is most probably a problem at the remote server or a problem on the network. The message will only appear if debug is enabled. -

ASIODNS_SENDSOCK error %1 sending data using %2 to %3(%4)

-The asynchronous I/O code encountered an error when trying send data to -the specified address on the given protocol. The the number of the system -error that cause the problem is given in the message. -

ASIODNS_UNKORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3)

-This message should not appear and indicates an internal error if it does. -Please enter a bug report. -

ASIODNS_UNKRESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3)

-The termination method of the resolver's upstream fetch class was called with -an unknown result code (which is given in the message). This message should -not appear and may indicate an internal error. Please enter a bug report. +

ASIODNS_SEND_DATA error %1 sending data using %2 to %3(%4)

+The asynchronous I/O code encountered an error when trying to send data to +the specified address on the given protocol. The number of the system +error that caused the problem is given in the message. +

ASIODNS_UNKNOWN_ORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3)

+An internal consistency check on the origin of a message from the +asynchronous I/O module failed. This may indicate an internal error; +please submit a bug report. +

ASIODNS_UNKNOWN_RESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3)

+An internal error indicating that the termination method of the resolver's +upstream fetch class was called with an unknown result code (which is +given in the message). Please submit a bug report. +

AUTH_AXFR_ERROR error handling AXFR request: %1

+This is a debug message produced by the authoritative server when it +has encountered an error processing an AXFR request. The message gives +the reason for the error, and the server will return a SERVFAIL code to +the sender. +

AUTH_AXFR_UDP AXFR query received over UDP

+This is a debug message output when the authoritative server has received +an AXFR query over UDP. Use of UDP for AXFRs is not permitted by the +protocol, so the server will return a FORMERR error to the sender. +

AUTH_COMMAND_FAILED execution of command channel instruction '%1' failed: %2

+Execution of the specified command by the authoritative server failed. The +message contains the reason for the failure. +

AUTH_CONFIG_CHANNEL_CREATED configuration session channel created

+This is a debug message indicating that the authoritative server has created
+the channel to the configuration manager. It is issued during server
+startup and is an indication that the initialization is proceeding normally.
+

AUTH_CONFIG_CHANNEL_ESTABLISHED configuration session channel established

+This is a debug message indicating that the authoritative server
+has established communication with the configuration manager over the
+previously-created channel. It is issued during server startup and is an
+indication that the initialization is proceeding normally.
+

AUTH_CONFIG_CHANNEL_STARTED configuration session channel started

+This is a debug message, issued when the authoritative server has
+posted a request to be notified when new configuration information is
+available. It is issued during server startup and is an indication that
+the initialization is proceeding normally.
+

AUTH_CONFIG_LOAD_FAIL load of configuration failed: %1

+An attempt to configure the server with information from the configuration +database during the startup sequence has failed. (The reason for +the failure is given in the message.) The server will continue its +initialization although it may not be configured in the desired way. +

AUTH_CONFIG_UPDATE_FAIL update of configuration failed: %1

+An attempt to update the configuration of the server with information
+from the configuration database has failed, the reason being given in
+the message.
+

AUTH_DATA_SOURCE data source database file: %1

+This is a debug message produced by the authoritative server when it accesses a
+database data source, listing the file that is being accessed.
+

AUTH_DNS_SERVICES_CREATED DNS services created

+This is a debug message indicating that the component that will handle
+incoming queries for the authoritative server (DNSServices) has been
+successfully created. It is issued during server startup and is an indication
+that the initialization is proceeding normally.
+

AUTH_HEADER_PARSE_FAIL unable to parse header in received DNS packet: %1

+This is a debug message, generated by the authoritative server when an +attempt to parse the header of a received DNS packet has failed. (The +reason for the failure is given in the message.) The server will drop the +packet. +

AUTH_LOAD_TSIG loading TSIG keys

+This is a debug message indicating that the authoritative server
+has requested the keyring holding TSIG keys from the configuration
+database. It is issued during server startup and is an indication that the
+initialization is proceeding normally.
+

AUTH_LOAD_ZONE loaded zone %1/%2

+This debug message is issued during the processing of the 'loadzone' command +when the authoritative server has successfully loaded the named zone of the +named class. +

AUTH_MEM_DATASRC_DISABLED memory data source is disabled for class %1

+This is a debug message reporting that the authoritative server has +discovered that the memory data source is disabled for the given class. +

AUTH_MEM_DATASRC_ENABLED memory data source is enabled for class %1

+This is a debug message reporting that the authoritative server has +discovered that the memory data source is enabled for the given class. +

AUTH_NOTIFY_QUESTIONS invalid number of questions (%1) in incoming NOTIFY

+This debug message is logged by the authoritative server when it receives +a NOTIFY packet that contains zero or more than one question. (A valid +NOTIFY packet contains one question.) The server will return a FORMERR +error to the sender. +

AUTH_NOTIFY_RRTYPE invalid question RR type (%1) in incoming NOTIFY

+This debug message is logged by the authoritative server when it receives
+a NOTIFY packet that contains an RR type of something other than SOA in the
+question section. (The RR type received is included in the message.) The
+server will return a FORMERR error to the sender.
+

AUTH_NO_STATS_SESSION session interface for statistics is not available

+The authoritative server had no session with the statistics module at the +time it attempted to send it data: the attempt has been abandoned. This +could be an error in configuration. +

AUTH_NO_XFRIN received NOTIFY but XFRIN session is not running

+This is a debug message produced by the authoritative server when it receives +a NOTIFY packet but the XFRIN process is not running. The packet will be +dropped and nothing returned to the sender. +

AUTH_PACKET_PARSE_ERROR unable to parse received DNS packet: %1

+This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to something other +than a protocol error. The reason for the failure is given in the message; +the server will return a SERVFAIL error code to the sender. +

AUTH_PACKET_PROTOCOL_ERROR DNS packet protocol error: %1. Returning %2

+This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to a protocol error. +The reason for the failure is given in the message, as is the error code +that will be returned to the sender. +

AUTH_PACKET_RECEIVED message received:\n%1

+This is a debug message output by the authoritative server when it +receives a valid DNS packet. +

+Note: This message includes the packet received, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. +

AUTH_PROCESS_FAIL message processing failure: %1

+This message is generated by the authoritative server when it has +encountered an internal error whilst processing a received packet: +the cause of the error is included in the message. +

+The server will return a SERVFAIL error code to the sender of the packet. +This message indicates a potential error in the server. Please open a +bug ticket for this issue. +

AUTH_RECEIVED_COMMAND command '%1' received

+This is a debug message issued when the authoritative server has received +a command on the command channel. +

AUTH_RECEIVED_SENDSTATS command 'sendstats' received

+This is a debug message issued when the authoritative server has received +a command from the statistics module to send it data. The 'sendstats' +command is handled differently to other commands, which is why the debug +message associated with it has its own code. +

AUTH_RESPONSE_RECEIVED received response message, ignoring

+This is a debug message that is output if the authoritative server
+receives a DNS packet with the QR bit set, i.e. a DNS response. The
+server ignores the packet as it only responds to question packets.
+

AUTH_SEND_ERROR_RESPONSE sending an error response (%1 bytes):\n%2

+This is a debug message recording that the authoritative server is sending +an error response to the originator of the query. A previous message will +have recorded details of the failure. +

+Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. +

AUTH_SEND_NORMAL_RESPONSE sending an error response (%1 bytes):\n%2

+This is a debug message recording that the authoritative server is sending +a response to the originator of a query. +

+Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. +

AUTH_SERVER_CREATED server created

+An informational message indicating that the authoritative server process has +been created and is initializing. The AUTH_SERVER_STARTED message will be +output when initialization has successfully completed and the server starts +accepting queries. +

AUTH_SERVER_FAILED server failed: %1

+The authoritative server has encountered a fatal error and is terminating. The +reason for the failure is included in the message. +

AUTH_SERVER_STARTED server started

+Initialization of the authoritative server has completed successfully +and it is entering the main loop, waiting for queries to arrive. +

AUTH_SQLITE3 nothing to do for loading sqlite3

+This is a debug message indicating that the authoritative server has +found that the data source it is loading is an SQLite3 data source, +so no further validation is needed. +

AUTH_STATS_CHANNEL_CREATED STATS session channel created

+This is a debug message indicating that the authoritative server has
+created a channel to the statistics process. It is issued during server
+startup and is an indication that the initialization is proceeding normally.
+

AUTH_STATS_CHANNEL_ESTABLISHED STATS session channel established

+This is a debug message indicating that the authoritative server
+has established communication over the previously created statistics
+channel. It is issued during server startup and is an indication that the
+initialization is proceeding normally.
+

AUTH_STATS_COMMS communication error in sending statistics data: %1

+An error was encountered when the authoritative server tried to send data +to the statistics daemon. The message includes additional information +describing the reason for the failure. +

AUTH_STATS_TIMEOUT timeout while sending statistics data: %1

+The authoritative server sent data to the statistics daemon but received +no acknowledgement within the specified time. The message includes +additional information describing the reason for the failure. +

AUTH_STATS_TIMER_DISABLED statistics timer has been disabled

+This is a debug message indicating that the statistics timer has been +disabled in the authoritative server and no statistics information is +being produced. +

AUTH_STATS_TIMER_SET statistics timer set to %1 second(s)

+This is a debug message indicating that the statistics timer has been +enabled and that the authoritative server will produce statistics data +at the specified interval. +

AUTH_UNSUPPORTED_OPCODE unsupported opcode: %1

+This is a debug message, produced when a received DNS packet being +processed by the authoritative server has been found to contain an +unsupported opcode. (The opcode is included in the message.) The server +will return an error code of NOTIMPL to the sender. +

AUTH_XFRIN_CHANNEL_CREATED XFRIN session channel created

+This is a debug message indicating that the authoritative server has
+created a channel to the XFRIN (Transfer-in) process. It is issued
+during server startup and is an indication that the initialization is
+proceeding normally.
+

AUTH_XFRIN_CHANNEL_ESTABLISHED XFRIN session channel established

+This is a debug message indicating that the authoritative server has
+established communication over the previously-created channel to the
+XFRIN (Transfer-in) process. It is issued during server startup and is an
+indication that the initialization is proceeding normally.
+

AUTH_ZONEMGR_COMMS error communicating with zone manager: %1

+This is a debug message output during the processing of a NOTIFY request. +An error (listed in the message) has been encountered whilst communicating +with the zone manager. The NOTIFY request will not be honored. +

AUTH_ZONEMGR_ERROR received error response from zone manager: %1

+This is a debug message output during the processing of a NOTIFY +request. The zone manager component has been informed of the request, +but has returned an error response (which is included in the message). The +NOTIFY request will not be honored. +

BIND10_CHECK_MSGQ_ALREADY_RUNNING checking if msgq is already running

+The boss process is starting up and will now check if the message bus +daemon is already running. If so, it will not be able to start, as it +needs a dedicated message bus. +

BIND10_CONFIGURATION_START_AUTH start authoritative server: %1

+This message shows whether or not the authoritative server should be +started according to the configuration. +

BIND10_CONFIGURATION_START_RESOLVER start resolver: %1

+This message shows whether or not the resolver should be +started according to the configuration. +

BIND10_INVALID_USER invalid user: %1

+The boss process was started with the -u option, to drop root privileges +and continue running as the specified user, but the user is unknown. +

BIND10_KILLING_ALL_PROCESSES killing all started processes

+The boss module was not able to start every process it needed to start +during startup, and will now kill the processes that did get started. +

BIND10_KILL_PROCESS killing process %1

+The boss module is sending a kill signal to the process with the given name,
+as part of the process of killing all started processes during a failed
+startup, as described for BIND10_KILLING_ALL_PROCESSES.
+

BIND10_MSGQ_ALREADY_RUNNING msgq daemon already running, cannot start

+There already appears to be a message bus daemon running. Either an +old process was not shut down correctly, and needs to be killed, or +another instance of BIND10, with the same msgq domain socket, is +running, which needs to be stopped. +

BIND10_MSGQ_DAEMON_ENDED b10-msgq process died, shutting down

+The message bus daemon has died. This is a fatal error, since it may +leave the system in an inconsistent state. BIND10 will now shut down. +

BIND10_MSGQ_DISAPPEARED msgq channel disappeared

+While the boss module was listening for messages on the message bus
+channel, the channel suddenly disappeared. The msgq daemon may have died.
+This might lead to an inconsistent state of the system, and BIND 10 will
+now shut down.
+

BIND10_PROCESS_ENDED_NO_EXIT_STATUS process %1 (PID %2) died: exit status not available

+The given process ended unexpectedly, but no exit status is +available. See BIND10_PROCESS_ENDED_WITH_EXIT_STATUS for a longer +description. +

BIND10_PROCESS_ENDED_WITH_EXIT_STATUS process %1 (PID %2) terminated, exit status = %3

+The given process ended unexpectedly with the given exit status. +Depending on which module it was, it may simply be restarted, or it +may be a problem that will cause the boss module to shut down too. +The latter happens if it was the message bus daemon, which, if it has +died suddenly, may leave the system in an inconsistent state. BIND10 +will also shut down now if it has been run with --brittle. +

BIND10_READING_BOSS_CONFIGURATION reading boss configuration

+The boss process is starting up, and will now process the initial +configuration, as received from the configuration manager. +

BIND10_RECEIVED_COMMAND received command: %1

+The boss module received a command and shall now process it. The command +is printed. +

BIND10_RECEIVED_NEW_CONFIGURATION received new configuration: %1

+The boss module received a configuration update and is going to apply +it now. The new configuration is printed. +

BIND10_RECEIVED_SIGNAL received signal %1

+The boss module received the given signal. +

BIND10_RESURRECTED_PROCESS resurrected %1 (PID %2)

+The given process has been restarted successfully, and is now running +with the given process id. +

BIND10_RESURRECTING_PROCESS resurrecting dead %1 process...

+The given process has ended unexpectedly, and is now restarted. +

BIND10_SELECT_ERROR error in select() call: %1

+There was a fatal error in the call to select(), used to see if a child +process has ended or if there is a message on the message bus. This +should not happen under normal circumstances and is considered fatal, +so BIND 10 will now shut down. The specific error is printed. +

BIND10_SEND_SIGKILL sending SIGKILL to %1 (PID %2)

+The boss module is sending a SIGKILL signal to the given process. +

BIND10_SEND_SIGTERM sending SIGTERM to %1 (PID %2)

+The boss module is sending a SIGTERM signal to the given process. +

BIND10_SHUTDOWN stopping the server

+The boss process received a command or signal telling it to shut down. +It will send a shutdown command to each process. The processes that do +not shut down will then receive a SIGTERM signal. If that doesn't work, +it shall send SIGKILL signals to the processes still alive. +

BIND10_SHUTDOWN_COMPLETE all processes ended, shutdown complete

+All child processes have been stopped, and the boss process will now +stop itself. +

BIND10_SOCKCREATOR_BAD_CAUSE unknown error cause from socket creator: %1

+The socket creator reported an error when creating a socket. But the function +which failed is unknown (not one of 'S' for socket or 'B' for bind). +

BIND10_SOCKCREATOR_BAD_RESPONSE unknown response for socket request: %1

+The boss requested a socket from the creator, but the answer is unknown. This +looks like a programmer error. +

BIND10_SOCKCREATOR_CRASHED the socket creator crashed

+The socket creator terminated unexpectedly. It is not possible to restart it +(because the boss already gave up root privileges), so the system is going +to terminate. +

BIND10_SOCKCREATOR_EOF eof while expecting data from socket creator

+There should be more data from the socket creator, but it closed the socket. +It probably crashed. +

BIND10_SOCKCREATOR_INIT initializing socket creator parser

+The boss module initializes routines for parsing the socket creator +protocol. +

BIND10_SOCKCREATOR_KILL killing the socket creator

+The socket creator is being terminated forcibly, by sending it
+SIGKILL. This should not usually happen.
+

BIND10_SOCKCREATOR_TERMINATE terminating socket creator

+The boss module sends a request to terminate to the socket creator. +

BIND10_SOCKCREATOR_TRANSPORT_ERROR transport error when talking to the socket creator: %1

+Either sending or receiving data from the socket creator failed with the given +error. The creator probably crashed or some serious OS-level problem happened, +as the communication happens only on local host. +

BIND10_SOCKET_CREATED successfully created socket %1

+The socket creator successfully created and sent a requested socket; it has
+the given file number.
+

BIND10_SOCKET_ERROR error on %1 call in the creator: %2/%3

+The socket creator failed to create the requested socket. It failed on the
+indicated OS API function with the given error.
+

BIND10_SOCKET_GET requesting socket [%1]:%2 of type %3 from the creator

+The boss forwards a request for a socket to the socket creator. +

BIND10_STARTED_PROCESS started %1

+The given process has successfully been started. +

BIND10_STARTED_PROCESS_PID started %1 (PID %2)

+The given process has successfully been started, and has the given PID. +

BIND10_STARTING starting BIND10: %1

+Informational message on startup that shows the full version. +

BIND10_STARTING_PROCESS starting process %1

+The boss module is starting the given process. +

BIND10_STARTING_PROCESS_PORT starting process %1 (to listen on port %2)

+The boss module is starting the given process, which will listen on the +given port number. +

BIND10_STARTING_PROCESS_PORT_ADDRESS starting process %1 (to listen on %2#%3)

+The boss module is starting the given process, which will listen on the +given address and port number (written as <address>#<port>). +

BIND10_STARTUP_COMPLETE BIND 10 started

+All modules have been successfully started, and BIND 10 is now running. +

BIND10_STARTUP_ERROR error during startup: %1

+There was a fatal error when BIND10 was trying to start. The error is +shown, and BIND10 will now shut down. +

BIND10_START_AS_NON_ROOT starting %1 as a user, not root. This might fail.

+The given module is being started or restarted without root privileges. +If the module needs these privileges, it may have problems starting. +Note that this issue should be resolved by the pending 'socket-creator' +process; once that has been implemented, modules should not need root +privileges anymore. See tickets #800 and #801 for more information. +

BIND10_STOP_PROCESS asking %1 to shut down

+The boss module is sending a shutdown command to the given module over +the message channel. +

BIND10_UNKNOWN_CHILD_PROCESS_ENDED unknown child pid %1 exited

+An unknown child process has exited. The PID is printed, but no further +action will be taken by the boss process. +

CACHE_ENTRY_MISSING_RRSET missing RRset to generate message for %1

+The cache tried to generate the complete answer message. It knows the structure +of the message, but some of the RRsets to be put there are not in cache (they +probably expired already). Therefore it pretends the message was not found. +

CACHE_LOCALZONE_FOUND found entry with key %1 in local zone data

+Debug message, noting that the requested data was successfully found in the +local zone data of the cache. +

CACHE_LOCALZONE_UNKNOWN entry with key %1 not found in local zone data

+Debug message. The requested data was not found in the local zone data. +

CACHE_LOCALZONE_UPDATE updating local zone element at key %1

+Debug message issued when there's update to the local zone section of cache. +

CACHE_MESSAGES_DEINIT deinitialized message cache

+Debug message. It is issued when the server deinitializes the message cache. +

CACHE_MESSAGES_EXPIRED found an expired message entry for %1 in the message cache

+Debug message. The requested data was found in the message cache, but it +already expired. Therefore the cache removes the entry and pretends it found +nothing. +

CACHE_MESSAGES_FOUND found a message entry for %1 in the message cache

+Debug message. We found the whole message in the cache, so it can be returned +to user without any other lookups. +

CACHE_MESSAGES_INIT initialized message cache for %1 messages of class %2

+Debug message issued when a new message cache is created. It lists the class
+of messages it can hold and the maximum size of the cache.
+

CACHE_MESSAGES_REMOVE removing old instance of %1/%2/%3 first

+Debug message. This may follow CACHE_MESSAGES_UPDATE and indicates that, while
+updating, the old instance is being removed prior to inserting a new one.
+

CACHE_MESSAGES_UNCACHEABLE not inserting uncacheable message %1/%2/%3

+Debug message, noting that the given message can not be cached. This is because +there's no SOA record in the message. See RFC 2308 section 5 for more +information. +

CACHE_MESSAGES_UNKNOWN no entry for %1 found in the message cache

+Debug message. The message cache didn't find any entry for the given key. +

CACHE_MESSAGES_UPDATE updating message entry %1/%2/%3

+Debug message issued when the message cache is being updated with a new
+message. Either the old instance is removed or, if none is found, a new one
+is created.
+

CACHE_RESOLVER_DEEPEST looking up deepest NS for %1/%2

+Debug message. The resolver cache is looking up the deepest known nameserver, +so the resolution doesn't have to start from the root. +

CACHE_RESOLVER_INIT initializing resolver cache for class %1

+Debug message. The resolver cache is being created for this given class. +

CACHE_RESOLVER_INIT_INFO initializing resolver cache for class %1

+Debug message, the resolver cache is being created for this given class. The +difference from CACHE_RESOLVER_INIT is only in different format of passed +information, otherwise it does the same. +

CACHE_RESOLVER_LOCAL_MSG message for %1/%2 found in local zone data

+Debug message. The resolver cache found a complete message for the user query +in the zone data. +

CACHE_RESOLVER_LOCAL_RRSET RRset for %1/%2 found in local zone data

+Debug message. The resolver cache found a requested RRset in the local zone +data. +

CACHE_RESOLVER_LOOKUP_MSG looking up message in resolver cache for %1/%2

+Debug message. The resolver cache is trying to find a message to answer the +user query. +

CACHE_RESOLVER_LOOKUP_RRSET looking up RRset in resolver cache for %1/%2

+Debug message. The resolver cache is trying to find an RRset (which usually
+originates internally from the resolver).
+

CACHE_RESOLVER_NO_QUESTION answer message for %1/%2 has empty question section

+The cache tried to fill in found data into the response message. But it +discovered the message contains no question section, which is invalid. +This is likely a programmer error, please submit a bug report. +

CACHE_RESOLVER_UNKNOWN_CLASS_MSG no cache for class %1

+Debug message. While trying to lookup a message in the resolver cache, it was +discovered there's no cache for this class at all. Therefore no message is +found. +

CACHE_RESOLVER_UNKNOWN_CLASS_RRSET no cache for class %1

+Debug message. While trying to lookup an RRset in the resolver cache, it was +discovered there's no cache for this class at all. Therefore no data is found. +

CACHE_RESOLVER_UPDATE_MSG updating message for %1/%2/%3

+Debug message. The resolver is updating a message in the cache. +

CACHE_RESOLVER_UPDATE_RRSET updating RRset for %1/%2/%3

+Debug message. The resolver is updating an RRset in the cache. +

CACHE_RESOLVER_UPDATE_UNKNOWN_CLASS_MSG no cache for class %1

+Debug message. While trying to insert a message into the cache, it was +discovered that there's no cache for the class of message. Therefore +the message will not be cached. +

CACHE_RESOLVER_UPDATE_UNKNOWN_CLASS_RRSET no cache for class %1

+Debug message. While trying to insert an RRset into the cache, it was
+discovered that there's no cache for the class of the RRset. Therefore
+the RRset will not be cached.
+

CACHE_RRSET_EXPIRED found expired RRset %1/%2/%3

+Debug message. The requested data was found in the RRset cache. However, it is +expired, so the cache removed it and is going to pretend nothing was found. +

CACHE_RRSET_INIT initializing RRset cache for %1 RRsets of class %2

+Debug message. The RRset cache to hold at most this many RRsets for the given +class is being created. +

CACHE_RRSET_LOOKUP looking up %1/%2/%3 in RRset cache

+Debug message. The resolver is trying to look up data in the RRset cache. +

CACHE_RRSET_NOT_FOUND no RRset found for %1/%2/%3

+Debug message which can follow CACHE_RRSET_LOOKUP. This means the data is not +in the cache. +

CACHE_RRSET_REMOVE_OLD removing old RRset for %1/%2/%3 to make space for new one

+Debug message which can follow CACHE_RRSET_UPDATE. During the update, the cache +removed an old instance of the RRset to replace it with the new one. +

CACHE_RRSET_UNTRUSTED not replacing old RRset for %1/%2/%3, it has higher trust level

+Debug message which can follow CACHE_RRSET_UPDATE. The cache already holds the +same RRset, but from more trusted source, so the old one is kept and new one +ignored. +

CACHE_RRSET_UPDATE updating RRset %1/%2/%3 in the cache

+Debug message. The RRset is updating its data with this given RRset. +

CC_ASYNC_READ_FAILED asynchronous read failed

+This marks a low level error, we tried to read data from the message queue +daemon asynchronously, but the ASIO library returned an error. +

CC_CONN_ERROR error connecting to message queue (%1)

+It is impossible to reach the message queue daemon for the reason given. It +is unlikely there'll be reason for whatever program this currently is to +continue running, as the communication with the rest of BIND 10 is vital +for the components. +

CC_DISCONNECT disconnecting from message queue daemon

+The library is disconnecting from the message queue daemon. This debug message +indicates that the program is trying to shut down gracefully. +

CC_ESTABLISH trying to establish connection with message queue daemon at %1

+This debug message indicates that the command channel library is about to +connect to the message queue daemon, which should be listening on the UNIX-domain +socket listed in the output. +

CC_ESTABLISHED successfully connected to message queue daemon

+This debug message indicates that the connection was successfully made, this +should follow CC_ESTABLISH. +

CC_GROUP_RECEIVE trying to receive a message

+Debug message, noting that a message is expected to come over the command +channel. +

CC_GROUP_RECEIVED message arrived ('%1', '%2')

+Debug message, noting that we successfully received a message (its envelope and
+payload listed). This follows CC_GROUP_RECEIVE, but might happen some time
+later, depending on whether we waited for it or just polled.
+

CC_GROUP_SEND sending message '%1' to group '%2'

+Debug message, we're about to send a message over the command channel. +

CC_INVALID_LENGTHS invalid length parameters (%1, %2)

+This happens when garbage comes over the command channel or some kind of +confusion happens in the program. The data received from the socket make no +sense if we interpret it as lengths of message. The first one is total length +of the message; the second is the length of the header. The header +and its length (2 bytes) is counted in the total length. +

CC_LENGTH_NOT_READY length not ready

+There should be data representing the length of message on the socket, but it +is not there. +

CC_NO_MESSAGE no message ready to be received yet

+The program polled for incoming messages, but there was no message waiting. +This is a debug message which may happen only after CC_GROUP_RECEIVE. +

CC_NO_MSGQ unable to connect to message queue (%1)

+It isn't possible to connect to the message queue daemon, for the reason
+listed. It is unlikely any program will be able to continue without the
+communication.
+

CC_READ_ERROR error reading data from command channel (%1)

+A low level error happened when the library tried to read data from the +command channel socket. The reason is listed. +

CC_READ_EXCEPTION error reading data from command channel (%1)

+We received an exception while trying to read data from the command +channel socket. The reason is listed. +

CC_REPLY replying to message from '%1' with '%2'

+Debug message, noting we're sending a response to the original message +with the given envelope. +

CC_SET_TIMEOUT setting timeout to %1ms

+Debug message. A timeout for which the program is willing to wait for a reply +is being set. +

CC_START_READ starting asynchronous read

+Debug message. From now on, when a message (or command) comes, it'll wake the +program and the library will automatically pass it over to correct place. +

CC_SUBSCRIBE subscribing to communication group %1

+Debug message. The program wants to receive messages addressed to this group. +

CC_TIMEOUT timeout reading data from command channel

+The program waited too long for data from the command channel (usually when it
+sent a query to a different program and it didn't answer for whatever reason).
+

CC_UNSUBSCRIBE unsubscribing from communication group %1

+Debug message. The program no longer wants to receive messages addressed to +this group. +

CC_WRITE_ERROR error writing data to command channel (%1)

+A low level error happened when the library tried to write data to the command +channel socket. +

CC_ZERO_LENGTH invalid message length (0)

+The library received a message length being zero, which makes no sense, since +all messages must contain at least the envelope. +

CFGMGR_AUTOMATIC_CONFIG_DATABASE_UPDATE Updating configuration database from version %1 to %2

+An older version of the configuration database has been found, from which +there was an automatic upgrade path to the current version. These changes +are now applied, and no action from the administrator is necessary. +

CFGMGR_BAD_UPDATE_RESPONSE_FROM_MODULE Unable to parse response from module %1: %2

+The configuration manager sent a configuration update to a module, but +the module responded with an answer that could not be parsed. The answer +message appears to be invalid JSON data, or not decodable to a string. +This is likely to be a problem in the module in question. The update is +assumed to have failed, and will not be stored. +

CFGMGR_CC_SESSION_ERROR Error connecting to command channel: %1

+The configuration manager daemon was unable to connect to the messaging +system. The most likely cause is that msgq is not running. +

CFGMGR_DATA_READ_ERROR error reading configuration database from disk: %1

+There was a problem reading the persistent configuration data as stored +on disk. The file may be corrupted, or it is of a version from where +there is no automatic upgrade path. The file needs to be repaired or +removed. The configuration manager daemon will now shut down. +

CFGMGR_IOERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1

+There was an IO error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the directory where +the file is stored does not exist, or is not writable. The updated +configuration is not stored. +

CFGMGR_OSERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1

+There was an OS error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the system does not have +write access to the configuration database file. The updated +configuration is not stored. +

CFGMGR_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the cfgmgr daemon. The +daemon will now shut down. +

CMDCTL_BAD_CONFIG_DATA error in config data: %1

+There was an error reading the updated configuration data. The specific +error is printed. +

CMDCTL_BAD_PASSWORD bad password for user: %1

+A login attempt was made to b10-cmdctl, but the password was wrong. +Users can be managed with the tool b10-cmdctl-usermgr. +

CMDCTL_CC_SESSION_ERROR error reading from cc channel: %1

+There was a problem reading from the command and control channel. The +most likely cause is that the message bus daemon is not running. +

CMDCTL_CC_SESSION_TIMEOUT timeout on cc channel

+A timeout occurred when waiting for essential data from the cc session. +This usually occurs when b10-cfgmgr is not running or not responding. +Since we are waiting for essential information, this is a fatal error, +and the cmdctl daemon will now shut down. +

CMDCTL_COMMAND_ERROR error in command %1 to module %2: %3

+An error was encountered sending the given command to the given module. +Either there was a communication problem with the module, or the module +was not able to process the command, and sent back an error. The +specific error is printed in the message. +

CMDCTL_COMMAND_SENT command '%1' to module '%2' was sent

+This debug message indicates that the given command has been sent to +the given module. +

CMDCTL_NO_SUCH_USER username not found in user database: %1

+A login attempt was made to b10-cmdctl, but the username was not known. +Users can be added with the tool b10-cmdctl-usermgr. +

CMDCTL_NO_USER_ENTRIES_READ failed to read user information, all users will be denied

+The b10-cmdctl daemon was unable to find any user data in the user +database file. Either it was unable to read the file (in which case +this message follows a message CMDCTL_USER_DATABASE_READ_ERROR +containing a specific error), or the file was empty. Users can be added +with the tool b10-cmdctl-usermgr. +

CMDCTL_SEND_COMMAND sending command %1 to module %2

+This debug message indicates that the given command is being sent to +the given module. +

CMDCTL_SSL_SETUP_FAILURE_USER_DENIED failed to create an SSL connection (user denied): %1

+The user was denied because the SSL connection could not successfully +be set up. The specific error is given in the log message. Possible +causes may be that the ssl request itself was bad, or the local key or +certificate file could not be read. +

CMDCTL_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the cmdctl daemon. The +daemon will now shut down. +

CMDCTL_UNCAUGHT_EXCEPTION uncaught exception: %1

+The b10-cmdctl daemon encountered an uncaught exception and +will now shut down. This is indicative of a programming error and +should not happen under normal circumstances. The exception message +is printed. +

CMDCTL_USER_DATABASE_READ_ERROR failed to read user database file %1: %2

+The b10-cmdctl daemon was unable to read the user database file. The +file may be unreadable for the daemon, or it may be corrupted. In the +latter case, it can be recreated with b10-cmdctl-usermgr. The specific +error is printed in the log message.

CONFIG_CCSESSION_MSG error in CC session message: %1

There was a problem with an incoming message on the command and control channel. The message does not appear to be a valid command, and is @@ -65,77 +662,152 @@ missing a required element or contains an unknown data format. This most likely means that another BIND10 module is sending a bad message. The message itself is ignored by this module.

CONFIG_CCSESSION_MSG_INTERNAL error handling CC session message: %1

-There was an internal problem handling an incoming message on the -command and control channel. An unexpected exception was thrown. This -most likely points to an internal inconsistency in the module code. The -exception message is appended to the log error, and the module will -continue to run, but will not send back an answer. -

CONFIG_FOPEN_ERR error opening %1: %2

-There was an error opening the given file. -

CONFIG_JSON_PARSE JSON parse error in %1: %2

-There was a parse error in the JSON file. The given file does not appear -to be in valid JSON format. Please verify that the filename is correct -and that the contents are valid JSON. -

CONFIG_MANAGER_CONFIG error getting configuration from cfgmgr: %1

+There was an internal problem handling an incoming message on the command +and control channel. An unexpected exception was thrown, details of +which are appended to the message. The module will continue to run, +but will not send back an answer. +

+The most likely cause of this error is a programming error. Please raise +a bug report. +

CONFIG_GET_FAIL error getting configuration from cfgmgr: %1

The configuration manager returned an error when this module requested the configuration. The full error message answer from the configuration manager is appended to the log error. The most likely cause is that the module is of a different (command specification) version than the running configuration manager. -

CONFIG_MANAGER_MOD_SPEC module specification not accepted by cfgmgr: %1

-The module specification file for this module was rejected by the -configuration manager. The full error message answer from the -configuration manager is appended to the log error. The most likely -cause is that the module is of a different (specification file) version -than the running configuration manager. -

CONFIG_MODULE_SPEC module specification error in %1: %2

-The given file does not appear to be a valid specification file. Please -verify that the filename is correct and that its contents are a valid -BIND10 module specification. +

CONFIG_GET_FAILED error getting configuration from cfgmgr: %1

+The configuration manager returned an error response when the module +requested its configuration. The full error message answer from the +configuration manager is appended to the log error. +

CONFIG_JSON_PARSE JSON parse error in %1: %2

+There was an error parsing the JSON file. The given file does not appear +to be in valid JSON format. Please verify that the filename is correct +and that the contents are valid JSON. +

CONFIG_LOG_CONFIG_ERRORS error(s) in logging configuration: %1

+There was a logging configuration update, but the internal validator +for logging configuration found that it contained errors. The errors +are shown, and the update is ignored. +

CONFIG_LOG_EXPLICIT will use logging configuration for explicitly-named logger %1

+This is a debug message. When processing the "loggers" part of the +configuration file, the configuration library found an entry for the named +logger that matches the logger specification for the program. The logging +configuration for the program will be updated with the information. +

CONFIG_LOG_IGNORE_EXPLICIT ignoring logging configuration for explicitly-named logger %1

+This is a debug message. When processing the "loggers" part of the +configuration file, the configuration library found an entry for the +named logger. As this does not match the logger specification for the +program, it has been ignored. +

CONFIG_LOG_IGNORE_WILD ignoring logging configuration for wildcard logger %1

+This is a debug message. When processing the "loggers" part of the +configuration file, the configuration library found the named wildcard +entry (one containing the "*" character) that matched a logger already +matched by an explicitly named entry. The configuration is ignored. +

CONFIG_LOG_WILD_MATCH will use logging configuration for wildcard logger %1

+This is a debug message. When processing the "loggers" part of +the configuration file, the configuration library found the named +wildcard entry (one containing the "*" character) that matches a logger +specification in the program. The logging configuration for the program +will be updated with the information. +
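As a rough illustration of how these matching rules interact (the configuration item names used here, Logging/loggers with its "name" and "severity" fields, and the logger name "Resolver" are assumptions for illustration and may differ from the actual specification), a setup with one wildcard entry and one explicitly-named entry could look like this:

   > config add Logging/loggers
   > config set Logging/loggers[0]/name "*"
   > config set Logging/loggers[0]/severity "WARN"
   > config add Logging/loggers
   > config set Logging/loggers[1]/name "Resolver"
   > config set Logging/loggers[1]/severity "DEBUG"

With such a configuration, a program whose logger specification matches the explicitly-named entry would use that entry (CONFIG_LOG_EXPLICIT) and ignore the wildcard for that logger (CONFIG_LOG_IGNORE_WILD), while other programs would pick up the wildcard entry (CONFIG_LOG_WILD_MATCH).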

CONFIG_MOD_SPEC_FORMAT module specification error in %1: %2

+The given file does not appear to be a valid specification file: details +are included in the message. Please verify that the filename is correct +and that its contents are a valid BIND10 module specification. +

CONFIG_MOD_SPEC_REJECT module specification rejected by cfgmgr: %1

+The specification file for this module was rejected by the configuration +manager. The full error message answer from the configuration manager is +appended to the log error. The most likely cause is that the module is of +a different (specification file) version than the running configuration +manager. +

CONFIG_OPEN_FAIL error opening %1: %2

+There was an error opening the given file. The reason for the failure +is included in the message.

DATASRC_CACHE_CREATE creating the hotspot cache

-Debug information that the hotspot cache was created at startup. +This is a debug message issued during startup when the hotspot cache +is created.

DATASRC_CACHE_DESTROY destroying the hotspot cache

Debug information. The hotspot cache is being destroyed. -

DATASRC_CACHE_DISABLE disabling the cache

-The hotspot cache is disabled from now on. It is not going to store -information or return anything. -

DATASRC_CACHE_ENABLE enabling the cache

-The hotspot cache is enabled from now on. -

DATASRC_CACHE_EXPIRED the item '%1' is expired

-Debug information. There was an attempt to look up an item in the hotspot -cache. And the item was actually there, but it was too old, so it was removed -instead and nothing is reported (the external behaviour is the same as with -CACHE_NOT_FOUND). +

DATASRC_CACHE_DISABLE disabling the hotspot cache

+A debug message issued when the hotspot cache is disabled. +

DATASRC_CACHE_ENABLE enabling the hotspot cache

+A debug message issued when the hotspot cache is enabled. +

DATASRC_CACHE_EXPIRED item '%1' in the hotspot cache has expired

+A debug message issued when a hotspot cache lookup located the item but it +had expired. The item was removed and the program proceeded as if the item +had not been found.

DATASRC_CACHE_FOUND the item '%1' was found

-Debug information. An item was successfully looked up in the hotspot cache. -

DATASRC_CACHE_FULL cache is full, dropping oldest

+Debug information. An item was successfully located in the hotspot cache. +

DATASRC_CACHE_FULL hotspot cache is full, dropping oldest

Debug information. After inserting an item into the hotspot cache, the maximum number of items was exceeded, so the least recently used item will be dropped. This should be directly followed by CACHE_REMOVE. -

DATASRC_CACHE_INSERT inserting item '%1' into the cache

-Debug information. It means a new item is being inserted into the hotspot +

DATASRC_CACHE_INSERT inserting item '%1' into the hotspot cache

+A debug message indicating that a new item is being inserted into the hotspot cache. -

DATASRC_CACHE_NOT_FOUND the item '%1' was not found

-Debug information. It was attempted to look up an item in the hotspot cache, -but it is not there. -

DATASRC_CACHE_OLD_FOUND older instance of cache item found, replacing

+

DATASRC_CACHE_NOT_FOUND the item '%1' was not found in the hotspot cache

+A debug message issued when hotspot cache was searched for the specified +item but it was not found. +

DATASRC_CACHE_OLD_FOUND older instance of hotspot cache item '%1' found, replacing

Debug information. While inserting an item into the hotspot cache, an older -instance of an item with the same name was found. The old instance will be -removed. This should be directly followed by CACHE_REMOVE. -

DATASRC_CACHE_REMOVE removing '%1' from the cache

+instance of an item with the same name was found; the old instance will be +removed. This will be directly followed by CACHE_REMOVE. +

DATASRC_CACHE_REMOVE removing '%1' from the hotspot cache

Debug information. An item is being removed from the hotspot cache. -

DATASRC_CACHE_SLOTS setting the cache size to '%1', dropping '%2' items

+

DATASRC_CACHE_SLOTS setting the hotspot cache size to '%1', dropping '%2' items

The maximum allowed number of items in the hotspot cache is set to the given number. If there are too many, some of them will be dropped. A size of 0 means no limit. +

DATASRC_DATABASE_FIND_ERROR error retrieving data from datasource %1: %2

+This was an internal error while reading data from a datasource. This can either +mean the specific data source implementation is not behaving correctly, or the +data it provides is invalid. The current search is aborted. +The error message contains specific information about the error. +

DATASRC_DATABASE_FIND_RECORDS looking in datasource %1 for record %2/%3

+Debug information. The database data source is looking up records with the given +name and type in the database. +

DATASRC_DATABASE_FIND_TTL_MISMATCH TTL values differ in %1 for elements of %2/%3/%4, setting to %5

+The datasource backend provided resource records for the given RRset with +different TTL values. The TTL of the RRset is set to the lowest value, which +is printed in the log message. +
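As a minimal sketch of the behaviour described above (this is illustrative Python, not the data source code itself, and the record representation is made up), the RRset TTL is simply the minimum of the TTLs reported for the individual records:

    # Illustrative only: pick the RRset TTL when backend records disagree.
    def rrset_ttl(record_ttls):
        """Return the TTL to use for an RRset whose records carry differing TTLs."""
        if not record_ttls:
            raise ValueError("an RRset must contain at least one record")
        # The lowest value wins, as reported by DATASRC_DATABASE_FIND_TTL_MISMATCH.
        return min(record_ttls)

    assert rrset_ttl([3600, 300, 7200]) == 300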

DATASRC_DATABASE_FIND_UNCAUGHT_ERROR uncaught general error retrieving data from datasource %1: %2

+There was an uncaught general exception while reading data from a datasource. +This most likely points to a logic error in the code, and can be considered a +bug. The current search is aborted. Specific information about the exception is +printed in this error message. +

DATASRC_DATABASE_FIND_UNCAUGHT_ISC_ERROR uncaught error retrieving data from datasource %1: %2

+There was an uncaught ISC exception while reading data from a datasource. This +most likely points to a logic error in the code, and can be considered a bug. +The current search is aborted. Specific information about the exception is +printed in this error message. +

DATASRC_DATABASE_FOUND_DELEGATION Found delegation at %2 in %1

+When searching for a domain, the program encountered a delegation to a different zone +at the given domain name. It will return that one instead. +

DATASRC_DATABASE_FOUND_DELEGATION_EXACT Found delegation at %2 (exact match) in %1

+The program found the domain requested, but it is a delegation point to a +different zone, therefore it is not authoritative for this domain name. +It will return the NS record instead. +

DATASRC_DATABASE_FOUND_DNAME Found DNAME at %2 in %1

+When searching for a domain, the program encountered a DNAME redirection to a different +place in the domain space at the given domain name. It will return that one +instead. +

DATASRC_DATABASE_FOUND_NXDOMAIN search in datasource %1 resulted in NXDOMAIN for %2/%3/%4

+The data returned by the database backend did not contain any data for the given +domain name, class and type. +

DATASRC_DATABASE_FOUND_NXRRSET search in datasource %1 resulted in NXRRSET for %2/%3/%4

+The data returned by the database backend contained data for the given domain +name and class, but not for the given type. +

DATASRC_DATABASE_FOUND_RRSET search in datasource %1 resulted in RRset %2

+The data returned by the database backend contained data for the given domain +name, and it either matches the type or has a relevant type. The RRset that is +returned is printed.

DATASRC_DO_QUERY handling query for '%1/%2'

-Debug information. We're processing some internal query for given name and -type. +A debug message indicating that a query for the given name and RR type is being +processed.

DATASRC_MEM_ADD_RRSET adding RRset '%1/%2' into zone '%3'

Debug information. An RRset is being added to the in-memory data source.

DATASRC_MEM_ADD_WILDCARD adding wildcards for '%1'

-Debug information. Some special marks above each * in wildcard name are needed. -They are being added now for this name. +This is a debug message issued during the processing of a wildcard +name. The internal domain name tree is scanned and some nodes are +specially marked to allow the wildcard lookup to succeed.

DATASRC_MEM_ADD_ZONE adding zone '%1/%2'

Debug information. A zone is being added into the in-memory data source.

DATASRC_MEM_ANY_SUCCESS ANY query for '%1' successful

@@ -146,7 +818,7 @@ Debug information. The requested domain is an alias to a different domain, returning the CNAME instead.

DATASRC_MEM_CNAME_COEXIST can't add data to CNAME in domain '%1'

This is the same problem as in MEM_CNAME_TO_NONEMPTY, but it happened the other way around -- adding some other data to CNAME.

DATASRC_MEM_CNAME_TO_NONEMPTY can't add CNAME to domain with other data in '%1'

Someone or something tried to add a CNAME into a domain that already contains some other data. But the protocol forbids coexistence of CNAME with anything
@@ -164,10 +836,10 @@
encountered on the way. This may lead to redirection to a different domain and stop the search.

DATASRC_MEM_DNAME_FOUND DNAME found at '%1'

Debug information. A DNAME was found instead of the requested information. -

DATASRC_MEM_DNAME_NS dNAME and NS can't coexist in non-apex domain '%1'

-It was requested for DNAME and NS records to be put into the same domain -which is not the apex (the top of the zone). This is forbidden by RFC -2672, section 3. This indicates a problem with provided data. +

DATASRC_MEM_DNAME_NS DNAME and NS can't coexist in non-apex domain '%1'

+A request was made for DNAME and NS records to be put into the same +domain which is not the apex (the top of the zone). This is forbidden +by RFC 2672 (section 3) and indicates a problem with provided data.

DATASRC_MEM_DOMAIN_EMPTY requested domain '%1' is empty

Debug information. The requested domain exists in the tree of domains, but it is empty. Therefore it doesn't contain the requested resource type.
@@ -186,7 +858,7 @@
Debug information. A zone object for this zone is being searched for in the in-memory data source.

DATASRC_MEM_LOAD loading zone '%1' from file '%2'

Debug information. The content of master file is being loaded into the memory. -

DATASRC_MEM_NOTFOUND requested domain '%1' not found

+

DATASRC_MEM_NOT_FOUND requested domain '%1' not found

Debug information. The requested domain does not exist.

DATASRC_MEM_NS_ENCOUNTERED encountered a NS

Debug information. While searching for the requested domain, a NS was
@@ -222,21 +894,21 @@
destroyed.
Debug information. A domain above wildcard was reached, but there's something below the requested domain. Therefore the wildcard doesn't apply here. This behaviour is specified by RFC 1034, section 4.3.3 -

DATASRC_MEM_WILDCARD_DNAME dNAME record in wildcard domain '%1'

+

DATASRC_MEM_WILDCARD_DNAME DNAME record in wildcard domain '%1'

The software refuses to load DNAME records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should behave and BIND 9 refuses that as well. Please describe your intention using different tools. -

DATASRC_MEM_WILDCARD_NS nS record in wildcard domain '%1'

+

DATASRC_MEM_WILDCARD_NS NS record in wildcard domain '%1'

The software refuses to load NS records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should behave and BIND 9 refuses that as well. Please describe your intention using different tools.

DATASRC_META_ADD adding a data source into meta data source

-Debug information. Yet another data source is being added into the meta data -source. (probably at startup or reconfiguration) +This is a debug message issued during startup or reconfiguration. +Another data source is being added into the meta data source.

DATASRC_META_ADD_CLASS_MISMATCH mismatch between classes '%1' and '%2'

-It was attempted to add a data source into a meta data source. But their +It was attempted to add a data source into a meta data source, but their classes do not match.

DATASRC_META_REMOVE removing data source from meta data source

Debug information. A data source is being removed from meta data source. @@ -257,10 +929,10 @@ specific error already.

DATASRC_QUERY_BAD_REFERRAL bad referral to '%1'

The domain lives in another zone. But it is not possible to generate referral information for it. -

DATASRC_QUERY_CACHED data for %1/%2 found in cache

+

DATASRC_QUERY_CACHED data for %1/%2 found in hotspot cache

Debug information. The requested data were found in the hotspot cache, so no query is sent to the real data source. -

DATASRC_QUERY_CHECK_CACHE checking cache for '%1/%2'

+

DATASRC_QUERY_CHECK_CACHE checking hotspot cache for '%1/%2'

Debug information. While processing a query, lookup to the hotspot cache is being made.

DATASRC_QUERY_COPY_AUTH copying authoritative section into message

@@ -269,20 +941,19 @@ response message.

DATASRC_QUERY_DELEGATION looking for delegation on the path to '%1'

Debug information. The software is trying to identify delegation points on the way down to the given domain. -

DATASRC_QUERY_EMPTY_CNAME cNAME at '%1' is empty

-There was an CNAME and it was being followed. But it contains no records, -so there's nowhere to go. There will be no answer. This indicates a problem -with supplied data. -We tried to follow +

DATASRC_QUERY_EMPTY_CNAME CNAME at '%1' is empty

+A CNAME chain was being followed and an entry was found that pointed +to a domain name that had no RRsets associated with it. As a result, +the query cannot be answered. This indicates a problem with supplied data.

DATASRC_QUERY_EMPTY_DNAME the DNAME on '%1' is empty

During an attempt to synthesize CNAME from this DNAME it was discovered the DNAME is empty (it has no records). This indicates problem with supplied data.

DATASRC_QUERY_FAIL query failed

Some subtask of query processing failed. The reason should have been reported already and a SERVFAIL will be returned to the querying system.

DATASRC_QUERY_FOLLOW_CNAME following CNAME at '%1'

-Debug information. The domain is a CNAME (or a DNAME and we created a CNAME -for it already), so it's being followed. +Debug information. The domain is a CNAME (or a DNAME and a CNAME for it +has already been created) and the search is following this chain.

DATASRC_QUERY_GET_MX_ADDITIONAL addition of A/AAAA for '%1' requested by MX '%2'

Debug information. While processing a query, an MX record was encountered. It references the mentioned address, so A/AAAA records for it are looked up
@@ -301,12 +972,12 @@
operation code.

DATASRC_QUERY_IS_AUTH auth query (%1/%2)

Debug information. The last DO_QUERY is an auth query.

DATASRC_QUERY_IS_GLUE glue query (%1/%2)

-Debug information. The last DO_QUERY is query for glue addresses. +Debug information. The last DO_QUERY is a query for glue addresses.

DATASRC_QUERY_IS_NOGLUE query for non-glue addresses (%1/%2)

-Debug information. The last DO_QUERY is query for addresses that are not +Debug information. The last DO_QUERY is a query for addresses that are not glue.

DATASRC_QUERY_IS_REF query for referral (%1/%2)

-Debug information. The last DO_QUERY is query for referral information. +Debug information. The last DO_QUERY is a query for referral information.

DATASRC_QUERY_IS_SIMPLE simple query (%1/%2)

Debug information. The last DO_QUERY is a simple query.

DATASRC_QUERY_MISPLACED_TASK task of this type should not be here

@@ -324,10 +995,10 @@
does not have one. This indicates a problem with provided data.
The underlying data source failed to answer the no-glue query. 1 means some error, 2 is not implemented. The data source should have logged the specific error already. -

DATASRC_QUERY_NO_CACHE_ANY_AUTH ignoring cache for ANY query (%1/%2 in %3 class)

+

DATASRC_QUERY_NO_CACHE_ANY_AUTH ignoring hotspot cache for ANY query (%1/%2 in %3 class)

Debug information. The hotspot cache is ignored for authoritative ANY queries for consistency reasons. -

DATASRC_QUERY_NO_CACHE_ANY_SIMPLE ignoring cache for ANY query (%1/%2 in %3 class)

+

DATASRC_QUERY_NO_CACHE_ANY_SIMPLE ignoring hotspot cache for ANY query (%1/%2 in %3 class)

Debug information. The hotspot cache is ignored for ANY queries for consistency reasons.

DATASRC_QUERY_NO_DS_NSEC there's no DS record in the '%1' zone

@@ -341,7 +1012,7 @@
Lookup of the domain failed because the data have no zone that contains the domain. Maybe someone sent a query to the wrong server for some reason.

DATASRC_QUERY_PROCESS processing query '%1/%2' in the '%3' class

Debug information. A sure query is being processed now. -

DATASRC_QUERY_PROVENX_FAIL unable to prove nonexistence of '%1'

+

DATASRC_QUERY_PROVE_NX_FAIL unable to prove nonexistence of '%1'

The user wants DNSSEC and we discovered the entity (either the domain or the record) doesn't exist. But there was an error getting the NSEC/NSEC3 record to prove the nonexistence.
@@ -357,13 +1028,13 @@
The underlying data source failed to answer the simple query. 1 means some error, 2 is not implemented. The data source should have logged the specific error already.

DATASRC_QUERY_SYNTH_CNAME synthesizing CNAME from DNAME on '%1'

-Debug information. While answering a query, a DNAME was met. The DNAME itself -will be returned, but along with it a CNAME for clients which don't understand -DNAMEs will be synthesized. +This is a debug message. While answering a query, a DNAME was encountered. The +DNAME itself will be returned, along with a synthesized CNAME for clients that +do not understand the DNAME RR.

DATASRC_QUERY_TASK_FAIL task failed with %1

The query subtask failed. The reason should have been reported by the subtask already. The code is 1 for error, 2 for not implemented. -

DATASRC_QUERY_TOO_MANY_CNAMES cNAME chain limit exceeded at '%1'

+

DATASRC_QUERY_TOO_MANY_CNAMES CNAME chain limit exceeded at '%1'

A CNAME led to another CNAME and it led to another, and so on. After 16 CNAMEs, the software gave up. Long CNAME chains are discouraged, and this might possibly be a loop as well. Note that some of the CNAMEs might have
@@ -377,7 +1048,7 @@
domain is being looked for now.

DATASRC_QUERY_WILDCARD_FAIL error processing wildcard for '%1'

During an attempt to cover the domain by a wildcard an error happened. The exact kind was hopefully already reported. -

DATASRC_QUERY_WILDCARD_PROVENX_FAIL unable to prove nonexistence of '%1' (%2)

+

DATASRC_QUERY_WILDCARD_PROVE_NX_FAIL unable to prove nonexistence of '%1' (%2)

While processing a wildcard, it wasn't possible to prove nonexistence of the given domain or record. The code is 1 for error and 2 for not implemented.

DATASRC_QUERY_WILDCARD_REFERRAL unable to find referral info for '%1' (%2)

@@ -385,15 +1056,21 @@
While processing a wildcard, a referral was encountered, but it wasn't possible to get enough information for it. The code is 1 for error, 2 for not implemented.

DATASRC_SQLITE_CLOSE closing SQLite database

Debug information. The SQLite data source is closing the database file. -

DATASRC_SQLITE_CREATE sQLite data source created

+

DATASRC_SQLITE_CONNCLOSE Closing sqlite database

+The database file is no longer needed and is being closed. +

DATASRC_SQLITE_CONNOPEN Opening sqlite database file '%1'

+The database file is being opened so it can start providing data. +

DATASRC_SQLITE_CREATE SQLite data source created

Debug information. An instance of SQLite data source is being created. -

DATASRC_SQLITE_DESTROY sQLite data source destroyed

+

DATASRC_SQLITE_DESTROY SQLite data source destroyed

Debug information. An instance of SQLite data source is being destroyed. +

DATASRC_SQLITE_DROPCONN SQLite3Database is being deinitialized

+The object around a database connection is being destroyed.

DATASRC_SQLITE_ENCLOSURE looking for zone containing '%1'

-Debug information. The SQLite data source is trying to identify, which zone +Debug information. The SQLite data source is trying to identify which zone should hold this domain. -

DATASRC_SQLITE_ENCLOSURE_NOTFOUND no zone contains it

-Debug information. The last SQLITE_ENCLOSURE query was unsuccessful, there's +

DATASRC_SQLITE_ENCLOSURE_NOT_FOUND no zone contains '%1'

+Debug information. The last SQLITE_ENCLOSURE query was unsuccessful; there's no such zone in our data.

DATASRC_SQLITE_FIND looking for RRset '%1/%2'

Debug information. The SQLite data source is looking up a resource record
@@ -417,7 +1094,7 @@
and type in the database.
Debug information. The SQLite data source is identifying if this domain is a referral and where it goes.

DATASRC_SQLITE_FINDREF_BAD_CLASS class mismatch looking for referral ('%1' and '%2')

The SQLite data source was trying to identify if there's a referral, but it contains a different class than the query was for.

DATASRC_SQLITE_FIND_BAD_CLASS class mismatch looking for an RRset ('%1' and '%2')

The SQLite data source was looking up an RRset, but the data source contains
@@ -428,21 +1105,30 @@
source.

DATASRC_SQLITE_FIND_NSEC3_NO_ZONE no such zone '%1'

The SQLite data source was asked to provide an NSEC3 record for the given zone, but it doesn't contain that zone. +

DATASRC_SQLITE_NEWCONN SQLite3Database is being initialized

+A wrapper object to hold database connection is being initialized.

DATASRC_SQLITE_OPEN opening SQLite database '%1'

Debug information. The SQLite data source is loading an SQLite database in the provided file.

DATASRC_SQLITE_PREVIOUS looking for name previous to '%1'

-Debug information. We're trying to look up name preceding the supplied one. +This is a debug message. The name given was not found, so the program +is searching for the next name higher up the hierarchy (e.g. if +www.example.com were queried for and not found, the software searches +for the "previous" name, example.com).

DATASRC_SQLITE_PREVIOUS_NO_ZONE no zone containing '%1'

-The SQLite data source tried to identify name preceding this one. But this -one is not contained in any zone in the data source. +The name given was not found, so the program is searching for the next +name higher up the hierarchy (e.g. if www.example.com were queried +for and not found, the software searches for the "previous" name, +example.com). However, this name is not contained in any zone in the +data source. This is an error since it indicates a problem in the earlier +processing of the query.

DATASRC_SQLITE_SETUP setting up SQLite database

The database for the SQLite data source was found empty. It is assumed this is the first run, so it is being initialized with the current schema. It will still contain no data, but it will be ready for use. +

DATASRC_STATIC_BAD_CLASS static data source can handle CH only

-For some reason, someone asked the static data source a query that is not in -the CH class. +

DATASRC_STATIC_CLASS_NOT_CH static data source can handle CH class only

+An error message indicating that a query requesting an RR for a class other +than CH was sent to the static data source (which only handles CH queries).

DATASRC_STATIC_CREATE creating the static datasource

Debug information. The static data source (the one holding stuff like version.bind) is being created.
@@ -452,142 +1138,229 @@
data source.

DATASRC_UNEXPECTED_QUERY_STATE unexpected query state

This indicates a programming error. An internal task of unknown type was generated. -

LOGIMPL_ABOVEDBGMAX debug level of %1 is too high and will be set to the maximum of %2

-A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is above the maximum allowed value and has -been reduced to that value. -

LOGIMPL_BADDEBUG debug string is '%1': must be of the form DEBUGn

-The string indicating the extended logging level (used by the underlying -logger implementation code) is not of the stated form. In particular, -it starts DEBUG but does not end with an integer. -

LOGIMPL_BELOWDBGMIN debug level of %1 is too low and will be set to the minimum of %2

-A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is below the minimum allowed value and has -been increased to that value. -

MSG_BADDESTINATION unrecognized log destination: %1

+

LOGIMPL_ABOVE_MAX_DEBUG debug level of %1 is too high and will be set to the maximum of %2

+A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is above the maximum allowed value and has +been reduced to that value. The appearance of this message may indicate +a programming error - please submit a bug report. +

LOGIMPL_BAD_DEBUG_STRING debug string '%1' has invalid format

+A message from the interface to the underlying logger implementation +reporting that an internally-created string used to set the debug level +is not of the correct format (it should be of the form DEBUGn, where n +is an integer, e.g. DEBUG22). The appearance of this message indicates +a programming error - please submit a bug report. +

LOGIMPL_BELOW_MIN_DEBUG debug level of %1 is too low and will be set to the minimum of %2

+A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is below the minimum allowed value and has +been increased to that value. The appearance of this message may indicate +a programming error - please submit a bug report. +
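The three LOGIMPL_ messages above describe the same small piece of processing; a sketch of it in Python is shown below (the 0-99 range and the function itself are assumptions made for illustration, not the logging library's actual code):

    import re

    # Assumed limits; the real values are defined by the logging implementation.
    MIN_DEBUG_LEVEL = 0
    MAX_DEBUG_LEVEL = 99

    def debug_level_from_string(s):
        """Parse an internally-created 'DEBUGn' string and clamp the level."""
        m = re.fullmatch(r"DEBUG(-?\d+)", s)
        if m is None:
            # corresponds to LOGIMPL_BAD_DEBUG_STRING
            raise ValueError("debug string '%s' has invalid format" % s)
        level = int(m.group(1))
        if level > MAX_DEBUG_LEVEL:
            return MAX_DEBUG_LEVEL   # LOGIMPL_ABOVE_MAX_DEBUG
        if level < MIN_DEBUG_LEVEL:
            return MIN_DEBUG_LEVEL   # LOGIMPL_BELOW_MIN_DEBUG
        return level

    assert debug_level_from_string("DEBUG22") == 22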

LOG_BAD_DESTINATION unrecognized log destination: %1

A logger destination value was given that was not recognized. The destination should be one of "console", "file", or "syslog". -

MSG_BADSEVERITY unrecognized log severity: %1

+

LOG_BAD_SEVERITY unrecognized log severity: %1

A logger severity value was given that was not recognized. The severity -should be one of "DEBUG", "INFO", "WARN", "ERROR", or "FATAL". -

MSG_BADSTREAM bad log console output stream: %1

-A log console output stream was given that was not recognized. The -output stream should be one of "stdout", or "stderr" -

MSG_DUPLNS line %1: duplicate $NAMESPACE directive found

-When reading a message file, more than one $NAMESPACE directive was found. In -this version of the code, such a condition is regarded as an error and the -read will be abandoned. -

MSG_DUPMSGID duplicate message ID (%1) in compiled code

-Indicative of a programming error, when it started up, BIND10 detected that -the given message ID had been registered by one or more modules. (All message -IDs should be unique throughout BIND10.) This has no impact on the operation -of the server other that erroneous messages may be logged. (When BIND10 loads -the message IDs (and their associated text), if a duplicate ID is found it is -discarded. However, when the module that supplied the duplicate ID logs that -particular message, the text supplied by the module that added the original -ID will be output - something that may bear no relation to the condition being -logged. -

MSG_IDNOTFND could not replace message text for '%1': no such message

+should be one of "DEBUG", "INFO", "WARN", "ERROR", "FATAL" or "NONE". +

LOG_BAD_STREAM bad log console output stream: %1

+Logging has been configured so that output is written to the terminal +(console) but the stream on which it is to be written is not recognised. +Allowed values are "stdout" and "stderr". +

LOG_DUPLICATE_MESSAGE_ID duplicate message ID (%1) in compiled code

+During start-up, BIND 10 detected that the given message identification +had been defined multiple times in the BIND 10 code. This indicates a +programming error; please submit a bug report. +

LOG_DUPLICATE_NAMESPACE line %1: duplicate $NAMESPACE directive found

+When reading a message file, more than one $NAMESPACE directive was found. +(This directive is used to set a C++ namespace when generating header +files during software development.) Such a condition is regarded as an +error and the read will be abandoned. +

LOG_INPUT_OPEN_FAIL unable to open message file %1 for input: %2

+The program was not able to open the specified input message file for +the reason given. +

LOG_INVALID_MESSAGE_ID line %1: invalid message identification '%2'

+An invalid message identification (ID) has been found during the read of +a message file. Message IDs should comprise only alphanumeric characters +and the underscore, and should not start with a digit. +

LOG_NAMESPACE_EXTRA_ARGS line %1: $NAMESPACE directive has too many arguments

+The $NAMESPACE directive in a message file takes a single argument, a +namespace in which all the generated symbol names are placed. This error +is generated when the compiler finds a $NAMESPACE directive with more +than one argument. +

LOG_NAMESPACE_INVALID_ARG line %1: $NAMESPACE directive has an invalid argument ('%2')

+The $NAMESPACE argument in a message file should be a valid C++ namespace. +This message is output if the simple check on the syntax of the string +carried out by the reader fails. +

LOG_NAMESPACE_NO_ARGS line %1: no arguments were given to the $NAMESPACE directive

+The $NAMESPACE directive in a message file takes a single argument, +a C++ namespace in which all the generated symbol names are placed. +This error is generated when the compiler finds a $NAMESPACE directive +with no arguments. +

LOG_NO_MESSAGE_ID line %1: message definition line found without a message ID

+Within a message file, messages are defined by lines starting with a "%". +The rest of the line should comprise the message ID and text describing +the message. This error indicates the message compiler found a line in +the message file comprising just the "%" and nothing else. +

LOG_NO_MESSAGE_TEXT line %1: line found containing a message ID ('%2') and no text

+Within a message file, messages are defined by lines starting with a "%". +The rest of the line should comprise the message ID and text describing +the message. This error indicates the message compiler found a line +in the message file comprising just the "%" and the message identification, +but no text. +
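For reference, a fragment of a well-formed message file obeying the rules described in these entries might look like the following (the namespace, message ID and text are invented for illustration; the layout is inferred from the rules above):

    $NAMESPACE isc::example
    % EXAMPLE_STARTED example module started with %1 worker threads
    The explanatory text for the message follows the "%" line, in the same
    way as the descriptions that make up the entries in this file.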

LOG_NO_SUCH_MESSAGE could not replace message text for '%1': no such message

During start-up a local message file was read. A line with the listed message identification was found in the file, but the identification is not one contained in the compiled-in message dictionary. This message may appear a number of times in the file, once for every such unknown message identification.

-This message may appear a number of times in the file, once for every such -unknown message identification. -

MSG_INVMSGID line %1: invalid message identification '%2'

-The concatenation of the prefix and the message identification is used as -a symbol in the C++ module; as such it may only contain -

MSG_NOMSGID line %1: message definition line found without a message ID

-Message definition lines are lines starting with a "%". The rest of the line -should comprise the message ID and text describing the message. This error -indicates the message compiler found a line in the message file comprising -just the "%" and nothing else. -

MSG_NOMSGTXT line %1: line found containing a message ID ('%2') and no text

-Message definition lines are lines starting with a "%". The rest of the line -should comprise the message ID and text describing the message. This error -is generated when a line is found in the message file that contains the -leading "%" and the message identification but no text. -

MSG_NSEXTRARG line %1: $NAMESPACE directive has too many arguments

-The $NAMESPACE directive takes a single argument, a namespace in which all the -generated symbol names are placed. This error is generated when the -compiler finds a $NAMESPACE directive with more than one argument. -

MSG_NSINVARG line %1: $NAMESPACE directive has an invalid argument ('%2')

-The $NAMESPACE argument should be a valid C++ namespace. The reader does a -cursory check on its validity, checking that the characters in the namespace -are correct. The error is generated when the reader finds an invalid -character. (Valid are alphanumeric characters, underscores and colons.) -

MSG_NSNOARG line %1: no arguments were given to the $NAMESPACE directive

-The $NAMESPACE directive takes a single argument, a namespace in which all the -generated symbol names are placed. This error is generated when the -compiler finds a $NAMESPACE directive with no arguments. -

MSG_OPENIN unable to open message file %1 for input: %2

-The program was not able to open the specified input message file for the -reason given. -

MSG_OPENOUT unable to open %1 for output: %2

-The program was not able to open the specified output file for the reason -given. -

MSG_PRFEXTRARG line %1: $PREFIX directive has too many arguments

-The $PREFIX directive takes a single argument, a prefix to be added to the -symbol names when a C++ .h file is created. This error is generated when the -compiler finds a $PREFIX directive with more than one argument. -

MSG_PRFINVARG line %1: $PREFIX directive has an invalid argument ('%2')

-The $PREFIX argument is used in a symbol name in a C++ header file. As such, -it must adhere to restrictions on C++ symbol names (e.g. may only contain -alphanumeric characters or underscores, and may nor start with a digit). -A $PREFIX directive was found with an argument (given in the message) that -violates those restictions. -

MSG_RDLOCMES reading local message file %1

-This is an informational message output by BIND10 when it starts to read a -local message file. (A local message file may replace the text of one of more -messages; the ID of the message will not be changed though.) -

MSG_READERR error reading from message file %1: %2

+There may be several reasons why this message may appear: +

+- The message ID has been mis-spelled in the local message file. +

+- The program outputting the message may not use that particular message +(e.g. it originates in a module not used by the program.) +

+- The local file was written for an earlier version of the BIND 10 software +and the later version no longer generates that message. +

+Whatever the reason, there is no impact on the operation of BIND 10. +

LOG_OPEN_OUTPUT_FAIL unable to open %1 for output: %2

+Originating within the logging code, the program was not able to open +the specified output file for the reason given. +

LOG_PREFIX_EXTRA_ARGS line %1: $PREFIX directive has too many arguments

+Within a message file, the $PREFIX directive takes a single argument, +a prefix to be added to the symbol names when a C++ file is created. +This error is generated when the compiler finds a $PREFIX directive with +more than one argument. +

+Note: the $PREFIX directive is deprecated and will be removed in a future +version of BIND 10. +

LOG_PREFIX_INVALID_ARG line %1: $PREFIX directive has an invalid argument ('%2')

+Within a message file, the $PREFIX directive takes a single argument, +a prefix to be added to the symbol names when a C++ file is created. +As such, it must adhere to restrictions on C++ symbol names (e.g. may +only contain alphanumeric characters or underscores, and may not start +with a digit). A $PREFIX directive was found with an argument (given +in the message) that violates those restrictions. +

+Note: the $PREFIX directive is deprecated and will be removed in a future +version of BIND 10. +

LOG_READING_LOCAL_FILE reading local message file %1

+This is an informational message output by BIND 10 when it starts to read +a local message file. (A local message file may replace the text of +one or more messages; the ID of the message will not be changed though.) +

LOG_READ_ERROR error reading from message file %1: %2

The specified error was encountered reading from the named message file. -

MSG_UNRECDIR line %1: unrecognised directive '%2'

-A line starting with a dollar symbol was found, but the first word on the line -(shown in the message) was not a recognised message compiler directive. -

MSG_WRITERR error writing to %1: %2

-The specified error was encountered by the message compiler when writing to -the named output file. -

NSAS_INVRESPSTR queried for %1 but got invalid response

-This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for a RR for the -specified nameserver but received an invalid response. Either the success -function was called without a DNS message or the message was invalid on some -way. (In the latter case, the error should have been picked up elsewhere in -the processing logic, hence the raising of the error here.) -

NSAS_INVRESPTC queried for %1 RR of type/class %2/%3, received response %4/%5

-This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for the given RR -type and class, but instead received an answer with the given type and class. -

NSAS_LOOKUPCANCEL lookup for zone %1 has been cancelled

-A debug message, this is output when a NSAS (nameserver address store - -part of the resolver) lookup for a zone has been cancelled. -

NSAS_LOOKUPZONE searching NSAS for nameservers for zone %1

-A debug message, this is output when a call is made to the nameserver address -store (part of the resolver) to obtain the nameservers for the specified zone. -

NSAS_NSADDR asking resolver to obtain A and AAAA records for %1

-A debug message, the NSAS (nameserver address store - part of the resolver) is -making a callback into the resolver to retrieve the address records for the -specified nameserver. -

NSAS_NSLKUPFAIL failed to lookup any %1 for %2

-A debug message, the NSAS (nameserver address store - part of the resolver) -has been unable to retrieve the specified resource record for the specified -nameserver. This is not necessarily a problem - the nameserver may be -unreachable, in which case the NSAS will try other nameservers in the zone. -

NSAS_NSLKUPSUCC found address %1 for %2

-A debug message, the NSAS (nameserver address store - part of the resolver) -has retrieved the given address for the specified nameserver through an -external query. -

NSAS_SETRTT reporting RTT for %1 as %2; new value is now %3

+

LOG_UNRECOGNISED_DIRECTIVE line %1: unrecognised directive '%2'

+Within a message file, a line starting with a dollar symbol was found +(indicating the presence of a directive) but the first word on the line +(shown in the message) was not recognised. +

LOG_WRITE_ERROR error writing to %1: %2

+The specified error was encountered by the message compiler when writing +to the named output file. +

NOTIFY_OUT_INVALID_ADDRESS invalid address %1#%2: %3

+The notify_out library tried to send a notify message to the given +address, but it appears to be an invalid address. The configuration +for secondary nameservers might contain a typographic error, or a +different BIND 10 module has forgotten to validate its data before +sending this module a notify command. As such, this should normally +not happen, and points to an oversight in a different module. +

NOTIFY_OUT_REPLY_BAD_OPCODE bad opcode in notify reply from %1#%2: %3

+The notify_out library sent a notify message to the nameserver at +the given address, but the response did not have the opcode set to +NOTIFY. The opcode in the response is printed. Since there was a +response, no more notifies will be sent to this server for this +notification event. +

NOTIFY_OUT_REPLY_BAD_QID bad QID in notify reply from %1#%2: got %3, should be %4

+The notify_out library sent a notify message to the nameserver at +the given address, but the query id in the response does not match +the one we sent. Since there was a response, no more notifies will +be sent to this server for this notification event. +

NOTIFY_OUT_REPLY_BAD_QUERY_NAME bad query name in notify reply from %1#%2: got %3, should be %4

+The notify_out library sent a notify message to the nameserver at +the given address, but the query name in the response does not match +the one we sent. Since there was a response, no more notifies will +be sent to this server for this notification event. +

NOTIFY_OUT_REPLY_QR_NOT_SET QR flags set to 0 in reply to notify from %1#%2

+The notify_out library sent a notify message to the nameserver at the +given address, but the reply did not have the QR bit set to one. +Since there was a response, no more notifies will be sent to this +server for this notification event. +
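Taken together, the NOTIFY_OUT_REPLY_ messages above describe a series of sanity checks applied to each reply. A rough Python sketch of those checks is given below; the 'sent' and 'reply' objects and their attributes are hypothetical stand-ins, not the real notify_out types:

    # Illustrative reply validation; any failure ends notifies for this event.
    def notify_reply_ok(sent, reply):
        """Return (ok, reason) for a notify reply, mirroring the messages above."""
        if reply.opcode != "NOTIFY":
            return False, "bad opcode"            # NOTIFY_OUT_REPLY_BAD_OPCODE
        if not reply.qr:
            return False, "QR flag not set"       # NOTIFY_OUT_REPLY_QR_NOT_SET
        if reply.qid != sent.qid:
            return False, "bad QID"               # NOTIFY_OUT_REPLY_BAD_QID
        if reply.question_name != sent.question_name:
            return False, "bad query name"        # NOTIFY_OUT_REPLY_BAD_QUERY_NAME
        return True, ""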

NOTIFY_OUT_REPLY_UNCAUGHT_EXCEPTION uncaught exception: %1

+There was an uncaught exception in the handling of a notify reply +message, either in the message parser, or while trying to extract data +from the parsed message. The error is printed, and notify_out will +treat the response as a bad message, but this does point to a +programming error, since all exceptions should have been caught +explicitly. Please file a bug report. Since there was a response, +no more notifies will be sent to this server for this notification +event. +

NOTIFY_OUT_RETRY_EXCEEDED notify to %1#%2: number of retries (%3) exceeded

+The maximum number of retries for the notify target has been exceeded. +Either the address of the secondary nameserver is wrong, or it is not +responding. +

NOTIFY_OUT_SENDING_NOTIFY sending notify to %1#%2

+A notify message is sent to the secondary nameserver at the given +address. +

NOTIFY_OUT_SOCKET_ERROR socket error sending notify to %1#%2: %3

+There was a network error while trying to send a notify message to +the given address. The address might be unreachable. The socket +error is printed and should provide more information. +

NOTIFY_OUT_SOCKET_RECV_ERROR socket error reading notify reply from %1#%2: %3

+There was a network error while trying to read a notify reply +message from the given address. The socket error is printed and should +provide more information. +

NOTIFY_OUT_TIMEOUT retry notify to %1#%2

+The notify message to the given address (noted as address#port) has +timed out, and the message will be resent until the max retry limit +is reached. +

NSAS_FIND_NS_ADDRESS asking resolver to obtain A and AAAA records for %1

+A debug message issued when the NSAS (nameserver address store - part +of the resolver) is making a callback into the resolver to retrieve the +address records for the specified nameserver. +

NSAS_FOUND_ADDRESS found address %1 for %2

+A debug message issued when the NSAS (nameserver address store - part +of the resolver) has retrieved the given address for the specified +nameserver through an external query. +

NSAS_INVALID_RESPONSE queried for %1 but got invalid response

+The NSAS (nameserver address store - part of the resolver) made a query +for an RR for the specified nameserver but received an invalid response. +Either the success function was called without a DNS message or the +message was invalid in some way. (In the latter case, the error should +have been picked up elsewhere in the processing logic, hence the raising +of the error here.) +

+This message indicates an internal error in the NSAS. Please raise a +bug report. +

NSAS_LOOKUP_CANCEL lookup for zone %1 has been canceled

+A debug message issued when an NSAS (nameserver address store - part of +the resolver) lookup for a zone has been canceled. +

NSAS_NS_LOOKUP_FAIL failed to lookup any %1 for %2

+A debug message issued when the NSAS (nameserver address store - part of +the resolver) has been unable to retrieve the specified resource record +for the specified nameserver. This is not necessarily a problem - the +nameserver may be unreachable, in which case the NSAS will try other +nameservers in the zone. +

NSAS_SEARCH_ZONE_NS searching NSAS for nameservers for zone %1

+A debug message output when a call is made to the NSAS (nameserver +address store - part of the resolver) to obtain the nameservers for +the specified zone. +

NSAS_UPDATE_RTT update RTT for %1: was %2 ms, is now %3 ms

A NSAS (nameserver address store - part of the resolver) debug message reporting the update of a round-trip time (RTT) for a query made to the specified nameserver. The RTT has been updated using the value given and the new RTT is displayed. (The RTT is subject to a calculation that damps out sudden changes. As a result, the new RTT used by the NSAS in future decisions of which nameserver to use is not necessarily equal to the RTT reported.)
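The damping calculation itself is not spelled out here; purely as an illustration of the behaviour the message describes (the formula and weighting factor below are assumptions, not the NSAS algorithm), an exponentially weighted average moves the stored value only part-way towards each new sample:

    # Illustrative smoothing only; ALPHA and the formula are assumed values.
    ALPHA = 0.7   # weight kept from the previous RTT estimate

    def update_rtt(old_rtt_ms, measured_rtt_ms):
        """Damp sudden changes: move the stored RTT only part-way to the new sample."""
        return ALPHA * old_rtt_ms + (1.0 - ALPHA) * measured_rtt_ms

    # An old estimate of 20 ms and a one-off spike of 200 ms gives about 74 ms,
    # rather than jumping straight to 200 ms.
    assert round(update_rtt(20, 200)) == 74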

NSAS_WRONG_ANSWER queried for %1 RR of type/class %2/%3, received response %4/%5

+A NSAS (nameserver address store - part of the resolver) made a query for +a resource record of a particular type and class, but instead received +an answer with a different type and class (both are given in the message). +

+This message indicates an internal error in the NSAS. Please raise a +bug report.

RESLIB_ANSWER answer received in response to query for <%1>

A debug message recording that an answer has been received to an upstream query for the specified question. Previous debug messages will have indicated the server to which the question was sent.

RESLIB_DEEPEST did not find <%1> in cache, deepest delegation found is %2

A debug message, a cache lookup did not find the specified <name, class, type> tuple in the cache; instead, the deepest delegation found is indicated. -

RESLIB_FOLLOWCNAME following CNAME chain to <%1>

+

RESLIB_FOLLOW_CNAME following CNAME chain to <%1>

A debug message, a CNAME response was received and another query is being issued for the <name, class, type> tuple. -

RESLIB_LONGCHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded

+

RESLIB_LONG_CHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded

A debug message recording that a CNAME response has been received to an upstream query for the specified question (previous debug messages will have indicated the server to which the question was sent). However, receipt of this CNAME has meant that the resolver has exceeded the CNAME chain limit (a CNAME chain is where one CNAME points to another) and so an error is being returned. -

RESLIB_NONSRRSET no NS RRSet in referral response received to query for <%1>

+

RESLIB_NO_NS_RRSET no NS RRSet in referral response received to query for <%1>

A debug message, this indicates that a response was received for the specified query and was categorized as a referral. However, the received message did not contain any NS RRsets. This may indicate a programming error in the response classification code.

RESLIB_NSASLOOK looking up nameserver for zone %1 in the NSAS

+

RESLIB_NSAS_LOOKUP looking up nameserver for zone %1 in the NSAS

A debug message, the RunningQuery object is querying the NSAS for the nameservers for the specified zone. -

RESLIB_NXDOMRR NXDOMAIN/NXRRSET received in response to query for <%1>

+

RESLIB_NXDOM_NXRR NXDOMAIN/NXRRSET received in response to query for <%1>

A debug message recording that either a NXDOMAIN or an NXRRSET response has been received to an upstream query for the specified question. Previous debug messages will have indicated the server to which the question was sent.

RESLIB_PROTOCOL protocol error in answer for %1: %3

A debug message indicating that a protocol error was received. As there are no retries left, an error will be reported. -

RESLIB_PROTOCOLRTRY protocol error in answer for %1: %2 (retries left: %3)

+

RESLIB_PROTOCOL_RETRY protocol error in answer for %1: %2 (retries left: %3)

A debug message indicating that a protocol error was received and that the resolver is repeating the query to the same nameserver. After this repeated query, there will be the indicated number of retries left. -

RESLIB_RCODERR RCODE indicates error in response to query for <%1>

+

RESLIB_RCODE_ERR RCODE indicates error in response to query for <%1>

A debug message, the response to the specified query indicated an error that is not covered by a specific code path. A SERVFAIL will be returned. -

RESLIB_REFERRAL referral received in response to query for <%1>

-A debug message recording that a referral response has been received to an -upstream query for the specified question. Previous debug messages will -have indicated the server to which the question was sent. -

RESLIB_REFERZONE referred to zone %1

-A debug message indicating that the last referral message was to the specified -zone. -

RESLIB_RESCAFND found <%1> in the cache (resolve() instance %2)

+

RESLIB_RECQ_CACHE_FIND found <%1> in the cache (resolve() instance %2)

This is a debug message and indicates that a RecursiveQuery object found the specified <name, class, type> tuple in the cache. The instance number at the end of the message indicates which of the two resolve() methods has been called. -

RESLIB_RESCANOTFND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2)

+

RESLIB_RECQ_CACHE_NO_FIND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2)

This is a debug message and indicates that the lookup in the cache made by the RecursiveQuery::resolve() method did not find an answer, so a new RunningQuery object has been created to resolve the question. The instance number at the end of the message indicates which of the two resolve() methods has been called. +

RESLIB_REFERRAL referral received in response to query for <%1>

+A debug message recording that a referral response has been received to an +upstream query for the specified question. Previous debug messages will +have indicated the server to which the question was sent. +

RESLIB_REFER_ZONE referred to zone %1

+A debug message indicating that the last referral message was to the specified +zone.

RESLIB_RESOLVE asked to resolve <%1> (resolve() instance %2)

A debug message, the RecursiveQuery::resolve method has been called to resolve the specified <name, class, type> tuple. The first action will be to lookup the specified tuple in the cache. The instance number at the end of the message indicates which of the two resolve() methods has been called. -

RESLIB_RRSETFND found single RRset in the cache when querying for <%1> (resolve() instance %2)

+

RESLIB_RRSET_FOUND found single RRset in the cache when querying for <%1> (resolve() instance %2)

A debug message, indicating that when RecursiveQuery::resolve queried the cache, a single RRset was found which was put in the answer. The instance number at the end of the message indicates which of the two resolve() methods has been called.

RESLIB_RTT round-trip time of last query calculated as %1 ms

A debug message giving the round-trip time of the last query and response. -

RESLIB_RUNCAFND found <%1> in the cache

+

RESLIB_RUNQ_CACHE_FIND found <%1> in the cache

This is a debug message and indicates that a RunningQuery object found the specified <name, class, type> tuple in the cache. -

RESLIB_RUNCALOOK looking up up <%1> in the cache

+

RESLIB_RUNQ_CACHE_LOOKUP looking up up <%1> in the cache

This is a debug message and indicates that a RunningQuery object has made a call to its doLookup() method to look up the specified <name, class, type> tuple, the first action of which will be to examine the cache. -

RESLIB_RUNQUFAIL failure callback - nameservers are unreachable

+

RESLIB_RUNQ_FAIL failure callback - nameservers are unreachable

A debug message indicating that a RunningQuery's failure callback has been called because all nameservers for the zone in question are unreachable. -

RESLIB_RUNQUSUCC success callback - sending query to %1

+

RESLIB_RUNQ_SUCCESS success callback - sending query to %1

A debug message indicating that a RunningQuery's success callback has been called because a nameserver has been found, and that a query is being sent to the specified nameserver. -

RESLIB_TESTSERV setting test server to %1(%2)

-This is an internal debugging message and is only generated in unit tests. -It indicates that all upstream queries from the resolver are being routed to -the specified server, regardless of the address of the nameserver to which -the query would normally be routed. As it should never be seen in normal -operation, it is a warning message instead of a debug message. -

RESLIB_TESTUPSTR sending upstream query for <%1> to test server at %2

+

RESLIB_TEST_SERVER setting test server to %1(%2)

+This is a warning message only generated in unit tests. It indicates +that all upstream queries from the resolver are being routed to the +specified server, regardless of the address of the nameserver to which +the query would normally be routed. If seen during normal operation, +please submit a bug report. +

RESLIB_TEST_UPSTREAM sending upstream query for <%1> to test server at %2

This is a debug message and should only be seen in unit tests. A query for the specified <name, class, type> tuple is being sent to a test nameserver whose address is given in the message.

RESLIB_TIMEOUT query <%1> to %2 timed out

-A debug message indicating that the specified query has timed out and as -there are no retries left, an error will be reported. -

RESLIB_TIMEOUTRTRY query <%1> to %2 timed out, re-trying (retries left: %3)

+A debug message indicating that the specified upstream query has timed out and +there are no retries left. +

RESLIB_TIMEOUT_RETRY query <%1> to %2 timed out, re-trying (retries left: %3)

A debug message indicating that the specified query has timed out and that the resolver is repeating the query to the same nameserver. After this repeated query, there will be the indicated number of retries left.
@@ -699,143 +1472,610 @@
gives no cause for concern.

RESLIB_UPSTREAM sending upstream query for <%1> to %2

A debug message indicating that a query for the specified <name, class, type> tuple is being sent to a nameserver whose address is given in the message. -

RESOLVER_AXFRTCP AXFR request received over TCP

-A debug message, the resolver received a NOTIFY message over TCP. The server -cannot process it and will return an error message to the sender with the -RCODE set to NOTIMP. -

RESOLVER_AXFRUDP AXFR request received over UDP

-A debug message, the resolver received a NOTIFY message over UDP. The server -cannot process it (and in any case, an AXFR request should be sent over TCP) -and will return an error message to the sender with the RCODE set to FORMERR. -

RESOLVER_CLTMOSMALL client timeout of %1 is too small

-An error indicating that the configuration value specified for the query -timeout is too small. -

RESOLVER_CONFIGCHAN configuration channel created

-A debug message, output when the resolver has successfully established a -connection to the configuration channel. -

RESOLVER_CONFIGERR error in configuration: %1

-An error was detected in a configuration update received by the resolver. This -may be in the format of the configuration message (in which case this is a -programming error) or it may be in the data supplied (in which case it is -a user error). The reason for the error, given as a parameter in the message, -will give more details. -

RESOLVER_CONFIGLOAD configuration loaded

-A debug message, output when the resolver configuration has been successfully -loaded. -

RESOLVER_CONFIGUPD configuration updated: %1

-A debug message, the configuration has been updated with the specified -information. +

RESOLVER_AXFR_TCP AXFR request received over TCP

+This is a debug message output when the resolver received a request for +an AXFR (full transfer of a zone) over TCP. Only authoritative servers +are able to handle AXFR requests, so the resolver will return an error +message to the sender with the RCODE set to NOTIMP. +

RESOLVER_AXFR_UDP AXFR request received over UDP

+This is a debug message output when the resolver received a request for +an AXFR (full transfer of a zone) over UDP. Only authoritative servers +are able to handle AXFR requests (and in any case, an AXFR request should +be sent over TCP), so the resolver will return an error message to the +sender with the RCODE set to NOTIMP. +

RESOLVER_CLIENT_TIME_SMALL client timeout of %1 is too small

+During the update of the resolver's configuration parameters, the value +of the client timeout was found to be too small. The configuration +update was abandoned and the parameters were not changed. +

RESOLVER_CONFIG_CHANNEL configuration channel created

+This is a debug message output when the resolver has successfully +established a connection to the configuration channel. +

RESOLVER_CONFIG_ERROR error in configuration: %1

+An error was detected in a configuration update received by the +resolver. This may be in the format of the configuration message (in +which case this is a programming error) or it may be in the data supplied +(in which case it is a user error). The reason for the error, included +in the message, will give more details. The configuration update is +not applied and the resolver parameters were not changed. +

RESOLVER_CONFIG_LOADED configuration loaded

+This is a debug message output when the resolver configuration has been +successfully loaded. +

RESOLVER_CONFIG_UPDATED configuration updated: %1

+This is a debug message output when the resolver configuration is being +updated with the specified information.

RESOLVER_CREATED main resolver object created

-A debug message, output when the Resolver() object has been created. -

RESOLVER_DNSMSGRCVD DNS message received: %1

-A debug message, this always precedes some other logging message and is the -formatted contents of the DNS packet that the other message refers to. -

RESOLVER_DNSMSGSENT DNS message of %1 bytes sent: %2

-A debug message, this contains details of the response sent back to the querying -system. +This is a debug message indicating that the main resolver object has +been created. +

RESOLVER_DNS_MESSAGE_RECEIVED DNS message received: %1

+This is a debug message from the resolver listing the contents of a +received DNS message. +

RESOLVER_DNS_MESSAGE_SENT DNS message of %1 bytes sent: %2

+This is a debug message containing details of the response returned by +the resolver to the querying system.

RESOLVER_FAILED resolver failed, reason: %1

-This is an error message output when an unhandled exception is caught by the -resolver. All it can do is to shut down. -

RESOLVER_FWDADDR setting forward address %1(%2)

-This message may appear multiple times during startup, and it lists the -forward addresses used by the resolver when running in forwarding mode. -

RESOLVER_FWDQUERY processing forward query

-The received query has passed all checks and is being forwarded to upstream +This is an error message output when an unhandled exception is caught +by the resolver. After this, the resolver will shut itself down. +Please submit a bug report. +

RESOLVER_FORWARD_ADDRESS setting forward address %1(%2)

+If the resolver is running in forward mode, this message will appear +during startup to list the forward address. If multiple addresses are +specified, it will appear once for each address. +
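A minimal sketch of how forwarding might be configured (the
"Resolver/forward_addresses" item name is an assumption used only for
illustration, not taken from this document):

> config set Resolver/forward_addresses [{"address": "192.0.2.1", "port": 53}]
> config commit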

RESOLVER_FORWARD_QUERY processing forward query

+This is a debug message indicating that a query received by the resolver +has passed a set of checks (message is well-formed, it is allowed by the +ACL, it is a supported opcode, etc.) and is being forwarded to upstream servers. -

RESOLVER_HDRERR message received, exception when processing header: %1

-A debug message noting that an exception occurred during the processing of -a received packet. The packet has been dropped. +

RESOLVER_HEADER_ERROR message received, exception when processing header: %1

+This is a debug message from the resolver noting that an exception +occurred during the processing of a received packet. The packet has +been dropped.

RESOLVER_IXFR IXFR request received

-The resolver received a NOTIFY message over TCP. The server cannot process it -and will return an error message to the sender with the RCODE set to NOTIMP. -

RESOLVER_LKTMOSMALL lookup timeout of %1 is too small

-An error indicating that the configuration value specified for the lookup -timeout is too small. -

RESOLVER_NFYNOTAUTH NOTIFY arrived but server is not authoritative

-The resolver received a NOTIFY message. As the server is not authoritative it -cannot process it, so it returns an error message to the sender with the RCODE -set to NOTAUTH. -

RESOLVER_NORMQUERY processing normal query

-The received query has passed all checks and is being processed by the resolver. -

RESOLVER_NOROOTADDR no root addresses available

-A warning message during startup, indicates that no root addresses have been -set. This may be because the resolver will get them from a priming query. -

RESOLVER_NOTIN non-IN class request received, returning REFUSED message

-A debug message, the resolver has received a DNS packet that was not IN class. -The resolver cannot handle such packets, so is returning a REFUSED response to -the sender. -

RESOLVER_NOTONEQUES query contained %1 questions, exactly one question was expected

-A debug message, the resolver received a query that contained the number of -entires in the question section detailed in the message. This is a malformed -message, as a DNS query must contain only one question. The resolver will -return a message to the sender with the RCODE set to FORMERR. -

RESOLVER_OPCODEUNS opcode %1 not supported by the resolver

-A debug message, the resolver received a message with an unsupported opcode -(it can only process QUERY opcodes). It will return a message to the sender -with the RCODE set to NOTIMP. -

RESOLVER_PARSEERR error parsing received message: %1 - returning %2

-A debug message noting that the resolver received a message and the parsing -of the body of the message failed due to some non-protocol related reason -(although the parsing of the header succeeded). The message parameters give -a textual description of the problem and the RCODE returned. -

RESOLVER_PRINTMSG print message command, aeguments are: %1

-This message is logged when a "print_message" command is received over the -command channel. -

RESOLVER_PROTERR protocol error parsing received message: %1 - returning %2

-A debug message noting that the resolver received a message and the parsing -of the body of the message failed due to some protocol error (although the -parsing of the header succeeded). The message parameters give a textual -description of the problem and the RCODE returned. -

RESOLVER_QUSETUP query setup

-A debug message noting that the resolver is creating a RecursiveQuery object. -

RESOLVER_QUSHUT query shutdown

-A debug message noting that the resolver is destroying a RecursiveQuery object. -

RESOLVER_QUTMOSMALL query timeout of %1 is too small

-An error indicating that the configuration value specified for the query -timeout is too small. +This is a debug message indicating that the resolver received a request +for an IXFR (incremental transfer of a zone). Only authoritative servers +are able to handle IXFR requests, so the resolver will return an error +message to the sender with the RCODE set to NOTIMP. +

RESOLVER_LOOKUP_TIME_SMALL lookup timeout of %1 is too small

+During the update of the resolver's configuration parameters, the value +of the lookup timeout was found to be too small. The configuration +update will not be applied. +

RESOLVER_MESSAGE_ERROR error parsing received message: %1 - returning %2

+This is a debug message noting that parsing of the body of a received +message by the resolver failed due to some error (although the parsing of +the header succeeded). The message parameters give a textual description +of the problem and the RCODE returned. +

RESOLVER_NEGATIVE_RETRIES negative number of retries (%1) specified in the configuration

+This error is issued when a resolver configuration update has specified +a negative retry count: only zero or positive values are valid. The +configuration update was abandoned and the parameters were not changed. +

RESOLVER_NON_IN_PACKET non-IN class request received, returning REFUSED message

+This debug message is issued when the resolver has received a DNS packet that +was not IN (Internet) class. The resolver cannot handle such packets, +so it returns a REFUSED response to the sender. +

RESOLVER_NORMAL_QUERY processing normal query

+This is a debug message indicating that the query received by the resolver +has passed a set of checks (message is well-formed, it is allowed by the +ACL, it is a supported opcode, etc.) and is being processed by the resolver. +

RESOLVER_NOTIFY_RECEIVED NOTIFY arrived but server is not authoritative

+The resolver has received a NOTIFY message. As the server is not +authoritative it cannot process it, so it returns an error message to +the sender with the RCODE set to NOTAUTH. +

RESOLVER_NOT_ONE_QUESTION query contained %1 questions, exactly one question was expected

+This debug message indicates that the resolver received a query that +contained the number of entries in the question section detailed in +the message. This is a malformed message, as a DNS query must contain +only one question. The resolver will return a message to the sender +with the RCODE set to FORMERR. +

RESOLVER_NO_ROOT_ADDRESS no root addresses available

+A warning message issued during resolver startup, this indicates that +no root addresses have been set. This may be because the resolver will +get them from a priming query. +

RESOLVER_PARSE_ERROR error parsing received message: %1 - returning %2

+This is a debug message noting that the resolver received a message and +the parsing of the body of the message failed due to some non-protocol +related reason (although the parsing of the header succeeded). +The message parameters give a textual description of the problem and +the RCODE returned. +

RESOLVER_PRINT_COMMAND print message command, arguments are: %1

+This debug message is logged when a "print_message" command is received +by the resolver over the command channel. +

RESOLVER_PROTOCOL_ERROR protocol error parsing received message: %1 - returning %2

+This is a debug message noting that the resolver received a message and +the parsing of the body of the message failed due to some protocol error +(although the parsing of the header succeeded). The message parameters +give a textual description of the problem and the RCODE returned. +

RESOLVER_QUERY_ACCEPTED query accepted: '%1/%2/%3' from %4

+This debug message is produced by the resolver when an incoming query +is accepted in terms of the query ACL. The log message shows the query +in the form of <query name>/<query type>/<query class>, and the client +that sends the query in the form of <Source IP address>#<source port>. +

RESOLVER_QUERY_DROPPED query dropped: '%1/%2/%3' from %4

+This is an informational message that indicates an incoming query has +been dropped by the resolver because of the query ACL. Unlike the +RESOLVER_QUERY_REJECTED case, the server does not return any response. +The log message shows the query in the form of <query name>/<query +type>/<query class>, and the client that sends the query in the form of +<Source IP address>#<source port>. +

RESOLVER_QUERY_REJECTED query rejected: '%1/%2/%3' from %4

+This is an informational message that indicates an incoming query has +been rejected by the resolver because of the query ACL. This results +in a response with an RCODE of REFUSED. The log message shows the query +in the form of <query name>/<query type>/<query class>, and the client +that sends the query in the form of <Source IP address>#<source port>. +
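As a hedged sketch of the kind of ACL behind these three messages (the
"Resolver/query_acl" item name, the "from" key and the overall syntax are
assumptions for illustration, not a statement of the actual interface), a
configuration that accepts local queries and drops everything else might
look like:

> config set Resolver/query_acl [{"action": "ACCEPT", "from": "127.0.0.1"}, {"action": "DROP", "from": "0.0.0.0/0"}]
> config commit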

RESOLVER_QUERY_SETUP query setup

+This is a debug message noting that the resolver is creating a +RecursiveQuery object. +

RESOLVER_QUERY_SHUTDOWN query shutdown

+This is a debug message noting that the resolver is destroying a +RecursiveQuery object. +

RESOLVER_QUERY_TIME_SMALL query timeout of %1 is too small

+During the update of the resolver's configuration parameters, the value +of the query timeout was found to be too small. The configuration +parameters were not changed. +

RESOLVER_RECEIVED_MESSAGE resolver has received a DNS message

+This is a debug message indicating that the resolver has received a +DNS message. Depending on the debug settings, subsequent log output +will indicate the nature of the message.

RESOLVER_RECURSIVE running in recursive mode

-This is an informational message that appears at startup noting that the -resolver is running in recursive mode. -

RESOLVER_RECVMSG resolver has received a DNS message

-A debug message indicating that the resolver has received a message. Depending -on the debug settings, subsequent log output will indicate the nature of the -message. -

RESOLVER_RETRYNEG negative number of retries (%1) specified in the configuration

-An error message indicating that the resolver configuration has specified a -negative retry count. Only zero or positive values are valid. -

RESOLVER_ROOTADDR setting root address %1(%2)

-This message may appear multiple times during startup; it lists the root -addresses used by the resolver. -

RESOLVER_SERVICE service object created

-A debug message, output when the main service object (which handles the -received queries) is created. -

RESOLVER_SETPARAM query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4

-A debug message, lists the parameters associated with the message. These are: +This is an informational message that appears at startup noting that +the resolver is running in recursive mode. +

RESOLVER_SERVICE_CREATED service object created

+This debug message is output when the resolver creates the main service object +(which handles the received queries). +

RESOLVER_SET_PARAMS query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4

+This debug message lists the parameters being set for the resolver. These are: query timeout: the timeout (in ms) used for queries originated by the resolver -to upstream servers. Client timeout: the interval to resolver a query by +to upstream servers. Client timeout: the interval to resolve a query by a client: after this time, the resolver sends back a SERVFAIL to the client -whilst continuing to resolver the query. Lookup timeout: the time at which the +whilst continuing to resolve the query. Lookup timeout: the time at which the resolver gives up trying to resolve a query. Retry count: the number of times the resolver will retry a query to an upstream server if it gets a timeout.

The client and lookup timeouts require a bit more explanation. The -resolution of the clent query might require a large number of queries to +resolution of the client query might require a large number of queries to upstream nameservers. Even if none of these queries timeout, the total time taken to perform all the queries may exceed the client timeout. When this happens, a SERVFAIL is returned to the client, but the resolver continues -with the resolution process. Data received is added to the cache. However, -there comes a time - the lookup timeout - when even the resolve gives up. +with the resolution process; data received is added to the cache. However, +there comes a time - the lookup timeout - when even the resolver gives up. At this point it will wait for pending upstream queries to complete or timeout and drop the query. +
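As a rough illustration of how these four parameters relate to the
configuration, they might be set via bindctl along the following lines
(the item names used here are assumptions for illustration only, not
taken from this document; the timeouts are in milliseconds):

> config set Resolver/timeout_query 2000
> config set Resolver/timeout_client 4000
> config set Resolver/timeout_lookup 30000
> config set Resolver/retries 3
> config commit

With such settings the resolver would retry an upstream query after 2
seconds, send SERVFAIL to the client after 4 seconds while continuing the
resolution, and give up entirely after 30 seconds.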

RESOLVER_SET_QUERY_ACL query ACL is configured

+This debug message is generated when a new query ACL is configured for +the resolver. +

RESOLVER_SET_ROOT_ADDRESS setting root address %1(%2)

+This message gives the address of one of the root servers used by the +resolver. It is output during startup and may appear multiple times, +once for each root server address.

RESOLVER_SHUTDOWN resolver shutdown complete

-This information message is output when the resolver has shut down. +This informational message is output when the resolver has shut down.

RESOLVER_STARTED resolver started

This informational message is output by the resolver when all initialization has been completed and it is entering its main loop.

RESOLVER_STARTING starting resolver with command line '%1'

An informational message, this is output when the resolver starts up. -

RESOLVER_UNEXRESP received unexpected response, ignoring

-A debug message noting that the server has received a response instead of a -query and is ignoring it. +

RESOLVER_UNEXPECTED_RESPONSE received unexpected response, ignoring

+This is a debug message noting that the resolver received a DNS response +packet on the port on which is it listening for queries. The packet +has been ignored. +

RESOLVER_UNSUPPORTED_OPCODE opcode %1 not supported by the resolver

+This is debug message output when the resolver received a message with an +unsupported opcode (it can only process QUERY opcodes). It will return +a message to the sender with the RCODE set to NOTIMP. +

SRVCOMM_ADDRESSES_NOT_LIST the address and port specification is not a list in %1

+This points to an error in configuration. What was supposed to be a list of +IP address/port pairs is not a list at all, but something else. +

SRVCOMM_ADDRESS_FAIL failed to listen on addresses (%1)

+The server failed to bind to one of the address/port pairs it should listen on +according to the configuration, for the reason listed in the message (usually +because the pair is already in use by another service, or because of missing +privileges). The server will try to recover and bind to the address/port pairs +it was listening on before (if any). +

SRVCOMM_ADDRESS_MISSING address specification is missing "address" or "port" element in %1

+This points to an error in configuration. An address specification in the +configuration is missing either an address or port and so cannot be used. The +specification causing the error is given in the message. +

SRVCOMM_ADDRESS_TYPE address specification type is invalid in %1

+This points to an error in configuration. An address specification in the +configuration is malformed. The specification causing the error is given in the +message. A valid specification contains an address part (which must be a string +and must represent a valid IPv4 or IPv6 address) and a port (which must be an +integer in the range valid for TCP/UDP ports on your system). +
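For illustration, a well-formed list of address specifications matching the
description above could look like the following (the "Auth/listen_on" item
name is an assumed example; only the "address" and "port" elements are taken
from the messages above):

> config set Auth/listen_on [{"address": "127.0.0.1", "port": 5300}, {"address": "::1", "port": 5300}]
> config commit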

SRVCOMM_ADDRESS_UNRECOVERABLE failed to recover original addresses also (%2)

+The recovery of old addresses after SRVCOMM_ADDRESS_FAIL also failed for +the reason listed. +

+The condition indicates problems with the server and/or the system on +which it is running. The server will continue running to allow +reconfiguration, but will not be listening on any address or port until +an administrator configures it to do so. +

SRVCOMM_ADDRESS_VALUE address to set: %1#%2

+Debug message. This lists one address and port pair from the set of +addresses the server is going to listen on (there will be one log message +per pair). This appears only after SRVCOMM_SET_LISTEN, but might +be hidden, as it has a higher debug level. +

SRVCOMM_KEYS_DEINIT deinitializing TSIG keyring

+Debug message indicating that the server is deinitializing the TSIG keyring. +

SRVCOMM_KEYS_INIT initializing TSIG keyring

+Debug message indicating that the server is initializing the global TSIG +keyring. This should be seen only at server start. +

SRVCOMM_KEYS_UPDATE updating TSIG keyring

+Debug message indicating that a new keyring is being loaded from the +configuration (either at startup or as a result of a configuration update). +

SRVCOMM_PORT_RANGE port out of valid range (%1 in %2)

+This points to an error in configuration. The port in an address +specification is outside the valid range of 0 to 65535. +

SRVCOMM_SET_LISTEN setting addresses to listen to

+Debug message, noting that the server is about to start listening on a +different set of IP addresses and ports than before. +

STATHTTPD_BAD_OPTION_VALUE bad command line argument: %1

+The stats-httpd module was called with a bad command-line argument +and will not start. +

STATHTTPD_CC_SESSION_ERROR error connecting to message bus: %1

+The stats-httpd module was unable to connect to the BIND 10 command +and control bus. A likely problem is that the message bus daemon +(b10-msgq) is not running. The stats-httpd module will now shut down. +

STATHTTPD_CLOSING closing %1#%2

+The stats-httpd daemon will stop listening for requests on the given +address and port number. +

STATHTTPD_CLOSING_CC_SESSION stopping cc session

+Debug message indicating that the stats-httpd module is disconnecting +from the command and control bus. +

STATHTTPD_HANDLE_CONFIG reading configuration: %1

+The stats-httpd daemon has received new configuration data and will now +process it. The (changed) data is printed. +

STATHTTPD_RECEIVED_SHUTDOWN_COMMAND shutdown command received

+A shutdown command was sent to the stats-httpd module, and it will +now shut down. +

STATHTTPD_RECEIVED_STATUS_COMMAND received command to return status

+A status command was sent to the stats-httpd module, and it will +respond with 'Stats Httpd is up.' and its PID. +

STATHTTPD_RECEIVED_UNKNOWN_COMMAND received unknown command: %1

+An unknown command has been sent to the stats-httpd module. The +stats-httpd module will respond with an error, and the command will +be ignored. +

STATHTTPD_SERVER_ERROR HTTP server error: %1

+An internal error occurred while handling an HTTP request. An HTTP 500 +response will be sent back, and the specific error is printed. This +is an error condition that likely points to a module that is not +responding correctly to statistics requests. +

STATHTTPD_SERVER_INIT_ERROR HTTP server initialization error: %1

+There was a problem initializing the HTTP server in the stats-httpd +module upon receiving its configuration data. The most likely cause +is a port binding problem or a bad configuration value. The specific +error is printed in the message. The new configuration is ignored, +and an error is sent back. +

STATHTTPD_SHUTDOWN shutting down

+The stats-httpd daemon is shutting down. +

STATHTTPD_STARTED listening on %1#%2

+The stats-httpd daemon will now start listening for requests on the +given address and port number. +

STATHTTPD_STARTING_CC_SESSION starting cc session

+Debug message indicating that the stats-httpd module is connecting to +the command and control bus. +

STATHTTPD_START_SERVER_INIT_ERROR HTTP server initialization error: %1

+There was a problem initializing the HTTP server in the stats-httpd +module upon startup. The most likely cause is that it was not able +to bind to the listening port. The specific error is printed, and the +module will shut down. +

STATHTTPD_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the stats-httpd +daemon. The daemon will now shut down. +

STATHTTPD_UNKNOWN_CONFIG_ITEM unknown configuration item: %1

+The stats-httpd daemon received a configuration update from the +configuration manager. However, one of the items in the +configuration is unknown. The new configuration is ignored, and an +error is sent back. A possible cause is that there was an upgrade +problem, and the stats-httpd version is out of sync with the rest of +the system. +

STATS_BAD_OPTION_VALUE bad command line argument: %1

+The stats module was called with a bad command-line argument and will +not start. +

STATS_CC_SESSION_ERROR error connecting to message bus: %1

+The stats module was unable to connect to the BIND 10 command and +control bus. A likely problem is that the message bus daemon +(b10-msgq) is not running. The stats module will now shut down. +

STATS_RECEIVED_NEW_CONFIG received new configuration: %1

+This debug message is printed when the stats module has received a +configuration update from the configuration manager. +

STATS_RECEIVED_REMOVE_COMMAND received command to remove %1

+A remove command for the given name was sent to the stats module, and +the given statistics value will now be removed. It will not appear in +statistics reports until it appears in a statistics update from a +module again. +

STATS_RECEIVED_RESET_COMMAND received command to reset all statistics

+The stats module received a command to clear all collected statistics. +The data is cleared until it receives an update from the modules again. +

STATS_RECEIVED_SHOW_ALL_COMMAND received command to show all statistics

+The stats module received a command to show all statistics that it has +collected. +

STATS_RECEIVED_SHOW_NAME_COMMAND received command to show statistics for %1

+The stats module received a command to show the statistics that it has +collected for the given item. +

STATS_RECEIVED_SHUTDOWN_COMMAND shutdown command received

+A shutdown command was sent to the stats module and it will now shut down. +

STATS_RECEIVED_STATUS_COMMAND received command to return status

+A status command was sent to the stats module. It will return a +response indicating that it is running normally. +

STATS_RECEIVED_UNKNOWN_COMMAND received unknown command: %1

+An unknown command has been sent to the stats module. The stats module +will respond with an error and the command will be ignored. +

STATS_SEND_REQUEST_BOSS requesting boss to send statistics

+This debug message is printed when a request is sent to the boss module +to send its data to the stats module. +

STATS_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the stats module. The +daemon will now shut down. +

STATS_UNKNOWN_COMMAND_IN_SPEC unknown command in specification file: %1

+The specification file for the stats module contains a command that +is unknown in the implementation. The most likely cause is an +installation problem, where the specification file stats.spec is +from a different version of BIND 10 than the stats module itself. +Please check your installation. +

XFRIN_AXFR_DATABASE_FAILURE AXFR transfer of zone %1 failed: %2

+The AXFR transfer for the given zone has failed due to a database problem. +The error is shown in the log message. +

XFRIN_AXFR_INTERNAL_FAILURE AXFR transfer of zone %1 failed: %2

+The AXFR transfer for the given zone has failed due to an internal +problem in the bind10 python wrapper library. +The error is shown in the log message. +

XFRIN_AXFR_TRANSFER_FAILURE AXFR transfer of zone %1 failed: %2

+The AXFR transfer for the given zone has failed due to a protocol error. +The error is shown in the log message. +

XFRIN_AXFR_TRANSFER_STARTED AXFR transfer of zone %1 started

+A connection to the master server has been made, the serial value in +the SOA record has been checked, and a zone transfer has been started. +

XFRIN_AXFR_TRANSFER_SUCCESS AXFR transfer of zone %1 succeeded

+The AXFR transfer of the given zone was successfully completed. +

XFRIN_BAD_MASTER_ADDR_FORMAT bad format for master address: %1

+The given master address is not a valid IP address. +

XFRIN_BAD_MASTER_PORT_FORMAT bad format for master port: %1

+The master port as read from the configuration is not a valid port number. +

XFRIN_BAD_TSIG_KEY_STRING bad TSIG key string: %1

+The TSIG key string as read from the configuration does not represent +a valid TSIG key. +

XFRIN_BAD_ZONE_CLASS Invalid zone class: %1

+The zone class as read from the configuration is not a valid DNS class. +

XFRIN_CC_SESSION_ERROR error reading from cc channel: %1

+There was a problem reading from the command and control channel. The +most likely cause is that the msgq daemon is not running. +

XFRIN_COMMAND_ERROR error while executing command '%1': %2

+There was an error while the given command was being processed. The +error is given in the log message. +

XFRIN_CONNECT_MASTER error connecting to master at %1: %2

+There was an error opening a connection to the master. The error is +shown in the log message. +

XFRIN_IMPORT_DNS error importing python DNS module: %1

+There was an error importing the python DNS module pydnspp. The most +likely cause is a PYTHONPATH problem. +

XFRIN_MSGQ_SEND_ERROR error while contacting %1 and %2

+There was a problem sending a message to the xfrout module or the +zone manager. This most likely means that the msgq daemon has quit or +was killed. +

XFRIN_MSGQ_SEND_ERROR_ZONE_MANAGER error while contacting %1

+There was a problem sending a message to the zone manager. This most +likely means that the msgq daemon has quit or was killed. +

XFRIN_RETRANSFER_UNKNOWN_ZONE got notification to retransfer unknown zone %1

+There was an internal command to retransfer the given zone, but the +zone is not known to the system. This may indicate that the configuration +for xfrin is incomplete, or there was a typographical error in the +zone name in the configuration. +

XFRIN_STARTING starting resolver with command line '%1'

+An informational message, this is output when the xfrin daemon starts up. +

XFRIN_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the xfrin daemon. The +daemon will now shut down. +

XFRIN_UNKNOWN_ERROR unknown error: %1

+An uncaught exception was raised while running the xfrin daemon. The +exception message is printed in the log message. +

XFROUT_AXFR_TRANSFER_DONE transfer of %1/%2 complete

+The transfer of the given zone has been completed successfully, or was +aborted due to a shutdown event. +

XFROUT_AXFR_TRANSFER_ERROR error transferring zone %1/%2: %3

+An uncaught exception was encountered while sending the response to +an AXFR query. The error message of the exception is included in the +log message, but this error most likely points to incomplete exception +handling in the code. +

XFROUT_AXFR_TRANSFER_FAILED transfer of %1/%2 failed, rcode: %3

+A transfer out for the given zone failed. An error response is sent +to the client. The given rcode is the rcode that is set in the error +response. This is either NOTAUTH (we are not authoritative for the +zone), SERVFAIL (our internal database is missing the SOA record for +the zone), or REFUSED (the limit of simultaneous outgoing AXFR +transfers, as specified by the configuration value +Xfrout/max_transfers_out, has been reached). +
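When the REFUSED case is caused by the transfer limit, the limit can be
raised through the configuration value named above; a hedged example (the
value 20 is arbitrary, chosen only for illustration):

> config set Xfrout/max_transfers_out 20
> config commit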

XFROUT_AXFR_TRANSFER_STARTED transfer of zone %1/%2 has started

+A transfer out of the given zone has started. +

XFROUT_BAD_TSIG_KEY_STRING bad TSIG key string: %1

+The TSIG key string as read from the configuration does not represent +a valid TSIG key. +

XFROUT_CC_SESSION_ERROR error reading from cc channel: %1

+There was a problem reading from the command and control channel. The +most likely cause is that the msgq daemon is not running. +

XFROUT_CC_SESSION_TIMEOUT_ERROR timeout waiting for cc response

+There was a problem reading a response from another module over the +command and control channel. The most likely cause is that the +configuration manager b10-cfgmgr is not running. +

XFROUT_FETCH_REQUEST_ERROR socket error while fetching a request from the auth daemon

+There was a socket error while contacting the b10-auth daemon to +fetch a transfer request. The auth daemon may have shut down. +

XFROUT_HANDLE_QUERY_ERROR error while handling query: %1

+There was a general error handling an xfrout query. The error is shown +in the message. In principle this error should not appear, and points +to an oversight in catching exceptions in the right place. However, to +ensure the daemon keeps running, this error is caught and reported. +

XFROUT_IMPORT error importing python module: %1

+There was an error importing a python module. One of the modules needed +by xfrout could not be found. This suggests that either some libraries +are missing on the system, or the PYTHONPATH variable is not correct. +The specific place where this library needs to be depends on your +system and your specific installation. +

XFROUT_NEW_CONFIG Update xfrout configuration

+New configuration settings have been sent from the configuration +manager. The xfrout daemon will now apply them. +

XFROUT_NEW_CONFIG_DONE Update xfrout configuration done

+The xfrout daemon is now done reading the new configuration settings +received from the configuration manager. +

XFROUT_NOTIFY_COMMAND received command to send notifies for %1/%2

+The xfrout daemon received a command on the command channel that +NOTIFY packets should be sent for the given zone. +

XFROUT_PARSE_QUERY_ERROR error parsing query: %1

+There was a parse error while reading an incoming query. The parse +error is shown in the log message. A remote client sent a packet we +do not understand or support. The xfrout request will be ignored. +In general, this should only occur for unexpected problems like +memory allocation failures, as the query should already have been +parsed by the b10-auth daemon, before it was passed here. +

XFROUT_PROCESS_REQUEST_ERROR error processing transfer request: %2

+There was an error processing a transfer request. The error is included +in the log message, but at this point no specific information other +than that could be given. This points to incomplete exception handling +in the code. +

XFROUT_QUERY_DROPPED request to transfer %1/%2 to [%3]:%4 dropped

+The xfrout process silently dropped a request to transfer the zone to the given +host, as required by the ACLs. The %1 and %2 represent the zone name and class, +and %3 and %4 the IP address and port of the peer requesting the transfer. +

XFROUT_QUERY_REJECTED request to transfer %1/%2 to [%3]:%4 rejected

+The xfrout process rejected (with a REFUSED rcode) a request to transfer the +zone to the given host, because of the ACLs. The %1 and %2 represent the zone +name and class, and %3 and %4 the IP address and port of the peer requesting +the transfer. +

XFROUT_RECEIVED_SHUTDOWN_COMMAND shutdown command received

+The xfrout daemon received a shutdown command from the command channel +and will now shut down. +

XFROUT_RECEIVE_FILE_DESCRIPTOR_ERROR error receiving the file descriptor for an XFR connection

+There was an error receiving the file descriptor for the transfer +request. Normally, the request is received by b10-auth, and passed on +to the xfrout daemon, so it can answer directly. However, there was a +problem receiving this file descriptor. The request will be ignored. +

XFROUT_REMOVE_OLD_UNIX_SOCKET_FILE_ERROR error removing unix socket file %1: %2

+The unix socket file that xfrout needs to contact the auth daemon +already exists and needs to be removed first, but there is a problem +removing it. It is likely that we do not have permission to remove +this file. The specific error is shown in the log message. The xfrout +daemon will shut down. +

XFROUT_REMOVE_UNIX_SOCKET_FILE_ERROR error clearing unix socket file %1: %2

+When shutting down, the xfrout daemon tried to clear the unix socket +file used for communication with the auth daemon. It failed to remove +the file. The reason for the failure is given in the error message. +

XFROUT_SOCKET_SELECT_ERROR error while calling select() on request socket: %1

+There was an error while calling select() on the socket that informs +the xfrout daemon that a new xfrout request has arrived. This should +be a result of rare local error such as memory allocation failure and +shouldn't happen under normal conditions. The error is included in the +log message. +

XFROUT_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the xfrout daemon. The +daemon will now shut down. +

XFROUT_STOPPING the xfrout daemon is shutting down

+The current transfer is aborted, as the xfrout daemon is shutting down. +

XFROUT_UNIX_SOCKET_FILE_IN_USE another xfrout process seems to be using the unix socket file %1

+While starting up, the xfrout daemon tried to clear the unix domain +socket needed for contacting the b10-auth daemon to pass requests +on, but the file is in use. The most likely cause is that another +xfrout daemon process is still running. This xfrout daemon (the one +printing this message) will not start. +

ZONEMGR_CCSESSION_ERROR command channel session error: %1

+An error was encountered on the command channel. The message indicates +the nature of the error. +

ZONEMGR_JITTER_TOO_BIG refresh_jitter is too big, setting to 0.5

+The value specified in the configuration for the refresh jitter is too large +so its value has been set to the maximum of 0.5. +
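A sketch of how the value might be brought back into the valid range (the
"Zonemgr" module prefix is an assumption; only the refresh_jitter name and
the 0.5 maximum come from the message above):

> config set Zonemgr/refresh_jitter 0.25
> config commit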

ZONEMGR_KEYBOARD_INTERRUPT exiting zonemgr process as result of keyboard interrupt

+An informational message output when the zone manager was being run at a +terminal and it was terminated via a keyboard interrupt signal. +

ZONEMGR_LOAD_ZONE loading zone %1 (class %2)

+This is a debug message indicating that the zone of the specified class +is being loaded. +

ZONEMGR_NO_MASTER_ADDRESS internal BIND 10 command did not contain address of master

+A command received by the zone manager from the Auth module did not +contain the address of the master server from which a NOTIFY message +was received. This may be due to an internal programming error; please +submit a bug report. +

ZONEMGR_NO_SOA zone %1 (class %2) does not have an SOA record

+When loading the named zone of the specified class the zone manager +discovered that the data did not contain an SOA record. The load has +been abandoned. +

ZONEMGR_NO_TIMER_THREAD trying to stop zone timer thread but it is not running

+An attempt was made to stop the timer thread (used to track when zones +should be refreshed) but it was not running. This may indicate an +internal program error. Please submit a bug report. +

ZONEMGR_NO_ZONE_CLASS internal BIND 10 command did not contain class of zone

+A command received by the zone manager from another BIND 10 module did +not contain the class of the zone on which the zone manager should act. +This may be due to an internal programming error; please submit a +bug report. +

ZONEMGR_NO_ZONE_NAME internal BIND 10 command did not contain name of zone

+A command received by the zone manager from another BIND 10 module did +not contain the name of the zone on which the zone manager should act. +This may be due to an internal programming error; please submit a +bug report. +

ZONEMGR_RECEIVE_NOTIFY received NOTIFY command for zone %1 (class %2)

+This is a debug message indicating that the zone manager has received a +NOTIFY command over the command channel. The command is sent by the Auth +process when it is acting as a slave server for the zone and causes the +zone manager to record the master server for the zone and start a timer; +when the timer expires, the master will be polled to see if it contains +new data. +

ZONEMGR_RECEIVE_SHUTDOWN received SHUTDOWN command

+This is a debug message indicating that the zone manager has received +a SHUTDOWN command over the command channel from the Boss process. +It will act on this command and shut down. +

ZONEMGR_RECEIVE_UNKNOWN received unknown command '%1'

+This is a warning message indicating that the zone manager has received +the stated command over the command channel. The command is not known +to the zone manager and although the command is ignored, its receipt +may indicate an internal error. Please submit a bug report. +

ZONEMGR_RECEIVE_XFRIN_FAILED received XFRIN FAILED command for zone %1 (class %2)

+This is a debug message indicating that the zone manager has received +an XFRIN FAILED command over the command channel. The command is sent +by the Xfrin process when a transfer of zone data into the system has +failed, and causes the zone manager to schedule another transfer attempt. +

ZONEMGR_RECEIVE_XFRIN_SUCCESS received XFRIN SUCCESS command for zone %1 (class %2)

+This is a debug message indicating that the zone manager has received +an XFRIN SUCCESS command over the command channel. The command is sent +by the Xfrin process when the transfer of zone data into the system has +succeeded, and causes the data to be loaded and served by BIND 10. +

ZONEMGR_REFRESH_ZONE refreshing zone %1 (class %2)

+The zone manager is refreshing the named zone of the specified class +with updated information. +

ZONEMGR_SELECT_ERROR error with select(): %1

+An attempt to wait for input from a socket failed. The failing operation +is a call to the operating system's select() function, which failed for +the given reason. +

ZONEMGR_SEND_FAIL failed to send command to %1, session has been closed

+The zone manager attempted to send a command to the named BIND 10 module, +but the send failed. The session between the modules has been closed. +

ZONEMGR_SESSION_ERROR unable to establish session to command channel daemon

+The zonemgr process could not be started because it was unable to +connect to the command channel daemon. The most usual cause of this +problem is that the daemon is not running. +

ZONEMGR_SESSION_TIMEOUT timeout on session to command channel daemon

+The zonemgr process could not be started because it timed out when +connecting to the command channel daemon. The most usual cause of this +problem is that the daemon is not running. +

ZONEMGR_SHUTDOWN zone manager has shut down

+A debug message, output when the zone manager has shut down completely. +

ZONEMGR_STARTING zone manager starting

+A debug message output when the zone manager starts up. +

ZONEMGR_TIMER_THREAD_RUNNING trying to start timer thread but one is already running

+This message is issued when an attempt is made to start the timer +thread (which keeps track of when zones need a refresh) but one is +already running. It indicates either an error in the program logic or +a problem with stopping a previous instance of the timer. Please submit +a bug report. +

ZONEMGR_UNKNOWN_ZONE_FAIL zone %1 (class %2) is not known to the zone manager

+An XFRIN operation has failed but the zone that was the subject of the +operation is not being managed by the zone manager. This may indicate +an error in the program (as the operation should not have been initiated +if this were the case). Please submit a bug report. +

ZONEMGR_UNKNOWN_ZONE_NOTIFIED notified zone %1 (class %2) is not known to the zone manager

+A NOTIFY was received but the zone that was the subject of the operation +is not being managed by the zone manager. This may indicate an error +in the program (as the operation should not have been initiated if this +were the case). Please submit a bug report. +

ZONEMGR_UNKNOWN_ZONE_SUCCESS zone %1 (class %2) is not known to the zone manager

+An XFRIN operation has succeeded but the zone received is not being +managed by the zone manager. This may indicate an error in the program +(as the operation should not have been initiated if this were the case). +Please submit a bug report.

diff --git a/doc/guide/bind10-messages.xml b/doc/guide/bind10-messages.xml index eaa8bb99a1..f5c44b33d8 100644 --- a/doc/guide/bind10-messages.xml +++ b/doc/guide/bind10-messages.xml @@ -5,6 +5,12 @@ %version; ]> + @@ -62,16 +68,16 @@ - -ASIODNS_FETCHCOMP upstream fetch to %1(%2) has now completed + +ASIODNS_FETCH_COMPLETED upstream fetch to %1(%2) has now completed -A debug message, this records the the upstream fetch (a query made by the +A debug message, this records that the upstream fetch (a query made by the resolver on behalf of its client) to the specified address has completed. - -ASIODNS_FETCHSTOP upstream fetch to %1(%2) has been stopped + +ASIODNS_FETCH_STOPPED upstream fetch to %1(%2) has been stopped An external component has requested the halting of an upstream fetch. This is an allowed operation, and the message should only appear if debug is @@ -79,27 +85,27 @@ enabled. - -ASIODNS_OPENSOCK error %1 opening %2 socket to %3(%4) + +ASIODNS_OPEN_SOCKET error %1 opening %2 socket to %3(%4) The asynchronous I/O code encountered an error when trying to open a socket of the specified protocol in order to send a message to the target address. -The the number of the system error that cause the problem is given in the +The number of the system error that caused the problem is given in the message. - -ASIODNS_RECVSOCK error %1 reading %2 data from %3(%4) + +ASIODNS_READ_DATA error %1 reading %2 data from %3(%4) -The asynchronous I/O code encountered an error when trying read data from -the specified address on the given protocol. The the number of the system -error that cause the problem is given in the message. +The asynchronous I/O code encountered an error when trying to read data from +the specified address on the given protocol. The number of the system +error that caused the problem is given in the message. - -ASIODNS_RECVTMO receive timeout while waiting for data from %1(%2) + +ASIODNS_READ_TIMEOUT receive timeout while waiting for data from %1(%2) An upstream fetch from the specified address timed out. This may happen for any number of reasons and is most probably a problem at the remote server @@ -108,29 +114,1436 @@ enabled. - -ASIODNS_SENDSOCK error %1 sending data using %2 to %3(%4) + +ASIODNS_SEND_DATA error %1 sending data using %2 to %3(%4) -The asynchronous I/O code encountered an error when trying send data to -the specified address on the given protocol. The the number of the system -error that cause the problem is given in the message. +The asynchronous I/O code encountered an error when trying to send data to +the specified address on the given protocol. The number of the system +error that caused the problem is given in the message. - -ASIODNS_UNKORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3) + +ASIODNS_UNKNOWN_ORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3) -This message should not appear and indicates an internal error if it does. -Please enter a bug report. +An internal consistency check on the origin of a message from the +asynchronous I/O module failed. This may indicate an internal error; +please submit a bug report. - -ASIODNS_UNKRESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3) + +ASIODNS_UNKNOWN_RESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3) -The termination method of the resolver's upstream fetch class was called with -an unknown result code (which is given in the message). This message should -not appear and may indicate an internal error. 
Please enter a bug report. +An internal error indicating that the termination method of the resolver's +upstream fetch class was called with an unknown result code (which is +given in the message). Please submit a bug report. + + + + +AUTH_AXFR_ERROR error handling AXFR request: %1 + +This is a debug message produced by the authoritative server when it +has encountered an error processing an AXFR request. The message gives +the reason for the error, and the server will return a SERVFAIL code to +the sender. + + + + +AUTH_AXFR_UDP AXFR query received over UDP + +This is a debug message output when the authoritative server has received +an AXFR query over UDP. Use of UDP for AXFRs is not permitted by the +protocol, so the server will return a FORMERR error to the sender. + + + + +AUTH_COMMAND_FAILED execution of command channel instruction '%1' failed: %2 + +Execution of the specified command by the authoritative server failed. The +message contains the reason for the failure. + + + + +AUTH_CONFIG_CHANNEL_CREATED configuration session channel created + +This is a debug message indicating that authoritative server has created +the channel to the configuration manager. It is issued during server +startup is an indication that the initialization is proceeding normally. + + + + +AUTH_CONFIG_CHANNEL_ESTABLISHED configuration session channel established + +This is a debug message indicating that authoritative server +has established communication the configuration manager over the +previously-created channel. It is issued during server startup is an +indication that the initialization is proceeding normally. + + + + +AUTH_CONFIG_CHANNEL_STARTED configuration session channel started + +This is a debug message, issued when the authoritative server has +posted a request to be notified when new configuration information is +available. It is issued during server startup is an indication that +the initialization is proceeding normally. + + + + +AUTH_CONFIG_LOAD_FAIL load of configuration failed: %1 + +An attempt to configure the server with information from the configuration +database during the startup sequence has failed. (The reason for +the failure is given in the message.) The server will continue its +initialization although it may not be configured in the desired way. + + + + +AUTH_CONFIG_UPDATE_FAIL update of configuration failed: %1 + +At attempt to update the configuration the server with information +from the configuration database has failed, the reason being given in +the message. + + + + +AUTH_DATA_SOURCE data source database file: %1 + +This is a debug message produced by the authoritative server when it accesses a +datebase data source, listing the file that is being accessed. + + + + +AUTH_DNS_SERVICES_CREATED DNS services created + +This is a debug message indicating that the component that will handling +incoming queries for the authoritative server (DNSServices) has been +successfully created. It is issued during server startup is an indication +that the initialization is proceeding normally. + + + + +AUTH_HEADER_PARSE_FAIL unable to parse header in received DNS packet: %1 + +This is a debug message, generated by the authoritative server when an +attempt to parse the header of a received DNS packet has failed. (The +reason for the failure is given in the message.) The server will drop the +packet. + + + + +AUTH_LOAD_TSIG loading TSIG keys + +This is a debug message indicating that the authoritative server +has requested the keyring holding TSIG keys from the configuration +database. 
It is issued during server startup is an indication that the +initialization is proceeding normally. + + + + +AUTH_LOAD_ZONE loaded zone %1/%2 + +This debug message is issued during the processing of the 'loadzone' command +when the authoritative server has successfully loaded the named zone of the +named class. + + + + +AUTH_MEM_DATASRC_DISABLED memory data source is disabled for class %1 + +This is a debug message reporting that the authoritative server has +discovered that the memory data source is disabled for the given class. + + + + +AUTH_MEM_DATASRC_ENABLED memory data source is enabled for class %1 + +This is a debug message reporting that the authoritative server has +discovered that the memory data source is enabled for the given class. + + + + +AUTH_NOTIFY_QUESTIONS invalid number of questions (%1) in incoming NOTIFY + +This debug message is logged by the authoritative server when it receives +a NOTIFY packet that contains zero or more than one question. (A valid +NOTIFY packet contains one question.) The server will return a FORMERR +error to the sender. + + + + +AUTH_NOTIFY_RRTYPE invalid question RR type (%1) in incoming NOTIFY + +This debug message is logged by the authoritative server when it receives +a NOTIFY packet that an RR type of something other than SOA in the +question section. (The RR type received is included in the message.) The +server will return a FORMERR error to the sender. + + + + +AUTH_NO_STATS_SESSION session interface for statistics is not available + +The authoritative server had no session with the statistics module at the +time it attempted to send it data: the attempt has been abandoned. This +could be an error in configuration. + + + + +AUTH_NO_XFRIN received NOTIFY but XFRIN session is not running + +This is a debug message produced by the authoritative server when it receives +a NOTIFY packet but the XFRIN process is not running. The packet will be +dropped and nothing returned to the sender. + + + + +AUTH_PACKET_PARSE_ERROR unable to parse received DNS packet: %1 + +This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to something other +than a protocol error. The reason for the failure is given in the message; +the server will return a SERVFAIL error code to the sender. + + + + +AUTH_PACKET_PROTOCOL_ERROR DNS packet protocol error: %1. Returning %2 + +This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to a protocol error. +The reason for the failure is given in the message, as is the error code +that will be returned to the sender. + + + + +AUTH_PACKET_RECEIVED message received:\n%1 + +This is a debug message output by the authoritative server when it +receives a valid DNS packet. + +Note: This message includes the packet received, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. + + + + +AUTH_PROCESS_FAIL message processing failure: %1 + +This message is generated by the authoritative server when it has +encountered an internal error whilst processing a received packet: +the cause of the error is included in the message. + +The server will return a SERVFAIL error code to the sender of the packet. +This message indicates a potential error in the server. Please open a +bug ticket for this issue. 
+ + + + +AUTH_RECEIVED_COMMAND command '%1' received + +This is a debug message issued when the authoritative server has received +a command on the command channel. + + + + +AUTH_RECEIVED_SENDSTATS command 'sendstats' received + +This is a debug message issued when the authoritative server has received +a command from the statistics module to send it data. The 'sendstats' +command is handled differently to other commands, which is why the debug +message associated with it has its own code. + + + + +AUTH_RESPONSE_RECEIVED received response message, ignoring + +This is a debug message, this is output if the authoritative server +receives a DNS packet with the QR bit set, i.e. a DNS response. The +server ignores the packet as it only responds to question packets. + + + + +AUTH_SEND_ERROR_RESPONSE sending an error response (%1 bytes):\n%2 + +This is a debug message recording that the authoritative server is sending +an error response to the originator of the query. A previous message will +have recorded details of the failure. + +Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. + + + + +AUTH_SEND_NORMAL_RESPONSE sending an error response (%1 bytes):\n%2 + +This is a debug message recording that the authoritative server is sending +a response to the originator of a query. + +Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. + + + + +AUTH_SERVER_CREATED server created + +An informational message indicating that the authoritative server process has +been created and is initializing. The AUTH_SERVER_STARTED message will be +output when initialization has successfully completed and the server starts +accepting queries. + + + + +AUTH_SERVER_FAILED server failed: %1 + +The authoritative server has encountered a fatal error and is terminating. The +reason for the failure is included in the message. + + + + +AUTH_SERVER_STARTED server started + +Initialization of the authoritative server has completed successfully +and it is entering the main loop, waiting for queries to arrive. + + + + +AUTH_SQLITE3 nothing to do for loading sqlite3 + +This is a debug message indicating that the authoritative server has +found that the data source it is loading is an SQLite3 data source, +so no further validation is needed. + + + + +AUTH_STATS_CHANNEL_CREATED STATS session channel created + +This is a debug message indicating that the authoritative server has +created a channel to the statistics process. It is issued during server +startup is an indication that the initialization is proceeding normally. + + + + +AUTH_STATS_CHANNEL_ESTABLISHED STATS session channel established + +This is a debug message indicating that the authoritative server +has established communication over the previously created statistics +channel. It is issued during server startup is an indication that the +initialization is proceeding normally. + + + + +AUTH_STATS_COMMS communication error in sending statistics data: %1 + +An error was encountered when the authoritative server tried to send data +to the statistics daemon. 
+describing the reason for the failure.
+
+
+
+
+AUTH_STATS_TIMEOUT timeout while sending statistics data: %1
+
+The authoritative server sent data to the statistics daemon but received
+no acknowledgement within the specified time. The message includes
+additional information describing the reason for the failure.
+
+
+
+
+AUTH_STATS_TIMER_DISABLED statistics timer has been disabled
+
+This is a debug message indicating that the statistics timer has been
+disabled in the authoritative server and no statistics information is
+being produced.
+
+
+
+
+AUTH_STATS_TIMER_SET statistics timer set to %1 second(s)
+
+This is a debug message indicating that the statistics timer has been
+enabled and that the authoritative server will produce statistics data
+at the specified interval.
+
+
+
+
+AUTH_UNSUPPORTED_OPCODE unsupported opcode: %1
+
+This is a debug message, produced when a received DNS packet being
+processed by the authoritative server has been found to contain an
+unsupported opcode. (The opcode is included in the message.) The server
+will return an error code of NOTIMPL to the sender.
+
+
+
+
+AUTH_XFRIN_CHANNEL_CREATED XFRIN session channel created
+
+This is a debug message indicating that the authoritative server has
+created a channel to the XFRIN (Transfer-in) process. It is issued
+during server startup as an indication that the initialization is
+proceeding normally.
+
+
+
+
+AUTH_XFRIN_CHANNEL_ESTABLISHED XFRIN session channel established
+
+This is a debug message indicating that the authoritative server has
+established communication over the previously-created channel to the
+XFRIN (Transfer-in) process. It is issued during server startup as an
+indication that the initialization is proceeding normally.
+
+
+
+
+AUTH_ZONEMGR_COMMS error communicating with zone manager: %1
+
+This is a debug message output during the processing of a NOTIFY request.
+An error (listed in the message) has been encountered whilst communicating
+with the zone manager. The NOTIFY request will not be honored.
+
+
+
+
+AUTH_ZONEMGR_ERROR received error response from zone manager: %1
+
+This is a debug message output during the processing of a NOTIFY
+request. The zone manager component has been informed of the request,
+but has returned an error response (which is included in the message).
+The NOTIFY request will not be honored.
+
+
+
+
+BIND10_CHECK_MSGQ_ALREADY_RUNNING checking if msgq is already running
+
+The boss process is starting up and will now check if the message bus
+daemon is already running. If so, it will not be able to start, as it
+needs a dedicated message bus.
+
+
+
+
+BIND10_CONFIGURATION_START_AUTH start authoritative server: %1
+
+This message shows whether or not the authoritative server should be
+started according to the configuration.
+
+
+
+
+BIND10_CONFIGURATION_START_RESOLVER start resolver: %1
+
+This message shows whether or not the resolver should be
+started according to the configuration.
+
+
+
+
+BIND10_INVALID_USER invalid user: %1
+
+The boss process was started with the -u option, to drop root privileges
+and continue running as the specified user, but the user is unknown.
+
+
+
+
+BIND10_KILLING_ALL_PROCESSES killing all started processes
+
+The boss module was not able to start every process it needed to start
+during startup, and will now kill the processes that did get started.
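+
+The check behind BIND10_INVALID_USER can be sketched in a few lines of
+Python. This is not the boss process code; it only shows the usual way
+a -u argument (name or numeric uid) is validated with the standard pwd
+module before any attempt is made to drop root privileges. The helper
+name is hypothetical.
+
+    # Illustrative sketch only, not BIND 10 code.
+    import pwd
+
+    def resolve_user(user):
+        """Return (uid, gid) for a user name or numeric uid, None if unknown."""
+        try:
+            if user.isdigit():
+                entry = pwd.getpwuid(int(user))
+            else:
+                entry = pwd.getpwnam(user)
+        except KeyError:
+            return None           # caller would log BIND10_INVALID_USER
+        return entry.pw_uid, entry.pw_gid
+
+    print(resolve_user("root"))          # e.g. (0, 0)
+    print(resolve_user("no-such-user"))  # None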
+ + + + +BIND10_KILL_PROCESS killing process %1 + +The boss module is sending a kill signal to process with the given name, +as part of the process of killing all started processes during a failed +startup, as described for BIND10_KILLING_ALL_PROCESSES + + + + +BIND10_MSGQ_ALREADY_RUNNING msgq daemon already running, cannot start + +There already appears to be a message bus daemon running. Either an +old process was not shut down correctly, and needs to be killed, or +another instance of BIND10, with the same msgq domain socket, is +running, which needs to be stopped. + + + + +BIND10_MSGQ_DAEMON_ENDED b10-msgq process died, shutting down + +The message bus daemon has died. This is a fatal error, since it may +leave the system in an inconsistent state. BIND10 will now shut down. + + + + +BIND10_MSGQ_DISAPPEARED msgq channel disappeared + +While listening on the message bus channel for messages, it suddenly +disappeared. The msgq daemon may have died. This might lead to an +inconsistent state of the system, and BIND 10 will now shut down. + + + + +BIND10_PROCESS_ENDED_NO_EXIT_STATUS process %1 (PID %2) died: exit status not available + +The given process ended unexpectedly, but no exit status is +available. See BIND10_PROCESS_ENDED_WITH_EXIT_STATUS for a longer +description. + + + + +BIND10_PROCESS_ENDED_WITH_EXIT_STATUS process %1 (PID %2) terminated, exit status = %3 + +The given process ended unexpectedly with the given exit status. +Depending on which module it was, it may simply be restarted, or it +may be a problem that will cause the boss module to shut down too. +The latter happens if it was the message bus daemon, which, if it has +died suddenly, may leave the system in an inconsistent state. BIND10 +will also shut down now if it has been run with --brittle. + + + + +BIND10_READING_BOSS_CONFIGURATION reading boss configuration + +The boss process is starting up, and will now process the initial +configuration, as received from the configuration manager. + + + + +BIND10_RECEIVED_COMMAND received command: %1 + +The boss module received a command and shall now process it. The command +is printed. + + + + +BIND10_RECEIVED_NEW_CONFIGURATION received new configuration: %1 + +The boss module received a configuration update and is going to apply +it now. The new configuration is printed. + + + + +BIND10_RECEIVED_SIGNAL received signal %1 + +The boss module received the given signal. + + + + +BIND10_RESURRECTED_PROCESS resurrected %1 (PID %2) + +The given process has been restarted successfully, and is now running +with the given process id. + + + + +BIND10_RESURRECTING_PROCESS resurrecting dead %1 process... + +The given process has ended unexpectedly, and is now restarted. + + + + +BIND10_SELECT_ERROR error in select() call: %1 + +There was a fatal error in the call to select(), used to see if a child +process has ended or if there is a message on the message bus. This +should not happen under normal circumstances and is considered fatal, +so BIND 10 will now shut down. The specific error is printed. + + + + +BIND10_SEND_SIGKILL sending SIGKILL to %1 (PID %2) + +The boss module is sending a SIGKILL signal to the given process. + + + + +BIND10_SEND_SIGTERM sending SIGTERM to %1 (PID %2) + +The boss module is sending a SIGTERM signal to the given process. + + + + +BIND10_SHUTDOWN stopping the server + +The boss process received a command or signal telling it to shut down. +It will send a shutdown command to each process. 
The processes that do +not shut down will then receive a SIGTERM signal. If that doesn't work, +it shall send SIGKILL signals to the processes still alive. + + + + +BIND10_SHUTDOWN_COMPLETE all processes ended, shutdown complete + +All child processes have been stopped, and the boss process will now +stop itself. + + + + +BIND10_SOCKCREATOR_BAD_CAUSE unknown error cause from socket creator: %1 + +The socket creator reported an error when creating a socket. But the function +which failed is unknown (not one of 'S' for socket or 'B' for bind). + + + + +BIND10_SOCKCREATOR_BAD_RESPONSE unknown response for socket request: %1 + +The boss requested a socket from the creator, but the answer is unknown. This +looks like a programmer error. + + + + +BIND10_SOCKCREATOR_CRASHED the socket creator crashed + +The socket creator terminated unexpectedly. It is not possible to restart it +(because the boss already gave up root privileges), so the system is going +to terminate. + + + + +BIND10_SOCKCREATOR_EOF eof while expecting data from socket creator + +There should be more data from the socket creator, but it closed the socket. +It probably crashed. + + + + +BIND10_SOCKCREATOR_INIT initializing socket creator parser + +The boss module initializes routines for parsing the socket creator +protocol. + + + + +BIND10_SOCKCREATOR_KILL killing the socket creator + +The socket creator is being terminated the aggressive way, by sending it +sigkill. This should not happen usually. + + + + +BIND10_SOCKCREATOR_TERMINATE terminating socket creator + +The boss module sends a request to terminate to the socket creator. + + + + +BIND10_SOCKCREATOR_TRANSPORT_ERROR transport error when talking to the socket creator: %1 + +Either sending or receiving data from the socket creator failed with the given +error. The creator probably crashed or some serious OS-level problem happened, +as the communication happens only on local host. + + + + +BIND10_SOCKET_CREATED successfully created socket %1 + +The socket creator successfully created and sent a requested socket, it has +the given file number. + + + + +BIND10_SOCKET_ERROR error on %1 call in the creator: %2/%3 + +The socket creator failed to create the requested socket. It failed on the +indicated OS API function with given error. + + + + +BIND10_SOCKET_GET requesting socket [%1]:%2 of type %3 from the creator + +The boss forwards a request for a socket to the socket creator. + + + + +BIND10_STARTED_PROCESS started %1 + +The given process has successfully been started. + + + + +BIND10_STARTED_PROCESS_PID started %1 (PID %2) + +The given process has successfully been started, and has the given PID. + + + + +BIND10_STARTING starting BIND10: %1 + +Informational message on startup that shows the full version. + + + + +BIND10_STARTING_PROCESS starting process %1 + +The boss module is starting the given process. + + + + +BIND10_STARTING_PROCESS_PORT starting process %1 (to listen on port %2) + +The boss module is starting the given process, which will listen on the +given port number. + + + + +BIND10_STARTING_PROCESS_PORT_ADDRESS starting process %1 (to listen on %2#%3) + +The boss module is starting the given process, which will listen on the +given address and port number (written as <address>#<port>). + + + + +BIND10_STARTUP_COMPLETE BIND 10 started + +All modules have been successfully started, and BIND 10 is now running. + + + + +BIND10_STARTUP_ERROR error during startup: %1 + +There was a fatal error when BIND10 was trying to start. 
The error is +shown, and BIND10 will now shut down. + + + + +BIND10_START_AS_NON_ROOT starting %1 as a user, not root. This might fail. + +The given module is being started or restarted without root privileges. +If the module needs these privileges, it may have problems starting. +Note that this issue should be resolved by the pending 'socket-creator' +process; once that has been implemented, modules should not need root +privileges anymore. See tickets #800 and #801 for more information. + + + + +BIND10_STOP_PROCESS asking %1 to shut down + +The boss module is sending a shutdown command to the given module over +the message channel. + + + + +BIND10_UNKNOWN_CHILD_PROCESS_ENDED unknown child pid %1 exited + +An unknown child process has exited. The PID is printed, but no further +action will be taken by the boss process. + + + + +CACHE_ENTRY_MISSING_RRSET missing RRset to generate message for %1 + +The cache tried to generate the complete answer message. It knows the structure +of the message, but some of the RRsets to be put there are not in cache (they +probably expired already). Therefore it pretends the message was not found. + + + + +CACHE_LOCALZONE_FOUND found entry with key %1 in local zone data + +Debug message, noting that the requested data was successfully found in the +local zone data of the cache. + + + + +CACHE_LOCALZONE_UNKNOWN entry with key %1 not found in local zone data + +Debug message. The requested data was not found in the local zone data. + + + + +CACHE_LOCALZONE_UPDATE updating local zone element at key %1 + +Debug message issued when there's update to the local zone section of cache. + + + + +CACHE_MESSAGES_DEINIT deinitialized message cache + +Debug message. It is issued when the server deinitializes the message cache. + + + + +CACHE_MESSAGES_EXPIRED found an expired message entry for %1 in the message cache + +Debug message. The requested data was found in the message cache, but it +already expired. Therefore the cache removes the entry and pretends it found +nothing. + + + + +CACHE_MESSAGES_FOUND found a message entry for %1 in the message cache + +Debug message. We found the whole message in the cache, so it can be returned +to user without any other lookups. + + + + +CACHE_MESSAGES_INIT initialized message cache for %1 messages of class %2 + +Debug message issued when a new message cache is issued. It lists the class +of messages it can hold and the maximum size of the cache. + + + + +CACHE_MESSAGES_REMOVE removing old instance of %1/%2/%3 first + +Debug message. This may follow CACHE_MESSAGES_UPDATE and indicates that, while +updating, the old instance is being removed prior of inserting a new one. + + + + +CACHE_MESSAGES_UNCACHEABLE not inserting uncacheable message %1/%2/%3 + +Debug message, noting that the given message can not be cached. This is because +there's no SOA record in the message. See RFC 2308 section 5 for more +information. + + + + +CACHE_MESSAGES_UNKNOWN no entry for %1 found in the message cache + +Debug message. The message cache didn't find any entry for the given key. + + + + +CACHE_MESSAGES_UPDATE updating message entry %1/%2/%3 + +Debug message issued when the message cache is being updated with a new +message. Either the old instance is removed or, if none is found, new one +is created. + + + + +CACHE_RESOLVER_DEEPEST looking up deepest NS for %1/%2 + +Debug message. The resolver cache is looking up the deepest known nameserver, +so the resolution doesn't have to start from the root. 
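+
+The lookup described by CACHE_RESOLVER_DEEPEST can be illustrated with
+a small Python sketch: the query name is walked from its most specific
+form towards the root and the first delegation known to the cache is
+returned. The dictionary below is a stand-in for the real cache, not
+the resolver's actual data structure.
+
+    # Illustrative sketch only, not BIND 10 code.
+    def deepest_known_ns(qname, known_delegations):
+        labels = qname.rstrip(".").split(".")
+        for i in range(len(labels)):
+            candidate = ".".join(labels[i:]) + "."
+            if candidate in known_delegations:
+                return candidate, known_delegations[candidate]
+        return ".", known_delegations.get(".", [])
+
+    cache = {"example.com.": ["ns1.example.com."],
+             "com.": ["a.gtld-servers.net."]}
+    print(deepest_known_ns("www.sub.example.com.", cache))
+    # ('example.com.', ['ns1.example.com.'])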
+ + + + +CACHE_RESOLVER_INIT initializing resolver cache for class %1 + +Debug message. The resolver cache is being created for this given class. + + + + +CACHE_RESOLVER_INIT_INFO initializing resolver cache for class %1 + +Debug message, the resolver cache is being created for this given class. The +difference from CACHE_RESOLVER_INIT is only in different format of passed +information, otherwise it does the same. + + + + +CACHE_RESOLVER_LOCAL_MSG message for %1/%2 found in local zone data + +Debug message. The resolver cache found a complete message for the user query +in the zone data. + + + + +CACHE_RESOLVER_LOCAL_RRSET RRset for %1/%2 found in local zone data + +Debug message. The resolver cache found a requested RRset in the local zone +data. + + + + +CACHE_RESOLVER_LOOKUP_MSG looking up message in resolver cache for %1/%2 + +Debug message. The resolver cache is trying to find a message to answer the +user query. + + + + +CACHE_RESOLVER_LOOKUP_RRSET looking up RRset in resolver cache for %1/%2 + +Debug message. The resolver cache is trying to find an RRset (which usually +originates as internally from resolver). + + + + +CACHE_RESOLVER_NO_QUESTION answer message for %1/%2 has empty question section + +The cache tried to fill in found data into the response message. But it +discovered the message contains no question section, which is invalid. +This is likely a programmer error, please submit a bug report. + + + + +CACHE_RESOLVER_UNKNOWN_CLASS_MSG no cache for class %1 + +Debug message. While trying to lookup a message in the resolver cache, it was +discovered there's no cache for this class at all. Therefore no message is +found. + + + + +CACHE_RESOLVER_UNKNOWN_CLASS_RRSET no cache for class %1 + +Debug message. While trying to lookup an RRset in the resolver cache, it was +discovered there's no cache for this class at all. Therefore no data is found. + + + + +CACHE_RESOLVER_UPDATE_MSG updating message for %1/%2/%3 + +Debug message. The resolver is updating a message in the cache. + + + + +CACHE_RESOLVER_UPDATE_RRSET updating RRset for %1/%2/%3 + +Debug message. The resolver is updating an RRset in the cache. + + + + +CACHE_RESOLVER_UPDATE_UNKNOWN_CLASS_MSG no cache for class %1 + +Debug message. While trying to insert a message into the cache, it was +discovered that there's no cache for the class of message. Therefore +the message will not be cached. + + + + +CACHE_RESOLVER_UPDATE_UNKNOWN_CLASS_RRSET no cache for class %1 + +Debug message. While trying to insert an RRset into the cache, it was +discovered that there's no cache for the class of the RRset. Therefore +the message will not be cached. + + + + +CACHE_RRSET_EXPIRED found expired RRset %1/%2/%3 + +Debug message. The requested data was found in the RRset cache. However, it is +expired, so the cache removed it and is going to pretend nothing was found. + + + + +CACHE_RRSET_INIT initializing RRset cache for %1 RRsets of class %2 + +Debug message. The RRset cache to hold at most this many RRsets for the given +class is being created. + + + + +CACHE_RRSET_LOOKUP looking up %1/%2/%3 in RRset cache + +Debug message. The resolver is trying to look up data in the RRset cache. + + + + +CACHE_RRSET_NOT_FOUND no RRset found for %1/%2/%3 + +Debug message which can follow CACHE_RRSET_LOOKUP. This means the data is not +in the cache. + + + + +CACHE_RRSET_REMOVE_OLD removing old RRset for %1/%2/%3 to make space for new one + +Debug message which can follow CACHE_RRSET_UPDATE. 
During the update, the cache +removed an old instance of the RRset to replace it with the new one. + + + + +CACHE_RRSET_UNTRUSTED not replacing old RRset for %1/%2/%3, it has higher trust level + +Debug message which can follow CACHE_RRSET_UPDATE. The cache already holds the +same RRset, but from more trusted source, so the old one is kept and new one +ignored. + + + + +CACHE_RRSET_UPDATE updating RRset %1/%2/%3 in the cache + +Debug message. The RRset is updating its data with this given RRset. + + + + +CC_ASYNC_READ_FAILED asynchronous read failed + +This marks a low level error, we tried to read data from the message queue +daemon asynchronously, but the ASIO library returned an error. + + + + +CC_CONN_ERROR error connecting to message queue (%1) + +It is impossible to reach the message queue daemon for the reason given. It +is unlikely there'll be reason for whatever program this currently is to +continue running, as the communication with the rest of BIND 10 is vital +for the components. + + + + +CC_DISCONNECT disconnecting from message queue daemon + +The library is disconnecting from the message queue daemon. This debug message +indicates that the program is trying to shut down gracefully. + + + + +CC_ESTABLISH trying to establish connection with message queue daemon at %1 + +This debug message indicates that the command channel library is about to +connect to the message queue daemon, which should be listening on the UNIX-domain +socket listed in the output. + + + + +CC_ESTABLISHED successfully connected to message queue daemon + +This debug message indicates that the connection was successfully made, this +should follow CC_ESTABLISH. + + + + +CC_GROUP_RECEIVE trying to receive a message + +Debug message, noting that a message is expected to come over the command +channel. + + + + +CC_GROUP_RECEIVED message arrived ('%1', '%2') + +Debug message, noting that we successfully received a message (its envelope and +payload listed). This follows CC_GROUP_RECEIVE, but might happen some time +later, depending if we waited for it or just polled. + + + + +CC_GROUP_SEND sending message '%1' to group '%2' + +Debug message, we're about to send a message over the command channel. + + + + +CC_INVALID_LENGTHS invalid length parameters (%1, %2) + +This happens when garbage comes over the command channel or some kind of +confusion happens in the program. The data received from the socket make no +sense if we interpret it as lengths of message. The first one is total length +of the message; the second is the length of the header. The header +and its length (2 bytes) is counted in the total length. + + + + +CC_LENGTH_NOT_READY length not ready + +There should be data representing the length of message on the socket, but it +is not there. + + + + +CC_NO_MESSAGE no message ready to be received yet + +The program polled for incoming messages, but there was no message waiting. +This is a debug message which may happen only after CC_GROUP_RECEIVE. + + + + +CC_NO_MSGQ unable to connect to message queue (%1) + +It isn't possible to connect to the message queue daemon, for reason listed. +It is unlikely any program will be able continue without the communication. + + + + +CC_READ_ERROR error reading data from command channel (%1) + +A low level error happened when the library tried to read data from the +command channel socket. The reason is listed. + + + + +CC_READ_EXCEPTION error reading data from command channel (%1) + +We received an exception while trying to read data from the command +channel socket. 
The reason is listed. + + + + +CC_REPLY replying to message from '%1' with '%2' + +Debug message, noting we're sending a response to the original message +with the given envelope. + + + + +CC_SET_TIMEOUT setting timeout to %1ms + +Debug message. A timeout for which the program is willing to wait for a reply +is being set. + + + + +CC_START_READ starting asynchronous read + +Debug message. From now on, when a message (or command) comes, it'll wake the +program and the library will automatically pass it over to correct place. + + + + +CC_SUBSCRIBE subscribing to communication group %1 + +Debug message. The program wants to receive messages addressed to this group. + + + + +CC_TIMEOUT timeout reading data from command channel + +The program waited too long for data from the command channel (usually when it +sent a query to different program and it didn't answer for whatever reason). + + + + +CC_UNSUBSCRIBE unsubscribing from communication group %1 + +Debug message. The program no longer wants to receive messages addressed to +this group. + + + + +CC_WRITE_ERROR error writing data to command channel (%1) + +A low level error happened when the library tried to write data to the command +channel socket. + + + + +CC_ZERO_LENGTH invalid message length (0) + +The library received a message length being zero, which makes no sense, since +all messages must contain at least the envelope. + + + + +CFGMGR_AUTOMATIC_CONFIG_DATABASE_UPDATE Updating configuration database from version %1 to %2 + +An older version of the configuration database has been found, from which +there was an automatic upgrade path to the current version. These changes +are now applied, and no action from the administrator is necessary. + + + + +CFGMGR_BAD_UPDATE_RESPONSE_FROM_MODULE Unable to parse response from module %1: %2 + +The configuration manager sent a configuration update to a module, but +the module responded with an answer that could not be parsed. The answer +message appears to be invalid JSON data, or not decodable to a string. +This is likely to be a problem in the module in question. The update is +assumed to have failed, and will not be stored. + + + + +CFGMGR_CC_SESSION_ERROR Error connecting to command channel: %1 + +The configuration manager daemon was unable to connect to the messaging +system. The most likely cause is that msgq is not running. + + + + +CFGMGR_DATA_READ_ERROR error reading configuration database from disk: %1 + +There was a problem reading the persistent configuration data as stored +on disk. The file may be corrupted, or it is of a version from where +there is no automatic upgrade path. The file needs to be repaired or +removed. The configuration manager daemon will now shut down. + + + + +CFGMGR_IOERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1 + +There was an IO error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the directory where +the file is stored does not exist, or is not writable. The updated +configuration is not stored. + + + + +CFGMGR_OSERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1 + +There was an OS error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the system does not have +write access to the configuration database file. 
The updated +configuration is not stored. + + + + +CFGMGR_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the cfgmgr daemon. The +daemon will now shut down. + + + + +CMDCTL_BAD_CONFIG_DATA error in config data: %1 + +There was an error reading the updated configuration data. The specific +error is printed. + + + + +CMDCTL_BAD_PASSWORD bad password for user: %1 + +A login attempt was made to b10-cmdctl, but the password was wrong. +Users can be managed with the tool b10-cmdctl-usermgr. + + + + +CMDCTL_CC_SESSION_ERROR error reading from cc channel: %1 + +There was a problem reading from the command and control channel. The +most likely cause is that the message bus daemon is not running. + + + + +CMDCTL_CC_SESSION_TIMEOUT timeout on cc channel + +A timeout occurred when waiting for essential data from the cc session. +This usually occurs when b10-cfgmgr is not running or not responding. +Since we are waiting for essential information, this is a fatal error, +and the cmdctl daemon will now shut down. + + + + +CMDCTL_COMMAND_ERROR error in command %1 to module %2: %3 + +An error was encountered sending the given command to the given module. +Either there was a communication problem with the module, or the module +was not able to process the command, and sent back an error. The +specific error is printed in the message. + + + + +CMDCTL_COMMAND_SENT command '%1' to module '%2' was sent + +This debug message indicates that the given command has been sent to +the given module. + + + + +CMDCTL_NO_SUCH_USER username not found in user database: %1 + +A login attempt was made to b10-cmdctl, but the username was not known. +Users can be added with the tool b10-cmdctl-usermgr. + + + + +CMDCTL_NO_USER_ENTRIES_READ failed to read user information, all users will be denied + +The b10-cmdctl daemon was unable to find any user data in the user +database file. Either it was unable to read the file (in which case +this message follows a message CMDCTL_USER_DATABASE_READ_ERROR +containing a specific error), or the file was empty. Users can be added +with the tool b10-cmdctl-usermgr. + + + + +CMDCTL_SEND_COMMAND sending command %1 to module %2 + +This debug message indicates that the given command is being sent to +the given module. + + + + +CMDCTL_SSL_SETUP_FAILURE_USER_DENIED failed to create an SSL connection (user denied): %1 + +The user was denied because the SSL connection could not successfully +be set up. The specific error is given in the log message. Possible +causes may be that the ssl request itself was bad, or the local key or +certificate file could not be read. + + + + +CMDCTL_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the cmdctl daemon. The +daemon will now shut down. + + + + +CMDCTL_UNCAUGHT_EXCEPTION uncaught exception: %1 + +The b10-cmdctl daemon encountered an uncaught exception and +will now shut down. This is indicative of a programming error and +should not happen under normal circumstances. The exception message +is printed. + + + + +CMDCTL_USER_DATABASE_READ_ERROR failed to read user database file %1: %2 + +The b10-cmdctl daemon was unable to read the user database file. The +file may be unreadable for the daemon, or it may be corrupted. In the +latter case, it can be recreated with b10-cmdctl-usermgr. The specific +error is printed in the log message. @@ -148,32 +1561,18 @@ The message itself is ignored by this module. 
CONFIG_CCSESSION_MSG_INTERNAL error handling CC session message: %1 -There was an internal problem handling an incoming message on the -command and control channel. An unexpected exception was thrown. This -most likely points to an internal inconsistency in the module code. The -exception message is appended to the log error, and the module will -continue to run, but will not send back an answer. +There was an internal problem handling an incoming message on the command +and control channel. An unexpected exception was thrown, details of +which are appended to the message. The module will continue to run, +but will not send back an answer. + +The most likely cause of this error is a programming error. Please raise +a bug report. - -CONFIG_FOPEN_ERR error opening %1: %2 - -There was an error opening the given file. - - - - -CONFIG_JSON_PARSE JSON parse error in %1: %2 - -There was a parse error in the JSON file. The given file does not appear -to be in valid JSON format. Please verify that the filename is correct -and that the contents are valid JSON. - - - - -CONFIG_MANAGER_CONFIG error getting configuration from cfgmgr: %1 + +CONFIG_GET_FAIL error getting configuration from cfgmgr: %1 The configuration manager returned an error when this module requested the configuration. The full error message answer from the configuration @@ -183,30 +1582,107 @@ running configuration manager. - -CONFIG_MANAGER_MOD_SPEC module specification not accepted by cfgmgr: %1 + +CONFIG_GET_FAILED error getting configuration from cfgmgr: %1 -The module specification file for this module was rejected by the -configuration manager. The full error message answer from the -configuration manager is appended to the log error. The most likely -cause is that the module is of a different (specification file) version -than the running configuration manager. +The configuration manager returned an error response when the module +requested its configuration. The full error message answer from the +configuration manager is appended to the log error. - -CONFIG_MODULE_SPEC module specification error in %1: %2 + +CONFIG_JSON_PARSE JSON parse error in %1: %2 -The given file does not appear to be a valid specification file. Please -verify that the filename is correct and that its contents are a valid -BIND10 module specification. +There was an error parsing the JSON file. The given file does not appear +to be in valid JSON format. Please verify that the filename is correct +and that the contents are valid JSON. + + + + +CONFIG_LOG_CONFIG_ERRORS error(s) in logging configuration: %1 + +There was a logging configuration update, but the internal validator +for logging configuration found that it contained errors. The errors +are shown, and the update is ignored. + + + + +CONFIG_LOG_EXPLICIT will use logging configuration for explicitly-named logger %1 + +This is a debug message. When processing the "loggers" part of the +configuration file, the configuration library found an entry for the named +logger that matches the logger specification for the program. The logging +configuration for the program will be updated with the information. + + + + +CONFIG_LOG_IGNORE_EXPLICIT ignoring logging configuration for explicitly-named logger %1 + +This is a debug message. When processing the "loggers" part of the +configuration file, the configuration library found an entry for the +named logger. As this does not match the logger specification for the +program, it has been ignored. 
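+
+The selection of explicitly-named logger entries described by
+CONFIG_LOG_EXPLICIT and CONFIG_LOG_IGNORE_EXPLICIT can be sketched as
+follows. The configuration layout and function name are hypothetical;
+only the match-or-ignore behaviour is taken from the descriptions above.
+
+    # Illustrative sketch only, not the BIND 10 configuration library.
+    def select_explicit_loggers(program_logger, logger_config):
+        selected = []
+        for entry in logger_config:
+            name = entry["name"]
+            if "*" in name:
+                continue              # wildcard entries are handled separately
+            if name == program_logger or name.startswith(program_logger + "."):
+                selected.append(entry)    # CONFIG_LOG_EXPLICIT
+            # non-matching names would be CONFIG_LOG_IGNORE_EXPLICIT
+        return selected
+
+    config = [{"name": "resolver", "severity": "DEBUG"},
+              {"name": "auth", "severity": "INFO"}]
+    print(select_explicit_loggers("resolver", config))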
+ + + + +CONFIG_LOG_IGNORE_WILD ignoring logging configuration for wildcard logger %1 + +This is a debug message. When processing the "loggers" part of the +configuration file, the configuration library found the named wildcard +entry (one containing the "*" character) that matched a logger already +matched by an explicitly named entry. The configuration is ignored. + + + + +CONFIG_LOG_WILD_MATCH will use logging configuration for wildcard logger %1 + +This is a debug message. When processing the "loggers" part of +the configuration file, the configuration library found the named +wildcard entry (one containing the "*" character) that matches a logger +specification in the program. The logging configuration for the program +will be updated with the information. + + + + +CONFIG_MOD_SPEC_FORMAT module specification error in %1: %2 + +The given file does not appear to be a valid specification file: details +are included in the message. Please verify that the filename is correct +and that its contents are a valid BIND10 module specification. + + + + +CONFIG_MOD_SPEC_REJECT module specification rejected by cfgmgr: %1 + +The specification file for this module was rejected by the configuration +manager. The full error message answer from the configuration manager is +appended to the log error. The most likely cause is that the module is of +a different (specification file) version than the running configuration +manager. + + + + +CONFIG_OPEN_FAIL error opening %1: %2 + +There was an error opening the given file. The reason for the failure +is included in the message. DATASRC_CACHE_CREATE creating the hotspot cache -Debug information that the hotspot cache was created at startup. +This is a debug message issued during startup when the hotspot cache +is created. @@ -218,39 +1694,37 @@ Debug information. The hotspot cache is being destroyed. -DATASRC_CACHE_DISABLE disabling the cache +DATASRC_CACHE_DISABLE disabling the hotspot cache -The hotspot cache is disabled from now on. It is not going to store -information or return anything. +A debug message issued when the hotspot cache is disabled. -DATASRC_CACHE_ENABLE enabling the cache +DATASRC_CACHE_ENABLE enabling the hotspot cache -The hotspot cache is enabled from now on. +A debug message issued when the hotspot cache is enabled. -DATASRC_CACHE_EXPIRED the item '%1' is expired +DATASRC_CACHE_EXPIRED item '%1' in the hotspot cache has expired -Debug information. There was an attempt to look up an item in the hotspot -cache. And the item was actually there, but it was too old, so it was removed -instead and nothing is reported (the external behaviour is the same as with -CACHE_NOT_FOUND). +A debug message issued when a hotspot cache lookup located the item but it +had expired. The item was removed and the program proceeded as if the item +had not been found. DATASRC_CACHE_FOUND the item '%1' was found -Debug information. An item was successfully looked up in the hotspot cache. +Debug information. An item was successfully located in the hotspot cache. -DATASRC_CACHE_FULL cache is full, dropping oldest +DATASRC_CACHE_FULL hotspot cache is full, dropping oldest Debug information. After inserting an item into the hotspot cache, the maximum number of items was exceeded, so the least recently used item will @@ -259,39 +1733,39 @@ be dropped. This should be directly followed by CACHE_REMOVE. -DATASRC_CACHE_INSERT inserting item '%1' into the cache +DATASRC_CACHE_INSERT inserting item '%1' into the hotspot cache -Debug information. 
It means a new item is being inserted into the hotspot +A debug message indicating that a new item is being inserted into the hotspot cache. -DATASRC_CACHE_NOT_FOUND the item '%1' was not found +DATASRC_CACHE_NOT_FOUND the item '%1' was not found in the hotspot cache -Debug information. It was attempted to look up an item in the hotspot cache, -but it is not there. +A debug message issued when hotspot cache was searched for the specified +item but it was not found. -DATASRC_CACHE_OLD_FOUND older instance of cache item found, replacing +DATASRC_CACHE_OLD_FOUND older instance of hotspot cache item '%1' found, replacing Debug information. While inserting an item into the hotspot cache, an older -instance of an item with the same name was found. The old instance will be -removed. This should be directly followed by CACHE_REMOVE. +instance of an item with the same name was found; the old instance will be +removed. This will be directly followed by CACHE_REMOVE. -DATASRC_CACHE_REMOVE removing '%1' from the cache +DATASRC_CACHE_REMOVE removing '%1' from the hotspot cache Debug information. An item is being removed from the hotspot cache. -DATASRC_CACHE_SLOTS setting the cache size to '%1', dropping '%2' items +DATASRC_CACHE_SLOTS setting the hotspot cache size to '%1', dropping '%2' items The maximum allowed number of items of the hotspot cache is set to the given number. If there are too many, some of them will be dropped. The size of 0 @@ -299,11 +1773,109 @@ means no limit. + +DATASRC_DATABASE_FIND_ERROR error retrieving data from datasource %1: %2 + +This was an internal error while reading data from a datasource. This can either +mean the specific data source implementation is not behaving correctly, or the +data it provides is invalid. The current search is aborted. +The error message contains specific information about the error. + + + + +DATASRC_DATABASE_FIND_RECORDS looking in datasource %1 for record %2/%3 + +Debug information. The database data source is looking up records with the given +name and type in the database. + + + + +DATASRC_DATABASE_FIND_TTL_MISMATCH TTL values differ in %1 for elements of %2/%3/%4, setting to %5 + +The datasource backend provided resource records for the given RRset with +different TTL values. The TTL of the RRSET is set to the lowest value, which +is printed in the log message. + + + + +DATASRC_DATABASE_FIND_UNCAUGHT_ERROR uncaught general error retrieving data from datasource %1: %2 + +There was an uncaught general exception while reading data from a datasource. +This most likely points to a logic error in the code, and can be considered a +bug. The current search is aborted. Specific information about the exception is +printed in this error message. + + + + +DATASRC_DATABASE_FIND_UNCAUGHT_ISC_ERROR uncaught error retrieving data from datasource %1: %2 + +There was an uncaught ISC exception while reading data from a datasource. This +most likely points to a logic error in the code, and can be considered a bug. +The current search is aborted. Specific information about the exception is +printed in this error message. + + + + +DATASRC_DATABASE_FOUND_DELEGATION Found delegation at %2 in %1 + +When searching for a domain, the program met a delegation to a different zone +at the given domain name. It will return that one instead. 
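+
+The TTL handling described by DATASRC_DATABASE_FIND_TTL_MISMATCH (the
+RRset TTL is set to the lowest TTL found among its records) can be
+shown with a short Python sketch. The record layout is a hypothetical
+stand-in for whatever the database backend returns.
+
+    # Illustrative sketch only, not the BIND 10 database data source.
+    def combine_rrset(records):
+        """records: list of (ttl, rdata) tuples for one name/class/type."""
+        ttls = [ttl for ttl, _ in records]
+        rrset_ttl = min(ttls)                # rule from the message above
+        mismatch = len(set(ttls)) > 1        # would trigger the log message
+        return rrset_ttl, [rdata for _, rdata in records], mismatch
+
+    print(combine_rrset([(3600, "192.0.2.1"), (300, "192.0.2.2")]))
+    # (300, ['192.0.2.1', '192.0.2.2'], True)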
+ + + + +DATASRC_DATABASE_FOUND_DELEGATION_EXACT Found delegation at %2 (exact match) in %1 + +The program found the domain requested, but it is a delegation point to a +different zone, therefore it is not authoritative for this domain name. +It will return the NS record instead. + + + + +DATASRC_DATABASE_FOUND_DNAME Found DNAME at %2 in %1 + +When searching for a domain, the program met a DNAME redirection to a different +place in the domain space at the given domain name. It will return that one +instead. + + + + +DATASRC_DATABASE_FOUND_NXDOMAIN search in datasource %1 resulted in NXDOMAIN for %2/%3/%4 + +The data returned by the database backend did not contain any data for the given +domain name, class and type. + + + + +DATASRC_DATABASE_FOUND_NXRRSET search in datasource %1 resulted in NXRRSET for %2/%3/%4 + +The data returned by the database backend contained data for the given domain +name and class, but not for the given type. + + + + +DATASRC_DATABASE_FOUND_RRSET search in datasource %1 resulted in RRset %2 + +The data returned by the database backend contained data for the given domain +name, and it either matches the type or has a relevant type. The RRset that is +returned is printed. + + + DATASRC_DO_QUERY handling query for '%1/%2' -Debug information. We're processing some internal query for given name and -type. +A debug message indicating that a query for the given name and RR type is being +processed. @@ -317,8 +1889,9 @@ Debug information. An RRset is being added to the in-memory data source. DATASRC_MEM_ADD_WILDCARD adding wildcards for '%1' -Debug information. Some special marks above each * in wildcard name are needed. -They are being added now for this name. +This is a debug message issued during the processing of a wildcard +name. The internal domain name tree is scanned and some nodes are +specially marked to allow the wildcard lookup to succeed. @@ -349,7 +1922,7 @@ returning the CNAME instead. DATASRC_MEM_CNAME_COEXIST can't add data to CNAME in domain '%1' This is the same problem as in MEM_CNAME_TO_NONEMPTY, but it happened the -other way around -- adding some outher data to CNAME. +other way around -- adding some other data to CNAME. @@ -401,11 +1974,11 @@ Debug information. A DNAME was found instead of the requested information. -DATASRC_MEM_DNAME_NS dNAME and NS can't coexist in non-apex domain '%1' +DATASRC_MEM_DNAME_NS DNAME and NS can't coexist in non-apex domain '%1' -It was requested for DNAME and NS records to be put into the same domain -which is not the apex (the top of the zone). This is forbidden by RFC -2672, section 3. This indicates a problem with provided data. +A request was made for DNAME and NS records to be put into the same +domain which is not the apex (the top of the zone). This is forbidden +by RFC 2672 (section 3) and indicates a problem with provided data. @@ -457,8 +2030,8 @@ Debug information. The content of master file is being loaded into the memory. - -DATASRC_MEM_NOTFOUND requested domain '%1' not found + +DATASRC_MEM_NOT_FOUND requested domain '%1' not found Debug information. The requested domain does not exist. @@ -544,7 +2117,7 @@ behaviour is specified by RFC 1034, section 4.3.3 -DATASRC_MEM_WILDCARD_DNAME dNAME record in wildcard domain '%1' +DATASRC_MEM_WILDCARD_DNAME DNAME record in wildcard domain '%1' The software refuses to load DNAME records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should @@ -554,7 +2127,7 @@ different tools. 
-DATASRC_MEM_WILDCARD_NS nS record in wildcard domain '%1' +DATASRC_MEM_WILDCARD_NS NS record in wildcard domain '%1' The software refuses to load NS records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should @@ -566,15 +2139,15 @@ different tools. DATASRC_META_ADD adding a data source into meta data source -Debug information. Yet another data source is being added into the meta data -source. (probably at startup or reconfiguration) +This is a debug message issued during startup or reconfiguration. +Another data source is being added into the meta data source. DATASRC_META_ADD_CLASS_MISMATCH mismatch between classes '%1' and '%2' -It was attempted to add a data source into a meta data source. But their +It was attempted to add a data source into a meta data source, but their classes do not match. @@ -634,7 +2207,7 @@ information for it. -DATASRC_QUERY_CACHED data for %1/%2 found in cache +DATASRC_QUERY_CACHED data for %1/%2 found in hotspot cache Debug information. The requested data were found in the hotspot cache, so no query is sent to the real data source. @@ -642,7 +2215,7 @@ no query is sent to the real data source. -DATASRC_QUERY_CHECK_CACHE checking cache for '%1/%2' +DATASRC_QUERY_CHECK_CACHE checking hotspot cache for '%1/%2' Debug information. While processing a query, lookup to the hotspot cache is being made. @@ -666,12 +2239,11 @@ way down to the given domain. -DATASRC_QUERY_EMPTY_CNAME cNAME at '%1' is empty +DATASRC_QUERY_EMPTY_CNAME CNAME at '%1' is empty -There was an CNAME and it was being followed. But it contains no records, -so there's nowhere to go. There will be no answer. This indicates a problem -with supplied data. -We tried to follow +A CNAME chain was being followed and an entry was found that pointed +to a domain name that had no RRsets associated with it. As a result, +the query cannot be answered. This indicates a problem with supplied data. @@ -687,15 +2259,15 @@ DNAME is empty (it has no records). This indicates problem with supplied data. DATASRC_QUERY_FAIL query failed Some subtask of query processing failed. The reason should have been reported -already. We are returning SERVFAIL. +already and a SERVFAIL will be returned to the querying system. DATASRC_QUERY_FOLLOW_CNAME following CNAME at '%1' -Debug information. The domain is a CNAME (or a DNAME and we created a CNAME -for it already), so it's being followed. +Debug information. The domain is a CNAME (or a DNAME and a CNAME for it +has already been created) and the search is following this chain. @@ -744,14 +2316,14 @@ Debug information. The last DO_QUERY is an auth query. DATASRC_QUERY_IS_GLUE glue query (%1/%2) -Debug information. The last DO_QUERY is query for glue addresses. +Debug information. The last DO_QUERY is a query for glue addresses. DATASRC_QUERY_IS_NOGLUE query for non-glue addresses (%1/%2) -Debug information. The last DO_QUERY is query for addresses that are not +Debug information. The last DO_QUERY is a query for addresses that are not glue. @@ -759,7 +2331,7 @@ glue. DATASRC_QUERY_IS_REF query for referral (%1/%2) -Debug information. The last DO_QUERY is query for referral information. +Debug information. The last DO_QUERY is a query for referral information. @@ -806,7 +2378,7 @@ error already. -DATASRC_QUERY_NO_CACHE_ANY_AUTH ignoring cache for ANY query (%1/%2 in %3 class) +DATASRC_QUERY_NO_CACHE_ANY_AUTH ignoring hotspot cache for ANY query (%1/%2 in %3 class) Debug information. 
The hotspot cache is ignored for authoritative ANY queries for consistency reasons. @@ -814,7 +2386,7 @@ for consistency reasons. -DATASRC_QUERY_NO_CACHE_ANY_SIMPLE ignoring cache for ANY query (%1/%2 in %3 class) +DATASRC_QUERY_NO_CACHE_ANY_SIMPLE ignoring hotspot cache for ANY query (%1/%2 in %3 class) Debug information. The hotspot cache is ignored for ANY queries for consistency reasons. @@ -852,8 +2424,8 @@ Debug information. A sure query is being processed now. - -DATASRC_QUERY_PROVENX_FAIL unable to prove nonexistence of '%1' + +DATASRC_QUERY_PROVE_NX_FAIL unable to prove nonexistence of '%1' The user wants DNSSEC and we discovered the entity doesn't exist (either domain or the record). But there was an error getting NSEC/NSEC3 record @@ -890,9 +2462,9 @@ error already. DATASRC_QUERY_SYNTH_CNAME synthesizing CNAME from DNAME on '%1' -Debug information. While answering a query, a DNAME was met. The DNAME itself -will be returned, but along with it a CNAME for clients which don't understand -DNAMEs will be synthesized. +This is a debug message. While answering a query, a DNAME was encountered. The +DNAME itself will be returned, along with a synthesized CNAME for clients that +do not understand the DNAME RR. @@ -905,7 +2477,7 @@ already. The code is 1 for error, 2 for not implemented. -DATASRC_QUERY_TOO_MANY_CNAMES cNAME chain limit exceeded at '%1' +DATASRC_QUERY_TOO_MANY_CNAMES CNAME chain limit exceeded at '%1' A CNAME led to another CNAME and it led to another, and so on. After 16 CNAMEs, the software gave up. Long CNAME chains are discouraged, and this @@ -938,8 +2510,8 @@ exact kind was hopefully already reported. - -DATASRC_QUERY_WILDCARD_PROVENX_FAIL unable to prove nonexistence of '%1' (%2) + +DATASRC_QUERY_WILDCARD_PROVE_NX_FAIL unable to prove nonexistence of '%1' (%2) While processing a wildcard, it wasn't possible to prove nonexistence of the given domain or record. The code is 1 for error and 2 for not implemented. @@ -961,32 +2533,53 @@ Debug information. The SQLite data source is closing the database file. + +DATASRC_SQLITE_CONNCLOSE Closing sqlite database + +The database file is no longer needed and is being closed. + + + + +DATASRC_SQLITE_CONNOPEN Opening sqlite database file '%1' + +The database file is being opened so it can start providing data. + + + -DATASRC_SQLITE_CREATE sQLite data source created +DATASRC_SQLITE_CREATE SQLite data source created Debug information. An instance of SQLite data source is being created. -DATASRC_SQLITE_DESTROY sQLite data source destroyed +DATASRC_SQLITE_DESTROY SQLite data source destroyed Debug information. An instance of SQLite data source is being destroyed. + +DATASRC_SQLITE_DROPCONN SQLite3Database is being deinitialized + +The object around a database connection is being destroyed. + + + DATASRC_SQLITE_ENCLOSURE looking for zone containing '%1' -Debug information. The SQLite data source is trying to identify, which zone +Debug information. The SQLite data source is trying to identify which zone should hold this domain. - -DATASRC_SQLITE_ENCLOSURE_NOTFOUND no zone contains it + +DATASRC_SQLITE_ENCLOSURE_NOT_FOUND no zone contains '%1' -Debug information. The last SQLITE_ENCLOSURE query was unsuccessful, there's +Debug information. The last SQLITE_ENCLOSURE query was unsuccessful; there's no such zone in our data. @@ -1050,7 +2643,7 @@ a referral and where it goes. DATASRC_SQLITE_FINDREF_BAD_CLASS class mismatch looking for referral ('%1' and '%2') -The SQLite data source was trying to identify, if there's a referral. 
But +The SQLite data source was trying to identify if there's a referral. But it contains different class than the query was for. @@ -1079,6 +2672,13 @@ But it doesn't contain that zone. + +DATASRC_SQLITE_NEWCONN SQLite3Database is being initialized + +A wrapper object to hold database connection is being initialized. + + + DATASRC_SQLITE_OPEN opening SQLite database '%1' @@ -1090,15 +2690,22 @@ the provided file. DATASRC_SQLITE_PREVIOUS looking for name previous to '%1' -Debug information. We're trying to look up name preceding the supplied one. +This is a debug message. The name given was not found, so the program +is searching for the next name higher up the hierarchy (e.g. if +www.example.com were queried for and not found, the software searches +for the "previous" name, example.com). DATASRC_SQLITE_PREVIOUS_NO_ZONE no zone containing '%1' -The SQLite data source tried to identify name preceding this one. But this -one is not contained in any zone in the data source. +The name given was not found, so the program is searching for the next +name higher up the hierarchy (e.g. if www.example.com were queried +for and not found, the software searches for the "previous" name, +example.com). However, this name is not contained in any zone in the +data source. This is an error since it indicates a problem in the earlier +processing of the query. @@ -1111,11 +2718,11 @@ no data, but it will be ready for use. - -DATASRC_STATIC_BAD_CLASS static data source can handle CH only + +DATASRC_STATIC_CLASS_NOT_CH static data source can handle CH class only -For some reason, someone asked the static data source a query that is not in -the CH class. +An error message indicating that a query requesting a RR for a class other +that CH was sent to the static data source (which only handles CH queries). @@ -1143,294 +2750,436 @@ generated. - -LOGIMPL_ABOVEDBGMAX debug level of %1 is too high and will be set to the maximum of %2 + +LOGIMPL_ABOVE_MAX_DEBUG debug level of %1 is too high and will be set to the maximum of %2 -A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is above the maximum allowed value and has -been reduced to that value. +A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is above the maximum allowed value and has +been reduced to that value. The appearance of this message may indicate +a programming error - please submit a bug report. - -LOGIMPL_BADDEBUG debug string is '%1': must be of the form DEBUGn + +LOGIMPL_BAD_DEBUG_STRING debug string '%1' has invalid format -The string indicating the extended logging level (used by the underlying -logger implementation code) is not of the stated form. In particular, -it starts DEBUG but does not end with an integer. +A message from the interface to the underlying logger implementation +reporting that an internally-created string used to set the debug level +is not of the correct format (it should be of the form DEBUGn, where n +is an integer, e.g. DEBUG22). The appearance of this message indicates +a programming error - please submit a bug report. 
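+
+The handling of the internally-created DEBUGn strings described by the
+LOGIMPL_* messages above can be sketched as a parse-and-clamp step. The
+bounds used here (0 and 99) are assumptions for illustration, not
+authoritative values.
+
+    # Illustrative sketch only, not the BIND 10 logger implementation.
+    import re
+
+    MIN_DEBUG, MAX_DEBUG = 0, 99     # assumed bounds
+
+    def parse_debug_string(text):
+        match = re.fullmatch(r"DEBUG(\d+)", text)
+        if match is None:
+            # would be reported as LOGIMPL_BAD_DEBUG_STRING
+            raise ValueError("bad debug string: %s" % text)
+        level = int(match.group(1))
+        # clamping is reported as LOGIMPL_ABOVE_MAX_DEBUG or LOGIMPL_BELOW_MIN_DEBUG
+        return max(MIN_DEBUG, min(MAX_DEBUG, level))
+
+    print(parse_debug_string("DEBUG22"))   # 22
+    print(parse_debug_string("DEBUG150"))  # 99 (clamped to the maximum)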
- -LOGIMPL_BELOWDBGMIN debug level of %1 is too low and will be set to the minimum of %2 + +LOGIMPL_BELOW_MIN_DEBUG debug level of %1 is too low and will be set to the minimum of %2 -A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is below the minimum allowed value and has -been increased to that value. +A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is below the minimum allowed value and has +been increased to that value. The appearance of this message may indicate +a programming error - please submit a bug report. - -MSG_BADDESTINATION unrecognized log destination: %1 + +LOG_BAD_DESTINATION unrecognized log destination: %1 A logger destination value was given that was not recognized. The destination should be one of "console", "file", or "syslog". - -MSG_BADSEVERITY unrecognized log severity: %1 + +LOG_BAD_SEVERITY unrecognized log severity: %1 A logger severity value was given that was not recognized. The severity -should be one of "DEBUG", "INFO", "WARN", "ERROR", or "FATAL". +should be one of "DEBUG", "INFO", "WARN", "ERROR", "FATAL" or "NONE". - -MSG_BADSTREAM bad log console output stream: %1 + +LOG_BAD_STREAM bad log console output stream: %1 -A log console output stream was given that was not recognized. The -output stream should be one of "stdout", or "stderr" +Logging has been configured so that output is written to the terminal +(console) but the stream on which it is to be written is not recognised. +Allowed values are "stdout" and "stderr". - -MSG_DUPLNS line %1: duplicate $NAMESPACE directive found + +LOG_DUPLICATE_MESSAGE_ID duplicate message ID (%1) in compiled code -When reading a message file, more than one $NAMESPACE directive was found. In -this version of the code, such a condition is regarded as an error and the -read will be abandoned. +During start-up, BIND 10 detected that the given message identification +had been defined multiple times in the BIND 10 code. This indicates a +programming error; please submit a bug report. - -MSG_DUPMSGID duplicate message ID (%1) in compiled code + +LOG_DUPLICATE_NAMESPACE line %1: duplicate $NAMESPACE directive found -Indicative of a programming error, when it started up, BIND10 detected that -the given message ID had been registered by one or more modules. (All message -IDs should be unique throughout BIND10.) This has no impact on the operation -of the server other that erroneous messages may be logged. (When BIND10 loads -the message IDs (and their associated text), if a duplicate ID is found it is -discarded. However, when the module that supplied the duplicate ID logs that -particular message, the text supplied by the module that added the original -ID will be output - something that may bear no relation to the condition being -logged. +When reading a message file, more than one $NAMESPACE directive was found. +(This directive is used to set a C++ namespace when generating header +files during software development.) Such a condition is regarded as an +error and the read will be abandoned. - -MSG_IDNOTFND could not replace message text for '%1': no such message + +LOG_INPUT_OPEN_FAIL unable to open message file %1 for input: %2 + +The program was not able to open the specified input message file for +the reason given. 
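+
+The per-line checks described by LOG_NO_MESSAGE_ID, LOG_NO_MESSAGE_TEXT
+and LOG_INVALID_MESSAGE_ID can be sketched for a single "%" definition
+line. This is not the BIND 10 message compiler; the function simply
+mirrors the rules stated in those descriptions.
+
+    # Illustrative sketch only, not the BIND 10 message compiler.
+    import re
+
+    def check_definition_line(line):
+        if not line.startswith("%"):
+            return "not a definition line"
+        parts = line[1:].split(None, 1)
+        if not parts:
+            return "LOG_NO_MESSAGE_ID"       # just "%" and nothing else
+        if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", parts[0]):
+            return "LOG_INVALID_MESSAGE_ID"
+        if len(parts) == 1 or not parts[1].strip():
+            return "LOG_NO_MESSAGE_TEXT"
+        return "ok"
+
+    print(check_definition_line("% AUTH_SERVER_STARTED server started"))  # ok
+    print(check_definition_line("% 1BADID some text"))  # LOG_INVALID_MESSAGE_ID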
+ + + + +LOG_INVALID_MESSAGE_ID line %1: invalid message identification '%2' + +An invalid message identification (ID) has been found during the read of +a message file. Message IDs should comprise only alphanumeric characters +and the underscore, and should not start with a digit. + + + + +LOG_NAMESPACE_EXTRA_ARGS line %1: $NAMESPACE directive has too many arguments + +The $NAMESPACE directive in a message file takes a single argument, a +namespace in which all the generated symbol names are placed. This error +is generated when the compiler finds a $NAMESPACE directive with more +than one argument. + + + + +LOG_NAMESPACE_INVALID_ARG line %1: $NAMESPACE directive has an invalid argument ('%2') + +The $NAMESPACE argument in a message file should be a valid C++ namespace. +This message is output if the simple check on the syntax of the string +carried out by the reader fails. + + + + +LOG_NAMESPACE_NO_ARGS line %1: no arguments were given to the $NAMESPACE directive + +The $NAMESPACE directive in a message file takes a single argument, +a C++ namespace in which all the generated symbol names are placed. +This error is generated when the compiler finds a $NAMESPACE directive +with no arguments. + + + + +LOG_NO_MESSAGE_ID line %1: message definition line found without a message ID + +Within a message file, message are defined by lines starting with a "%". +The rest of the line should comprise the message ID and text describing +the message. This error indicates the message compiler found a line in +the message file comprising just the "%" and nothing else. + + + + +LOG_NO_MESSAGE_TEXT line %1: line found containing a message ID ('%2') and no text + +Within a message file, message are defined by lines starting with a "%". +The rest of the line should comprise the message ID and text describing +the message. This error indicates the message compiler found a line +in the message file comprising just the "%" and message identification, +but no text. + + + + +LOG_NO_SUCH_MESSAGE could not replace message text for '%1': no such message During start-up a local message file was read. A line with the listed -message identification was found in the file, but the identification is not -one contained in the compiled-in message dictionary. Either the message -identification has been mis-spelled in the file, or the local file was used -for an earlier version of the software and the message with that -identification has been removed. +message identification was found in the file, but the identification is +not one contained in the compiled-in message dictionary. This message +may appear a number of times in the file, once for every such unknown +message identification. -This message may appear a number of times in the file, once for every such -unknown message identification. +There may be several reasons why this message may appear: + +- The message ID has been mis-spelled in the local message file. + +- The program outputting the message may not use that particular message +(e.g. it originates in a module not used by the program.) + +- The local file was written for an earlier version of the BIND 10 software +and the later version no longer generates that message. + +Whatever the reason, there is no impact on the operation of BIND 10. 
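+
+The local message file handling described by LOG_READING_LOCAL_FILE and
+LOG_NO_SUCH_MESSAGE (replace the text of known IDs, report unknown ones)
+can be sketched with a dictionary update. The data layout is a
+hypothetical stand-in for the compiled-in message dictionary.
+
+    # Illustrative sketch only, not the BIND 10 logging library.
+    def apply_local_messages(dictionary, overrides):
+        unknown = []
+        for msgid, text in overrides.items():
+            if msgid in dictionary:
+                dictionary[msgid] = text     # text replaced, ID unchanged
+            else:
+                unknown.append(msgid)        # reported as LOG_NO_SUCH_MESSAGE
+        return unknown
+
+    messages = {"AUTH_SERVER_STARTED": "server started"}
+    print(apply_local_messages(messages,
+                               {"AUTH_SERVER_STARTED": "auth server is up",
+                                "NO_SUCH_ID": "x"}))
+    # ['NO_SUCH_ID']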
-
-MSG_INVMSGID line %1: invalid message identification '%2'
+
+LOG_OPEN_OUTPUT_FAIL unable to open %1 for output: %2

-The concatenation of the prefix and the message identification is used as
-a symbol in the C++ module; as such it may only contain
+Originating within the logging code, the program was not able to open
+the specified output file for the reason given.

-
-MSG_NOMSGID line %1: message definition line found without a message ID
+
+LOG_PREFIX_EXTRA_ARGS line %1: $PREFIX directive has too many arguments

-Message definition lines are lines starting with a "%". The rest of the line
-should comprise the message ID and text describing the message. This error
-indicates the message compiler found a line in the message file comprising
-just the "%" and nothing else.
+Within a message file, the $PREFIX directive takes a single argument,
+a prefix to be added to the symbol names when a C++ file is created.
+This error is generated when the compiler finds a $PREFIX directive with
+more than one argument.
+
+Note: the $PREFIX directive is deprecated and will be removed in a future
+version of BIND 10.

-
-MSG_NOMSGTXT line %1: line found containing a message ID ('%2') and no text
+
+LOG_PREFIX_INVALID_ARG line %1: $PREFIX directive has an invalid argument ('%2')

-Message definition lines are lines starting with a "%". The rest of the line
-should comprise the message ID and text describing the message. This error
-is generated when a line is found in the message file that contains the
-leading "%" and the message identification but no text.
+Within a message file, the $PREFIX directive takes a single argument,
+a prefix to be added to the symbol names when a C++ file is created.
+As such, it must adhere to restrictions on C++ symbol names (e.g. may
+only contain alphanumeric characters or underscores, and may not start
+with a digit). A $PREFIX directive was found with an argument (given
+in the message) that violates those restrictions.
+
+Note: the $PREFIX directive is deprecated and will be removed in a future
+version of BIND 10.

-
-MSG_NSEXTRARG line %1: $NAMESPACE directive has too many arguments
+
+LOG_READING_LOCAL_FILE reading local message file %1

-The $NAMESPACE directive takes a single argument, a namespace in which all the
-generated symbol names are placed. This error is generated when the
-compiler finds a $NAMESPACE directive with more than one argument.
+This is an informational message output by BIND 10 when it starts to read
+a local message file. (A local message file may replace the text of
+one or more messages; the ID of the message will not be changed though.)

-
-MSG_NSINVARG line %1: $NAMESPACE directive has an invalid argument ('%2')
-
-The $NAMESPACE argument should be a valid C++ namespace. The reader does a
-cursory check on its validity, checking that the characters in the namespace
-are correct. The error is generated when the reader finds an invalid
-character. (Valid are alphanumeric characters, underscores and colons.)
-
-
-
-
-MSG_NSNOARG line %1: no arguments were given to the $NAMESPACE directive
-
-The $NAMESPACE directive takes a single argument, a namespace in which all the
-generated symbol names are placed. This error is generated when the
-compiler finds a $NAMESPACE directive with no arguments.
-
-
-
-
-MSG_OPENIN unable to open message file %1 for input: %2
-
-The program was not able to open the specified input message file for the
-reason given.
- - - - -MSG_OPENOUT unable to open %1 for output: %2 - -The program was not able to open the specified output file for the reason -given. - - - - -MSG_PRFEXTRARG line %1: $PREFIX directive has too many arguments - -The $PREFIX directive takes a single argument, a prefix to be added to the -symbol names when a C++ .h file is created. This error is generated when the -compiler finds a $PREFIX directive with more than one argument. - - - - -MSG_PRFINVARG line %1: $PREFIX directive has an invalid argument ('%2') - -The $PREFIX argument is used in a symbol name in a C++ header file. As such, -it must adhere to restrictions on C++ symbol names (e.g. may only contain -alphanumeric characters or underscores, and may nor start with a digit). -A $PREFIX directive was found with an argument (given in the message) that -violates those restictions. - - - - -MSG_RDLOCMES reading local message file %1 - -This is an informational message output by BIND10 when it starts to read a -local message file. (A local message file may replace the text of one of more -messages; the ID of the message will not be changed though.) - - - - -MSG_READERR error reading from message file %1: %2 + +LOG_READ_ERROR error reading from message file %1: %2 The specified error was encountered reading from the named message file. - -MSG_UNRECDIR line %1: unrecognised directive '%2' + +LOG_UNRECOGNISED_DIRECTIVE line %1: unrecognised directive '%2' -A line starting with a dollar symbol was found, but the first word on the line -(shown in the message) was not a recognised message compiler directive. +Within a message file, a line starting with a dollar symbol was found +(indicating the presence of a directive) but the first word on the line +(shown in the message) was not recognised. - -MSG_WRITERR error writing to %1: %2 + +LOG_WRITE_ERROR error writing to %1: %2 -The specified error was encountered by the message compiler when writing to -the named output file. +The specified error was encountered by the message compiler when writing +to the named output file. - -NSAS_INVRESPSTR queried for %1 but got invalid response + +NOTIFY_OUT_INVALID_ADDRESS invalid address %1#%2: %3 -This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for a RR for the -specified nameserver but received an invalid response. Either the success -function was called without a DNS message or the message was invalid on some -way. (In the latter case, the error should have been picked up elsewhere in -the processing logic, hence the raising of the error here.) +The notify_out library tried to send a notify message to the given +address, but it appears to be an invalid address. The configuration +for secondary nameservers might contain a typographic error, or a +different BIND 10 module has forgotten to validate its data before +sending this module a notify command. As such, this should normally +not happen, and points to an oversight in a different module. - -NSAS_INVRESPTC queried for %1 RR of type/class %2/%3, received response %4/%5 + +NOTIFY_OUT_REPLY_BAD_OPCODE bad opcode in notify reply from %1#%2: %3 -This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for the given RR -type and class, but instead received an answer with the given type and class. +The notify_out library sent a notify message to the nameserver at +the given address, but the response did not have the opcode set to +NOTIFY. 
The opcode in the response is printed. Since there was a
+response, no more notifies will be sent to this server for this
+notification event.

-
-NSAS_LOOKUPCANCEL lookup for zone %1 has been cancelled
+
+NOTIFY_OUT_REPLY_BAD_QID bad QID in notify reply from %1#%2: got %3, should be %4

-A debug message, this is output when a NSAS (nameserver address store -
-part of the resolver) lookup for a zone has been cancelled.
+The notify_out library sent a notify message to the nameserver at
+the given address, but the query id in the response does not match
+the one we sent. Since there was a response, no more notifies will
+be sent to this server for this notification event.

-
-NSAS_LOOKUPZONE searching NSAS for nameservers for zone %1
+
+NOTIFY_OUT_REPLY_BAD_QUERY_NAME bad query name in notify reply from %1#%2: got %3, should be %4

-A debug message, this is output when a call is made to the nameserver address
-store (part of the resolver) to obtain the nameservers for the specified zone.
+The notify_out library sent a notify message to the nameserver at
+the given address, but the query name in the response does not match
+the one we sent. Since there was a response, no more notifies will
+be sent to this server for this notification event.

-
-NSAS_NSADDR asking resolver to obtain A and AAAA records for %1
+
+NOTIFY_OUT_REPLY_QR_NOT_SET QR flags set to 0 in reply to notify from %1#%2

-A debug message, the NSAS (nameserver address store - part of the resolver) is
-making a callback into the resolver to retrieve the address records for the
-specified nameserver.
+The notify_out library sent a notify message to the nameserver at the
+given address, but the reply did not have the QR bit set to one.
+Since there was a response, no more notifies will be sent to this
+server for this notification event.

-
-NSAS_NSLKUPFAIL failed to lookup any %1 for %2
+
+NOTIFY_OUT_REPLY_UNCAUGHT_EXCEPTION uncaught exception: %1

-A debug message, the NSAS (nameserver address store - part of the resolver)
-has been unable to retrieve the specified resource record for the specified
-nameserver. This is not necessarily a problem - the nameserver may be
-unreachable, in which case the NSAS will try other nameservers in the zone.
+There was an uncaught exception in the handling of a notify reply
+message, either in the message parser, or while trying to extract data
+from the parsed message. The error is printed, and notify_out will
+treat the response as a bad message, but this does point to a
+programming error, since all exceptions should have been caught
+explicitly. Please file a bug report. Since there was a response,
+no more notifies will be sent to this server for this notification
+event.

-
-NSAS_NSLKUPSUCC found address %1 for %2
+
+NOTIFY_OUT_RETRY_EXCEEDED notify to %1#%2: number of retries (%3) exceeded

-A debug message, the NSAS (nameserver address store - part of the resolver)
-has retrieved the given address for the specified nameserver through an
-external query.
+The maximum number of retries for the notify target has been exceeded.
+Either the address of the secondary nameserver is wrong, or it is not
+responding.

-
-NSAS_SETRTT reporting RTT for %1 as %2; new value is now %3
+
+NOTIFY_OUT_SENDING_NOTIFY sending notify to %1#%2
+
+A notify message is sent to the secondary nameserver at the given
+address.
+
+
+
+
+NOTIFY_OUT_SOCKET_ERROR socket error sending notify to %1#%2: %3
+
+There was a network error while trying to send a notify message to
+the given address. The address might be unreachable. The socket
+error is printed and should provide more information.
+
+
+
+
+NOTIFY_OUT_SOCKET_RECV_ERROR socket error reading notify reply from %1#%2: %3
+
+There was a network error while trying to read a notify reply
+message from the given address. The socket error is printed and should
+provide more information.
+
+
+
+
+NOTIFY_OUT_TIMEOUT retry notify to %1#%2
+
+The notify message to the given address (noted as address#port) has
+timed out, and the message will be resent until the max retry limit
+is reached.
+
+
+
+
+NSAS_FIND_NS_ADDRESS asking resolver to obtain A and AAAA records for %1
+
+A debug message issued when the NSAS (nameserver address store - part
+of the resolver) is making a callback into the resolver to retrieve the
+address records for the specified nameserver.
+
+
+
+
+NSAS_FOUND_ADDRESS found address %1 for %2
+
+A debug message issued when the NSAS (nameserver address store - part
+of the resolver) has retrieved the given address for the specified
+nameserver through an external query.
+
+
+
+
+NSAS_INVALID_RESPONSE queried for %1 but got invalid response
+
+The NSAS (nameserver address store - part of the resolver) made a query
+for a RR for the specified nameserver but received an invalid response.
+Either the success function was called without a DNS message or the
+message was invalid in some way. (In the latter case, the error should
+have been picked up elsewhere in the processing logic, hence the raising
+of the error here.)
+
+This message indicates an internal error in the NSAS. Please raise a
+bug report.
+
+
+
+
+NSAS_LOOKUP_CANCEL lookup for zone %1 has been canceled
+
+A debug message issued when an NSAS (nameserver address store - part of
+the resolver) lookup for a zone has been canceled.
+
+
+
+
+NSAS_NS_LOOKUP_FAIL failed to lookup any %1 for %2
+
+A debug message issued when the NSAS (nameserver address store - part of
+the resolver) has been unable to retrieve the specified resource record
+for the specified nameserver. This is not necessarily a problem - the
+nameserver may be unreachable, in which case the NSAS will try other
+nameservers in the zone.
+
+
+
+
+NSAS_SEARCH_ZONE_NS searching NSAS for nameservers for zone %1
+
+A debug message output when a call is made to the NSAS (nameserver
+address store - part of the resolver) to obtain the nameservers for
+the specified zone.
+
+
+
+
+NSAS_UPDATE_RTT update RTT for %1: was %2 ms, is now %3 ms

 A NSAS (nameserver address store - part of the resolver) debug message
-reporting the round-trip time (RTT) for a query made to the specified
-nameserver. The RTT has been updated using the value given and the new RTT is
-displayed. (The RTT is subject to a calculation that damps out sudden
-changes. As a result, the new RTT is not necessarily equal to the RTT
-reported.)
+reporting the update of a round-trip time (RTT) for a query made to the
+specified nameserver. The RTT has been updated using the value given
+and the new RTT is displayed. (The RTT is subject to a calculation that
+damps out sudden changes. As a result, the new RTT used by the NSAS in
+future decisions of which nameserver to use is not necessarily equal to
+the RTT reported.)
+
+
+
+
+NSAS_WRONG_ANSWER queried for %1 RR of type/class %2/%3, received response %4/%5
+
+A NSAS (nameserver address store - part of the resolver) made a query for
+a resource record of a particular type and class, but instead received
+an answer with a different given type and class.
+
+This message indicates an internal error in the NSAS.
Please raise a +bug report. @@ -1460,16 +3209,16 @@ type> tuple in the cache; instead, the deepest delegation found is indicated. - -RESLIB_FOLLOWCNAME following CNAME chain to <%1> + +RESLIB_FOLLOW_CNAME following CNAME chain to <%1> A debug message, a CNAME response was received and another query is being issued for the <name, class, type> tuple. - -RESLIB_LONGCHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded + +RESLIB_LONG_CHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded A debug message recording that a CNAME response has been received to an upstream query for the specified question (Previous debug messages will have indicated @@ -1479,26 +3228,26 @@ is where on CNAME points to another) and so an error is being returned. - -RESLIB_NONSRRSET no NS RRSet in referral response received to query for <%1> + +RESLIB_NO_NS_RRSET no NS RRSet in referral response received to query for <%1> A debug message, this indicates that a response was received for the specified -query and was categorised as a referral. However, the received message did +query and was categorized as a referral. However, the received message did not contain any NS RRsets. This may indicate a programming error in the response classification code. - -RESLIB_NSASLOOK looking up nameserver for zone %1 in the NSAS + +RESLIB_NSAS_LOOKUP looking up nameserver for zone %1 in the NSAS A debug message, the RunningQuery object is querying the NSAS for the nameservers for the specified zone. - -RESLIB_NXDOMRR NXDOMAIN/NXRRSET received in response to query for <%1> + +RESLIB_NXDOM_NXRR NXDOMAIN/NXRRSET received in response to query for <%1> A debug message recording that either a NXDOMAIN or an NXRRSET response has been received to an upstream query for the specified question. Previous debug @@ -1514,8 +3263,8 @@ are no retries left, an error will be reported. - -RESLIB_PROTOCOLRTRY protocol error in answer for %1: %2 (retries left: %3) + +RESLIB_PROTOCOL_RETRY protocol error in answer for %1: %2 (retries left: %3) A debug message indicating that a protocol error was received and that the resolver is repeating the query to the same nameserver. After this @@ -1523,14 +3272,35 @@ repeated query, there will be the indicated number of retries left. - -RESLIB_RCODERR RCODE indicates error in response to query for <%1> + +RESLIB_RCODE_ERR RCODE indicates error in response to query for <%1> A debug message, the response to the specified query indicated an error that is not covered by a specific code path. A SERVFAIL will be returned. + +RESLIB_RECQ_CACHE_FIND found <%1> in the cache (resolve() instance %2) + +This is a debug message and indicates that a RecursiveQuery object found the +the specified <name, class, type> tuple in the cache. The instance number +at the end of the message indicates which of the two resolve() methods has +been called. + + + + +RESLIB_RECQ_CACHE_NO_FIND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2) + +This is a debug message and indicates that the look in the cache made by the +RecursiveQuery::resolve() method did not find an answer, so a new RunningQuery +object has been created to resolve the question. The instance number at +the end of the message indicates which of the two resolve() methods has +been called. + + + RESLIB_REFERRAL referral received in response to query for <%1> @@ -1540,35 +3310,14 @@ have indicated the server to which the question was sent. 
- -RESLIB_REFERZONE referred to zone %1 + +RESLIB_REFER_ZONE referred to zone %1 A debug message indicating that the last referral message was to the specified zone. - -RESLIB_RESCAFND found <%1> in the cache (resolve() instance %2) - -This is a debug message and indicates that a RecursiveQuery object found the -the specified <name, class, type> tuple in the cache. The instance number -at the end of the message indicates which of the two resolve() methods has -been called. - - - - -RESLIB_RESCANOTFND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2) - -This is a debug message and indicates that the look in the cache made by the -RecursiveQuery::resolve() method did not find an answer, so a new RunningQuery -object has been created to resolve the question. The instance number at -the end of the message indicates which of the two resolve() methods has -been called. - - - RESLIB_RESOLVE asked to resolve <%1> (resolve() instance %2) @@ -1579,8 +3328,8 @@ message indicates which of the two resolve() methods has been called. - -RESLIB_RRSETFND found single RRset in the cache when querying for <%1> (resolve() instance %2) + +RESLIB_RRSET_FOUND found single RRset in the cache when querying for <%1> (resolve() instance %2) A debug message, indicating that when RecursiveQuery::resolve queried the cache, a single RRset was found which was put in the answer. The instance @@ -1596,16 +3345,16 @@ A debug message giving the round-trip time of the last query and response. - -RESLIB_RUNCAFND found <%1> in the cache + +RESLIB_RUNQ_CACHE_FIND found <%1> in the cache This is a debug message and indicates that a RunningQuery object found the specified <name, class, type> tuple in the cache. - -RESLIB_RUNCALOOK looking up up <%1> in the cache + +RESLIB_RUNQ_CACHE_LOOKUP looking up up <%1> in the cache This is a debug message and indicates that a RunningQuery object has made a call to its doLookup() method to look up the specified <name, class, type> @@ -1613,16 +3362,16 @@ tuple, the first action of which will be to examine the cache. - -RESLIB_RUNQUFAIL failure callback - nameservers are unreachable + +RESLIB_RUNQ_FAIL failure callback - nameservers are unreachable A debug message indicating that a RunningQuery's failure callback has been called because all nameservers for the zone in question are unreachable. - -RESLIB_RUNQUSUCC success callback - sending query to %1 + +RESLIB_RUNQ_SUCCESS success callback - sending query to %1 A debug message indicating that a RunningQuery's success callback has been called because a nameserver has been found, and that a query is being sent @@ -1630,19 +3379,19 @@ to the specified nameserver. - -RESLIB_TESTSERV setting test server to %1(%2) + +RESLIB_TEST_SERVER setting test server to %1(%2) -This is an internal debugging message and is only generated in unit tests. -It indicates that all upstream queries from the resolver are being routed to -the specified server, regardless of the address of the nameserver to which -the query would normally be routed. As it should never be seen in normal -operation, it is a warning message instead of a debug message. +This is a warning message only generated in unit tests. It indicates +that all upstream queries from the resolver are being routed to the +specified server, regardless of the address of the nameserver to which +the query would normally be routed. If seen during normal operation, +please submit a bug report. 
- -RESLIB_TESTUPSTR sending upstream query for <%1> to test server at %2 + +RESLIB_TEST_UPSTREAM sending upstream query for <%1> to test server at %2 This is a debug message and should only be seen in unit tests. A query for the specified <name, class, type> tuple is being sent to a test nameserver @@ -1653,13 +3402,13 @@ whose address is given in the message. RESLIB_TIMEOUT query <%1> to %2 timed out -A debug message indicating that the specified query has timed out and as -there are no retries left, an error will be reported. +A debug message indicating that the specified upstream query has timed out and +there are no retries left. - -RESLIB_TIMEOUTRTRY query <%1> to %2 timed out, re-trying (retries left: %3) + +RESLIB_TIMEOUT_RETRY query <%1> to %2 timed out, re-trying (retries left: %3) A debug message indicating that the specified query has timed out and that the resolver is repeating the query to the same nameserver. After this @@ -1685,308 +3434,374 @@ tuple is being sent to a nameserver whose address is given in the message. - -RESOLVER_AXFRTCP AXFR request received over TCP + +RESOLVER_AXFR_TCP AXFR request received over TCP -A debug message, the resolver received a NOTIFY message over TCP. The server -cannot process it and will return an error message to the sender with the -RCODE set to NOTIMP. +This is a debug message output when the resolver received a request for +an AXFR (full transfer of a zone) over TCP. Only authoritative servers +are able to handle AXFR requests, so the resolver will return an error +message to the sender with the RCODE set to NOTIMP. - -RESOLVER_AXFRUDP AXFR request received over UDP + +RESOLVER_AXFR_UDP AXFR request received over UDP -A debug message, the resolver received a NOTIFY message over UDP. The server -cannot process it (and in any case, an AXFR request should be sent over TCP) -and will return an error message to the sender with the RCODE set to FORMERR. +This is a debug message output when the resolver received a request for +an AXFR (full transfer of a zone) over UDP. Only authoritative servers +are able to handle AXFR requests (and in any case, an AXFR request should +be sent over TCP), so the resolver will return an error message to the +sender with the RCODE set to NOTIMP. - -RESOLVER_CLTMOSMALL client timeout of %1 is too small + +RESOLVER_CLIENT_TIME_SMALL client timeout of %1 is too small -An error indicating that the configuration value specified for the query -timeout is too small. +During the update of the resolver's configuration parameters, the value +of the client timeout was found to be too small. The configuration +update was abandoned and the parameters were not changed. - -RESOLVER_CONFIGCHAN configuration channel created + +RESOLVER_CONFIG_CHANNEL configuration channel created -A debug message, output when the resolver has successfully established a -connection to the configuration channel. +This is a debug message output when the resolver has successfully +established a connection to the configuration channel. - -RESOLVER_CONFIGERR error in configuration: %1 + +RESOLVER_CONFIG_ERROR error in configuration: %1 -An error was detected in a configuration update received by the resolver. This -may be in the format of the configuration message (in which case this is a -programming error) or it may be in the data supplied (in which case it is -a user error). The reason for the error, given as a parameter in the message, -will give more details. +An error was detected in a configuration update received by the +resolver. 
This may be in the format of the configuration message (in +which case this is a programming error) or it may be in the data supplied +(in which case it is a user error). The reason for the error, included +in the message, will give more details. The configuration update is +not applied and the resolver parameters were not changed. - -RESOLVER_CONFIGLOAD configuration loaded + +RESOLVER_CONFIG_LOADED configuration loaded -A debug message, output when the resolver configuration has been successfully -loaded. +This is a debug message output when the resolver configuration has been +successfully loaded. - -RESOLVER_CONFIGUPD configuration updated: %1 + +RESOLVER_CONFIG_UPDATED configuration updated: %1 -A debug message, the configuration has been updated with the specified -information. +This is a debug message output when the resolver configuration is being +updated with the specified information. RESOLVER_CREATED main resolver object created -A debug message, output when the Resolver() object has been created. +This is a debug message indicating that the main resolver object has +been created. - -RESOLVER_DNSMSGRCVD DNS message received: %1 + +RESOLVER_DNS_MESSAGE_RECEIVED DNS message received: %1 -A debug message, this always precedes some other logging message and is the -formatted contents of the DNS packet that the other message refers to. +This is a debug message from the resolver listing the contents of a +received DNS message. - -RESOLVER_DNSMSGSENT DNS message of %1 bytes sent: %2 + +RESOLVER_DNS_MESSAGE_SENT DNS message of %1 bytes sent: %2 -A debug message, this contains details of the response sent back to the querying -system. +This is a debug message containing details of the response returned by +the resolver to the querying system. RESOLVER_FAILED resolver failed, reason: %1 -This is an error message output when an unhandled exception is caught by the -resolver. All it can do is to shut down. +This is an error message output when an unhandled exception is caught +by the resolver. After this, the resolver will shut itself down. +Please submit a bug report. - -RESOLVER_FWDADDR setting forward address %1(%2) + +RESOLVER_FORWARD_ADDRESS setting forward address %1(%2) -This message may appear multiple times during startup, and it lists the -forward addresses used by the resolver when running in forwarding mode. +If the resolver is running in forward mode, this message will appear +during startup to list the forward address. If multiple addresses are +specified, it will appear once for each address. - -RESOLVER_FWDQUERY processing forward query + +RESOLVER_FORWARD_QUERY processing forward query -The received query has passed all checks and is being forwarded to upstream +This is a debug message indicating that a query received by the resolver +has passed a set of checks (message is well-formed, it is allowed by the +ACL, it is a supported opcode, etc.) and is being forwarded to upstream servers. - -RESOLVER_HDRERR message received, exception when processing header: %1 + +RESOLVER_HEADER_ERROR message received, exception when processing header: %1 -A debug message noting that an exception occurred during the processing of -a received packet. The packet has been dropped. +This is a debug message from the resolver noting that an exception +occurred during the processing of a received packet. The packet has +been dropped. RESOLVER_IXFR IXFR request received -The resolver received a NOTIFY message over TCP. 
The server cannot process it -and will return an error message to the sender with the RCODE set to NOTIMP. +This is a debug message indicating that the resolver received a request +for an IXFR (incremental transfer of a zone). Only authoritative servers +are able to handle IXFR requests, so the resolver will return an error +message to the sender with the RCODE set to NOTIMP. - -RESOLVER_LKTMOSMALL lookup timeout of %1 is too small + +RESOLVER_LOOKUP_TIME_SMALL lookup timeout of %1 is too small -An error indicating that the configuration value specified for the lookup -timeout is too small. +During the update of the resolver's configuration parameters, the value +of the lookup timeout was found to be too small. The configuration +update will not be applied. - -RESOLVER_NFYNOTAUTH NOTIFY arrived but server is not authoritative + +RESOLVER_MESSAGE_ERROR error parsing received message: %1 - returning %2 -The resolver received a NOTIFY message. As the server is not authoritative it -cannot process it, so it returns an error message to the sender with the RCODE -set to NOTAUTH. +This is a debug message noting that parsing of the body of a received +message by the resolver failed due to some error (although the parsing of +the header succeeded). The message parameters give a textual description +of the problem and the RCODE returned. - -RESOLVER_NORMQUERY processing normal query + +RESOLVER_NEGATIVE_RETRIES negative number of retries (%1) specified in the configuration -The received query has passed all checks and is being processed by the resolver. +This error is issued when a resolver configuration update has specified +a negative retry count: only zero or positive values are valid. The +configuration update was abandoned and the parameters were not changed. - -RESOLVER_NOROOTADDR no root addresses available + +RESOLVER_NON_IN_PACKET non-IN class request received, returning REFUSED message -A warning message during startup, indicates that no root addresses have been -set. This may be because the resolver will get them from a priming query. +This debug message is issued when resolver has received a DNS packet that +was not IN (Internet) class. The resolver cannot handle such packets, +so is returning a REFUSED response to the sender. - -RESOLVER_NOTIN non-IN class request received, returning REFUSED message + +RESOLVER_NORMAL_QUERY processing normal query -A debug message, the resolver has received a DNS packet that was not IN class. -The resolver cannot handle such packets, so is returning a REFUSED response to -the sender. +This is a debug message indicating that the query received by the resolver +has passed a set of checks (message is well-formed, it is allowed by the +ACL, it is a supported opcode, etc.) and is being processed by the resolver. - -RESOLVER_NOTONEQUES query contained %1 questions, exactly one question was expected + +RESOLVER_NOTIFY_RECEIVED NOTIFY arrived but server is not authoritative -A debug message, the resolver received a query that contained the number of -entires in the question section detailed in the message. This is a malformed -message, as a DNS query must contain only one question. The resolver will -return a message to the sender with the RCODE set to FORMERR. +The resolver has received a NOTIFY message. As the server is not +authoritative it cannot process it, so it returns an error message to +the sender with the RCODE set to NOTAUTH. 
- -RESOLVER_OPCODEUNS opcode %1 not supported by the resolver + +RESOLVER_NOT_ONE_QUESTION query contained %1 questions, exactly one question was expected -A debug message, the resolver received a message with an unsupported opcode -(it can only process QUERY opcodes). It will return a message to the sender -with the RCODE set to NOTIMP. +This debug message indicates that the resolver received a query that +contained the number of entries in the question section detailed in +the message. This is a malformed message, as a DNS query must contain +only one question. The resolver will return a message to the sender +with the RCODE set to FORMERR. - -RESOLVER_PARSEERR error parsing received message: %1 - returning %2 + +RESOLVER_NO_ROOT_ADDRESS no root addresses available -A debug message noting that the resolver received a message and the parsing -of the body of the message failed due to some non-protocol related reason -(although the parsing of the header succeeded). The message parameters give -a textual description of the problem and the RCODE returned. +A warning message issued during resolver startup, this indicates that +no root addresses have been set. This may be because the resolver will +get them from a priming query. - -RESOLVER_PRINTMSG print message command, aeguments are: %1 + +RESOLVER_PARSE_ERROR error parsing received message: %1 - returning %2 -This message is logged when a "print_message" command is received over the -command channel. +This is a debug message noting that the resolver received a message and +the parsing of the body of the message failed due to some non-protocol +related reason (although the parsing of the header succeeded). +The message parameters give a textual description of the problem and +the RCODE returned. - -RESOLVER_PROTERR protocol error parsing received message: %1 - returning %2 + +RESOLVER_PRINT_COMMAND print message command, arguments are: %1 -A debug message noting that the resolver received a message and the parsing -of the body of the message failed due to some protocol error (although the -parsing of the header succeeded). The message parameters give a textual -description of the problem and the RCODE returned. +This debug message is logged when a "print_message" command is received +by the resolver over the command channel. - -RESOLVER_QUSETUP query setup + +RESOLVER_PROTOCOL_ERROR protocol error parsing received message: %1 - returning %2 -A debug message noting that the resolver is creating a RecursiveQuery object. +This is a debug message noting that the resolver received a message and +the parsing of the body of the message failed due to some protocol error +(although the parsing of the header succeeded). The message parameters +give a textual description of the problem and the RCODE returned. - -RESOLVER_QUSHUT query shutdown + +RESOLVER_QUERY_ACCEPTED query accepted: '%1/%2/%3' from %4 -A debug message noting that the resolver is destroying a RecursiveQuery object. +This debug message is produced by the resolver when an incoming query +is accepted in terms of the query ACL. The log message shows the query +in the form of <query name>/<query type>/<query class>, and the client +that sends the query in the form of <Source IP address>#<source port>. - -RESOLVER_QUTMOSMALL query timeout of %1 is too small + +RESOLVER_QUERY_DROPPED query dropped: '%1/%2/%3' from %4 -An error indicating that the configuration value specified for the query -timeout is too small. 
+This is an informational message that indicates an incoming query has +been dropped by the resolver because of the query ACL. Unlike the +RESOLVER_QUERY_REJECTED case, the server does not return any response. +The log message shows the query in the form of <query name>/<query +type>/<query class>, and the client that sends the query in the form of +<Source IP address>#<source port>. + + + + +RESOLVER_QUERY_REJECTED query rejected: '%1/%2/%3' from %4 + +This is an informational message that indicates an incoming query has +been rejected by the resolver because of the query ACL. This results +in a response with an RCODE of REFUSED. The log message shows the query +in the form of <query name>/<query type>/<query class>, and the client +that sends the query in the form of <Source IP address>#<source port>. + + + + +RESOLVER_QUERY_SETUP query setup + +This is a debug message noting that the resolver is creating a +RecursiveQuery object. + + + + +RESOLVER_QUERY_SHUTDOWN query shutdown + +This is a debug message noting that the resolver is destroying a +RecursiveQuery object. + + + + +RESOLVER_QUERY_TIME_SMALL query timeout of %1 is too small + +During the update of the resolver's configuration parameters, the value +of the query timeout was found to be too small. The configuration +parameters were not changed. + + + + +RESOLVER_RECEIVED_MESSAGE resolver has received a DNS message + +This is a debug message indicating that the resolver has received a +DNS message. Depending on the debug settings, subsequent log output +will indicate the nature of the message. RESOLVER_RECURSIVE running in recursive mode -This is an informational message that appears at startup noting that the -resolver is running in recursive mode. +This is an informational message that appears at startup noting that +the resolver is running in recursive mode. - -RESOLVER_RECVMSG resolver has received a DNS message + +RESOLVER_SERVICE_CREATED service object created -A debug message indicating that the resolver has received a message. Depending -on the debug settings, subsequent log output will indicate the nature of the -message. +This debug message is output when resolver creates the main service object +(which handles the received queries). - -RESOLVER_RETRYNEG negative number of retries (%1) specified in the configuration + +RESOLVER_SET_PARAMS query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4 -An error message indicating that the resolver configuration has specified a -negative retry count. Only zero or positive values are valid. - - - - -RESOLVER_ROOTADDR setting root address %1(%2) - -This message may appear multiple times during startup; it lists the root -addresses used by the resolver. - - - - -RESOLVER_SERVICE service object created - -A debug message, output when the main service object (which handles the -received queries) is created. - - - - -RESOLVER_SETPARAM query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4 - -A debug message, lists the parameters associated with the message. These are: +This debug message lists the parameters being set for the resolver. These are: query timeout: the timeout (in ms) used for queries originated by the resolver -to upstream servers. Client timeout: the interval to resolver a query by +to upstream servers. Client timeout: the interval to resolve a query by a client: after this time, the resolver sends back a SERVFAIL to the client -whilst continuing to resolver the query. 
Lookup timeout: the time at which the +whilst continuing to resolve the query. Lookup timeout: the time at which the resolver gives up trying to resolve a query. Retry count: the number of times the resolver will retry a query to an upstream server if it gets a timeout. The client and lookup timeouts require a bit more explanation. The -resolution of the clent query might require a large number of queries to +resolution of the client query might require a large number of queries to upstream nameservers. Even if none of these queries timeout, the total time taken to perform all the queries may exceed the client timeout. When this happens, a SERVFAIL is returned to the client, but the resolver continues -with the resolution process. Data received is added to the cache. However, -there comes a time - the lookup timeout - when even the resolve gives up. +with the resolution process; data received is added to the cache. However, +there comes a time - the lookup timeout - when even the resolver gives up. At this point it will wait for pending upstream queries to complete or timeout and drop the query. + +RESOLVER_SET_QUERY_ACL query ACL is configured + +This debug message is generated when a new query ACL is configured for +the resolver. + + + + +RESOLVER_SET_ROOT_ADDRESS setting root address %1(%2) + +This message gives the address of one of the root servers used by the +resolver. It is output during startup and may appear multiple times, +once for each root server address. + + + RESOLVER_SHUTDOWN resolver shutdown complete -This information message is output when the resolver has shut down. +This informational message is output when the resolver has shut down. @@ -2005,11 +3820,982 @@ An informational message, this is output when the resolver starts up. - -RESOLVER_UNEXRESP received unexpected response, ignoring + +RESOLVER_UNEXPECTED_RESPONSE received unexpected response, ignoring -A debug message noting that the server has received a response instead of a -query and is ignoring it. +This is a debug message noting that the resolver received a DNS response +packet on the port on which is it listening for queries. The packet +has been ignored. + + + + +RESOLVER_UNSUPPORTED_OPCODE opcode %1 not supported by the resolver + +This is debug message output when the resolver received a message with an +unsupported opcode (it can only process QUERY opcodes). It will return +a message to the sender with the RCODE set to NOTIMP. + + + + +SRVCOMM_ADDRESSES_NOT_LIST the address and port specification is not a list in %1 + +This points to an error in configuration. What was supposed to be a list of +IP address - port pairs isn't a list at all but something else. + + + + +SRVCOMM_ADDRESS_FAIL failed to listen on addresses (%1) + +The server failed to bind to one of the address/port pair it should according +to configuration, for reason listed in the message (usually because that pair +is already used by other service or missing privileges). The server will try +to recover and bind the address/port pairs it was listening to before (if any). + + + + +SRVCOMM_ADDRESS_MISSING address specification is missing "address" or "port" element in %1 + +This points to an error in configuration. An address specification in the +configuration is missing either an address or port and so cannot be used. The +specification causing the error is given in the message. + + + + +SRVCOMM_ADDRESS_TYPE address specification type is invalid in %1 + +This points to an error in configuration. 
An address specification in the
+configuration is malformed. The specification causing the error is given in the
+message. A valid specification contains an address part (which must be a string
+and must represent a valid IPv4 or IPv6 address) and port (which must be an
+integer in the range valid for TCP/UDP ports on your system).
+
+
+
+
+SRVCOMM_ADDRESS_UNRECOVERABLE failed to recover original addresses also (%2)
+
+The recovery of old addresses after SRVCOMM_ADDRESS_FAIL also failed for
+the reason listed.
+
+The condition indicates problems with the server and/or the system on
+which it is running. The server will continue running to allow
+reconfiguration, but will not be listening on any address or port until
+an administrator reconfigures it.
+
+
+
+
+SRVCOMM_ADDRESS_VALUE address to set: %1#%2
+
+Debug message. This lists one address and port value of the set of
+addresses we are going to listen on (e.g. there will be one log message
+per pair). This appears only after SRVCOMM_SET_LISTEN, but might
+be hidden, as it has a higher debug level.
+
+
+
+
+SRVCOMM_KEYS_DEINIT deinitializing TSIG keyring
+
+Debug message indicating that the server is deinitializing the TSIG keyring.
+
+
+
+
+SRVCOMM_KEYS_INIT initializing TSIG keyring
+
+Debug message indicating that the server is initializing the global TSIG
+keyring. This should be seen only at server start.
+
+
+
+
+SRVCOMM_KEYS_UPDATE updating TSIG keyring
+
+Debug message indicating that a new keyring is being loaded from configuration
+(either on startup or as a result of a configuration update).
+
+
+
+
+SRVCOMM_PORT_RANGE port out of valid range (%1 in %2)
+
+This points to an error in configuration. The port in an address
+specification is outside the valid range of 0 to 65535.
+
+
+
+
+SRVCOMM_SET_LISTEN setting addresses to listen to
+
+Debug message, noting that the server is about to start listening on a
+different set of IP addresses and ports than before.
+
+
+
+
+STATHTTPD_BAD_OPTION_VALUE bad command line argument: %1
+
+The stats-httpd module was called with a bad command-line argument
+and will not start.
+
+
+
+
+STATHTTPD_CC_SESSION_ERROR error connecting to message bus: %1
+
+The stats-httpd module was unable to connect to the BIND 10 command
+and control bus. A likely problem is that the message bus daemon
+(b10-msgq) is not running. The stats-httpd module will now shut down.
+
+
+
+
+STATHTTPD_CLOSING closing %1#%2
+
+The stats-httpd daemon will stop listening for requests on the given
+address and port number.
+
+
+
+
+STATHTTPD_CLOSING_CC_SESSION stopping cc session
+
+Debug message indicating that the stats-httpd module is disconnecting
+from the command and control bus.
+
+
+
+
+STATHTTPD_HANDLE_CONFIG reading configuration: %1
+
+The stats-httpd daemon has received new configuration data and will now
+process it. The (changed) data is printed.
+
+
+
+
+STATHTTPD_RECEIVED_SHUTDOWN_COMMAND shutdown command received
+
+A shutdown command was sent to the stats-httpd module, and it will
+now shut down.
+
+
+
+
+STATHTTPD_RECEIVED_STATUS_COMMAND received command to return status
+
+A status command was sent to the stats-httpd module, and it will
+respond with 'Stats Httpd is up.' and its PID.
+
+
+
+
+STATHTTPD_RECEIVED_UNKNOWN_COMMAND received unknown command: %1
+
+An unknown command has been sent to the stats-httpd module. The
+stats-httpd module will respond with an error, and the command will
+be ignored.
+
+
+
+
+STATHTTPD_SERVER_ERROR HTTP server error: %1
+
+An internal error occurred while handling an HTTP request. An HTTP 500
+response will be sent back, and the specific error is printed. This
+is an error condition that likely points to a module that is not
+responding correctly to statistic requests.
+
+
+
+
+STATHTTPD_SERVER_INIT_ERROR HTTP server initialization error: %1
+
+There was a problem initializing the HTTP server in the stats-httpd
+module upon receiving its configuration data. The most likely cause
+is a port binding problem or a bad configuration value. The specific
+error is printed in the message. The new configuration is ignored,
+and an error is sent back.
+
+
+
+
+STATHTTPD_SHUTDOWN shutting down
+
+The stats-httpd daemon is shutting down.
+
+
+
+
+STATHTTPD_STARTED listening on %1#%2
+
+The stats-httpd daemon will now start listening for requests on the
+given address and port number.
+
+
+
+
+STATHTTPD_STARTING_CC_SESSION starting cc session
+
+Debug message indicating that the stats-httpd module is connecting to
+the command and control bus.
+
+
+
+
+STATHTTPD_START_SERVER_INIT_ERROR HTTP server initialization error: %1
+
+There was a problem initializing the HTTP server in the stats-httpd
+module upon startup. The most likely cause is that it was not able
+to bind to the listening port. The specific error is printed, and the
+module will shut down.
+
+
+
+
+STATHTTPD_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down
+
+There was a keyboard interrupt signal to stop the stats-httpd
+daemon. The daemon will now shut down.
+
+
+
+
+STATHTTPD_UNKNOWN_CONFIG_ITEM unknown configuration item: %1
+
+The stats-httpd daemon received a configuration update from the
+configuration manager. However, one of the items in the
+configuration is unknown. The new configuration is ignored, and an
+error is sent back. A possible cause is that there was an upgrade
+problem, and the stats-httpd version is out of sync with the rest of
+the system.
+
+
+
+
+STATS_BAD_OPTION_VALUE bad command line argument: %1
+
+The stats module was called with a bad command-line argument and will
+not start.
+
+
+
+
+STATS_CC_SESSION_ERROR error connecting to message bus: %1
+
+The stats module was unable to connect to the BIND 10 command and
+control bus. A likely problem is that the message bus daemon
+(b10-msgq) is not running. The stats module will now shut down.
+
+
+
+
+STATS_RECEIVED_NEW_CONFIG received new configuration: %1
+
+This debug message is printed when the stats module has received a
+configuration update from the configuration manager.
+
+
+
+
+STATS_RECEIVED_REMOVE_COMMAND received command to remove %1
+
+A remove command for the given name was sent to the stats module, and
+the given statistics value will now be removed. It will not appear in
+statistics reports until it appears in a statistics update from a
+module again.
+
+
+
+
+STATS_RECEIVED_RESET_COMMAND received command to reset all statistics
+
+The stats module received a command to clear all collected statistics.
+The data is cleared until it receives an update from the modules again.
+
+
+
+
+STATS_RECEIVED_SHOW_ALL_COMMAND received command to show all statistics
+
+The stats module received a command to show all statistics that it has
+collected.
+
+
+
+
+STATS_RECEIVED_SHOW_NAME_COMMAND received command to show statistics for %1
+
+The stats module received a command to show the statistics that it has
+collected for the given item.
+
+
+
+
+STATS_RECEIVED_SHUTDOWN_COMMAND shutdown command received
+
+A shutdown command was sent to the stats module and it will now shut down.
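The show, remove, reset and shutdown commands described above are normally
issued through bindctl. As a hedged illustration (assuming the module is
registered under the name "Stats" on the command channel; the exact form may
differ), a session could look like:

  > Stats show
  > Stats shutdown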
+ + + + +STATS_RECEIVED_STATUS_COMMAND received command to return status + +A status command was sent to the stats module. It will return a +response indicating that it is running normally. + + + + +STATS_RECEIVED_UNKNOWN_COMMAND received unknown command: %1 + +An unknown command has been sent to the stats module. The stats module +will respond with an error and the command will be ignored. + + + + +STATS_SEND_REQUEST_BOSS requesting boss to send statistics + +This debug message is printed when a request is sent to the boss module +to send its data to the stats module. + + + + +STATS_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the stats module. The +daemon will now shut down. + + + + +STATS_UNKNOWN_COMMAND_IN_SPEC unknown command in specification file: %1 + +The specification file for the stats module contains a command that +is unknown in the implementation. The most likely cause is an +installation problem, where the specification file stats.spec is +from a different version of BIND 10 than the stats module itself. +Please check your installation. + + + + +XFRIN_AXFR_DATABASE_FAILURE AXFR transfer of zone %1 failed: %2 + +The AXFR transfer for the given zone has failed due to a database problem. +The error is shown in the log message. + + + + +XFRIN_AXFR_INTERNAL_FAILURE AXFR transfer of zone %1 failed: %2 + +The AXFR transfer for the given zone has failed due to an internal +problem in the bind10 python wrapper library. +The error is shown in the log message. + + + + +XFRIN_AXFR_TRANSFER_FAILURE AXFR transfer of zone %1 failed: %2 + +The AXFR transfer for the given zone has failed due to a protocol error. +The error is shown in the log message. + + + + +XFRIN_AXFR_TRANSFER_STARTED AXFR transfer of zone %1 started + +A connection to the master server has been made, the serial value in +the SOA record has been checked, and a zone transfer has been started. + + + + +XFRIN_AXFR_TRANSFER_SUCCESS AXFR transfer of zone %1 succeeded + +The AXFR transfer of the given zone was successfully completed. + + + + +XFRIN_BAD_MASTER_ADDR_FORMAT bad format for master address: %1 + +The given master address is not a valid IP address. + + + + +XFRIN_BAD_MASTER_PORT_FORMAT bad format for master port: %1 + +The master port as read from the configuration is not a valid port number. + + + + +XFRIN_BAD_TSIG_KEY_STRING bad TSIG key string: %1 + +The TSIG key string as read from the configuration does not represent +a valid TSIG key. + + + + +XFRIN_BAD_ZONE_CLASS Invalid zone class: %1 + +The zone class as read from the configuration is not a valid DNS class. + + + + +XFRIN_CC_SESSION_ERROR error reading from cc channel: %1 + +There was a problem reading from the command and control channel. The +most likely cause is that xfrin the msgq daemon is not running. + + + + +XFRIN_COMMAND_ERROR error while executing command '%1': %2 + +There was an error while the given command was being processed. The +error is given in the log message. + + + + +XFRIN_CONNECT_MASTER error connecting to master at %1: %2 + +There was an error opening a connection to the master. The error is +shown in the log message. + + + + +XFRIN_IMPORT_DNS error importing python DNS module: %1 + +There was an error importing the python DNS module pydnspp. The most +likely cause is a PYTHONPATH problem. + + + + +XFRIN_MSGQ_SEND_ERROR error while contacting %1 and %2 + +There was a problem sending a message to the xfrout module or the +zone manager. 
This most likely means that the msgq daemon has quit or +was killed. + + + + +XFRIN_MSGQ_SEND_ERROR_ZONE_MANAGER error while contacting %1 + +There was a problem sending a message to the zone manager. This most +likely means that the msgq daemon has quit or was killed. + + + + +XFRIN_RETRANSFER_UNKNOWN_ZONE got notification to retransfer unknown zone %1 + +There was an internal command to retransfer the given zone, but the +zone is not known to the system. This may indicate that the configuration +for xfrin is incomplete, or there was a typographical error in the +zone name in the configuration. + + + + +XFRIN_STARTING starting resolver with command line '%1' + +An informational message, this is output when the resolver starts up. + + + + +XFRIN_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the xfrin daemon. The +daemon will now shut down. + + + + +XFRIN_UNKNOWN_ERROR unknown error: %1 + +An uncaught exception was raised while running the xfrin daemon. The +exception message is printed in the log message. + + + + +XFROUT_AXFR_TRANSFER_DONE transfer of %1/%2 complete + +The transfer of the given zone has been completed successfully, or was +aborted due to a shutdown event. + + + + +XFROUT_AXFR_TRANSFER_ERROR error transferring zone %1/%2: %3 + +An uncaught exception was encountered while sending the response to +an AXFR query. The error message of the exception is included in the +log message, but this error most likely points to incomplete exception +handling in the code. + + + + +XFROUT_AXFR_TRANSFER_FAILED transfer of %1/%2 failed, rcode: %3 + +A transfer out for the given zone failed. An error response is sent +to the client. The given rcode is the rcode that is set in the error +response. This is either NOTAUTH (we are not authoritative for the +zone), SERVFAIL (our internal database is missing the SOA record for +the zone), or REFUSED (the limit of simultaneous outgoing AXFR +transfers, as specified by the configuration value +Xfrout/max_transfers_out, has been reached). + + + + +XFROUT_AXFR_TRANSFER_STARTED transfer of zone %1/%2 has started + +A transfer out of the given zone has started. + + + + +XFROUT_BAD_TSIG_KEY_STRING bad TSIG key string: %1 + +The TSIG key string as read from the configuration does not represent +a valid TSIG key. + + + + +XFROUT_CC_SESSION_ERROR error reading from cc channel: %1 + +There was a problem reading from the command and control channel. The +most likely cause is that the msgq daemon is not running. + + + + +XFROUT_CC_SESSION_TIMEOUT_ERROR timeout waiting for cc response + +There was a problem reading a response from another module over the +command and control channel. The most likely cause is that the +configuration manager b10-cfgmgr is not running. + + + + +XFROUT_FETCH_REQUEST_ERROR socket error while fetching a request from the auth daemon + +There was a socket error while contacting the b10-auth daemon to +fetch a transfer request. The auth daemon may have shutdown. + + + + +XFROUT_HANDLE_QUERY_ERROR error while handling query: %1 + +There was a general error handling an xfrout query. The error is shown +in the message. In principle this error should not appear, and points +to an oversight catching exceptions in the right place. However, to +ensure the daemon keeps running, this error is caught and reported. + + + + +XFROUT_IMPORT error importing python module: %1 + +There was an error importing a python module. One of the modules needed +by xfrout could not be found. 
This suggests that either some libraries +are missing on the system, or the PYTHONPATH variable is not correct. +The specific place where this library needs to be depends on your +system and your specific installation. + + + + +XFROUT_NEW_CONFIG Update xfrout configuration + +New configuration settings have been sent from the configuration +manager. The xfrout daemon will now apply them. + + + + +XFROUT_NEW_CONFIG_DONE Update xfrout configuration done + +The xfrout daemon is now done reading the new configuration settings +received from the configuration manager. + + + + +XFROUT_NOTIFY_COMMAND received command to send notifies for %1/%2 + +The xfrout daemon received a command on the command channel that +NOTIFY packets should be sent for the given zone. + + + + +XFROUT_PARSE_QUERY_ERROR error parsing query: %1 + +There was a parse error while reading an incoming query. The parse +error is shown in the log message. A remote client sent a packet we +do not understand or support. The xfrout request will be ignored. +In general, this should only occur for unexpected problems like +memory allocation failures, as the query should already have been +parsed by the b10-auth daemon, before it was passed here. + + + + +XFROUT_PROCESS_REQUEST_ERROR error processing transfer request: %2 + +There was an error processing a transfer request. The error is included +in the log message, but at this point no specific information other +than that could be given. This points to incomplete exception handling +in the code. + + + + +XFROUT_QUERY_DROPPED request to transfer %1/%2 to [%3]:%4 dropped + +The xfrout process silently dropped a request to transfer zone to given host. +This is required by the ACLs. The %1 and %2 represent the zone name and class, +the %3 and %4 the IP address and port of the peer requesting the transfer. + + + + +XFROUT_QUERY_REJECTED request to transfer %1/%2 to [%3]:%4 rejected + +The xfrout process rejected (by REFUSED rcode) a request to transfer zone to +given host. This is because of ACLs. The %1 and %2 represent the zone name and +class, the %3 and %4 the IP address and port of the peer requesting the +transfer. + + + + +XFROUT_RECEIVED_SHUTDOWN_COMMAND shutdown command received + +The xfrout daemon received a shutdown command from the command channel +and will now shut down. + + + + +XFROUT_RECEIVE_FILE_DESCRIPTOR_ERROR error receiving the file descriptor for an XFR connection + +There was an error receiving the file descriptor for the transfer +request. Normally, the request is received by b10-auth, and passed on +to the xfrout daemon, so it can answer directly. However, there was a +problem receiving this file descriptor. The request will be ignored. + + + + +XFROUT_REMOVE_OLD_UNIX_SOCKET_FILE_ERROR error removing unix socket file %1: %2 + +The unix socket file xfrout needs for contact with the auth daemon +already exists, and needs to be removed first, but there is a problem +removing it. It is likely that we do not have permission to remove +this file. The specific error is show in the log message. The xfrout +daemon will shut down. + + + + +XFROUT_REMOVE_UNIX_SOCKET_FILE_ERROR error clearing unix socket file %1: %2 + +When shutting down, the xfrout daemon tried to clear the unix socket +file used for communication with the auth daemon. It failed to remove +the file. The reason for the failure is given in the error message. 
+ + + + +XFROUT_SOCKET_SELECT_ERROR error while calling select() on request socket: %1 + +There was an error while calling select() on the socket that informs +the xfrout daemon that a new xfrout request has arrived. This should +be a result of rare local error such as memory allocation failure and +shouldn't happen under normal conditions. The error is included in the +log message. + + + + +XFROUT_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the xfrout daemon. The +daemon will now shut down. + + + + +XFROUT_STOPPING the xfrout daemon is shutting down + +The current transfer is aborted, as the xfrout daemon is shutting down. + + + + +XFROUT_UNIX_SOCKET_FILE_IN_USE another xfrout process seems to be using the unix socket file %1 + +While starting up, the xfrout daemon tried to clear the unix domain +socket needed for contacting the b10-auth daemon to pass requests +on, but the file is in use. The most likely cause is that another +xfrout daemon process is still running. This xfrout daemon (the one +printing this message) will not start. + + + + +ZONEMGR_CCSESSION_ERROR command channel session error: %1 + +An error was encountered on the command channel. The message indicates +the nature of the error. + + + + +ZONEMGR_JITTER_TOO_BIG refresh_jitter is too big, setting to 0.5 + +The value specified in the configuration for the refresh jitter is too large +so its value has been set to the maximum of 0.5. + + + + +ZONEMGR_KEYBOARD_INTERRUPT exiting zonemgr process as result of keyboard interrupt + +An informational message output when the zone manager was being run at a +terminal and it was terminated via a keyboard interrupt signal. + + + + +ZONEMGR_LOAD_ZONE loading zone %1 (class %2) + +This is a debug message indicating that the zone of the specified class +is being loaded. + + + + +ZONEMGR_NO_MASTER_ADDRESS internal BIND 10 command did not contain address of master + +A command received by the zone manager from the Auth module did not +contain the address of the master server from which a NOTIFY message +was received. This may be due to an internal programming error; please +submit a bug report. + + + + +ZONEMGR_NO_SOA zone %1 (class %2) does not have an SOA record + +When loading the named zone of the specified class the zone manager +discovered that the data did not contain an SOA record. The load has +been abandoned. + + + + +ZONEMGR_NO_TIMER_THREAD trying to stop zone timer thread but it is not running + +An attempt was made to stop the timer thread (used to track when zones +should be refreshed) but it was not running. This may indicate an +internal program error. Please submit a bug report. + + + + +ZONEMGR_NO_ZONE_CLASS internal BIND 10 command did not contain class of zone + +A command received by the zone manager from another BIND 10 module did +not contain the class of the zone on which the zone manager should act. +This may be due to an internal programming error; please submit a +bug report. + + + + +ZONEMGR_NO_ZONE_NAME internal BIND 10 command did not contain name of zone + +A command received by the zone manager from another BIND 10 module did +not contain the name of the zone on which the zone manager should act. +This may be due to an internal programming error; please submit a +bug report. + + + + +ZONEMGR_RECEIVE_NOTIFY received NOTIFY command for zone %1 (class %2) + +This is a debug message indicating that the zone manager has received a +NOTIFY command over the command channel. 
The command is sent by the Auth +process when it is acting as a slave server for the zone and causes the +zone manager to record the master server for the zone and start a timer; +when the timer expires, the master will be polled to see if it contains +new data. + + + + +ZONEMGR_RECEIVE_SHUTDOWN received SHUTDOWN command + +This is a debug message indicating that the zone manager has received +a SHUTDOWN command over the command channel from the Boss process. +It will act on this command and shut down. + + + + +ZONEMGR_RECEIVE_UNKNOWN received unknown command '%1' + +This is a warning message indicating that the zone manager has received +the stated command over the command channel. The command is not known +to the zone manager and although the command is ignored, its receipt +may indicate an internal error. Please submit a bug report. + + + + +ZONEMGR_RECEIVE_XFRIN_FAILED received XFRIN FAILED command for zone %1 (class %2) + +This is a debug message indicating that the zone manager has received +an XFRIN FAILED command over the command channel. The command is sent +by the Xfrin process when a transfer of zone data into the system has +failed, and causes the zone manager to schedule another transfer attempt. + + + + +ZONEMGR_RECEIVE_XFRIN_SUCCESS received XFRIN SUCCESS command for zone %1 (class %2) + +This is a debug message indicating that the zone manager has received +an XFRIN SUCCESS command over the command channel. The command is sent +by the Xfrin process when the transfer of zone data into the system has +succeeded, and causes the data to be loaded and served by BIND 10. + + + + +ZONEMGR_REFRESH_ZONE refreshing zone %1 (class %2) + +The zone manager is refreshing the named zone of the specified class +with updated information. + + + + +ZONEMGR_SELECT_ERROR error with select(): %1 + +An attempt to wait for input from a socket failed. The failing operation +is a call to the operating system's select() function, which failed for +the given reason. + + + + +ZONEMGR_SEND_FAIL failed to send command to %1, session has been closed + +The zone manager attempted to send a command to the named BIND 10 module, +but the send failed. The session between the modules has been closed. + + + + +ZONEMGR_SESSION_ERROR unable to establish session to command channel daemon + +The zonemgr process was not able to be started because it could not +connect to the command channel daemon. The most usual cause of this +problem is that the daemon is not running. + + + + +ZONEMGR_SESSION_TIMEOUT timeout on session to command channel daemon + +The zonemgr process was not able to be started because it timed out when +connecting to the command channel daemon. The most usual cause of this +problem is that the daemon is not running. + + + + +ZONEMGR_SHUTDOWN zone manager has shut down + +A debug message, output when the zone manager has shut down completely. + + + + +ZONEMGR_STARTING zone manager starting + +A debug message output when the zone manager starts up. + + + + +ZONEMGR_TIMER_THREAD_RUNNING trying to start timer thread but one is already running + +This message is issued when an attempt is made to start the timer +thread (which keeps track of when zones need a refresh) but one is +already running. It indicates either an error in the program logic or +a problem with stopping a previous instance of the timer. Please submit +a bug report. 
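Several of the zonemgr messages above (ZONEMGR_JITTER_TOO_BIG,
ZONEMGR_RECEIVE_NOTIFY, ZONEMGR_RECEIVE_XFRIN_FAILED) revolve around the timer
that decides when a zone's master is next polled. As a rough sketch only,
assuming that refresh_jitter shortens the interval by a random fraction and is
capped at 0.5; the function name and numbers below are illustrative, not the
actual b10-zonemgr code.

    import random
    import time

    MAX_JITTER = 0.5   # the cap ZONEMGR_JITTER_TOO_BIG falls back to

    def next_refresh(refresh_interval, refresh_jitter, now=None):
        # Clamp the configured jitter, then shorten the interval by a random
        # fraction of up to that jitter so zones do not all refresh at once.
        if now is None:
            now = time.time()
        jitter = min(refresh_jitter, MAX_JITTER)
        return now + refresh_interval * (1.0 - jitter * random.random())

    # A zone with a 3600-second SOA refresh and a jitter of 0.25 would be
    # polled somewhere between 2700 and 3600 seconds from now.
    print(next_refresh(3600, 0.25))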
+ + + + +ZONEMGR_UNKNOWN_ZONE_FAIL zone %1 (class %2) is not known to the zone manager + +An XFRIN operation has failed but the zone that was the subject of the +operation is not being managed by the zone manager. This may indicate +an error in the program (as the operation should not have been initiated +if this were the case). Please submit a bug report. + + + + +ZONEMGR_UNKNOWN_ZONE_NOTIFIED notified zone %1 (class %2) is not known to the zone manager + +A NOTIFY was received but the zone that was the subject of the operation +is not being managed by the zone manager. This may indicate an error +in the program (as the operation should not have been initiated if this +were the case). Please submit a bug report. + + + + +ZONEMGR_UNKNOWN_ZONE_SUCCESS zone %1 (class %2) is not known to the zone manager + +An XFRIN operation has succeeded but the zone received is not being +managed by the zone manager. This may indicate an error in the program +(as the operation should not have been initiated if this were the case). +Please submit a bug report. diff --git a/ext/asio/asio/impl/error_code.ipp b/ext/asio/asio/impl/error_code.ipp index ed37a17dd3..218c09ba41 100644 --- a/ext/asio/asio/impl/error_code.ipp +++ b/ext/asio/asio/impl/error_code.ipp @@ -11,6 +11,9 @@ #ifndef ASIO_IMPL_ERROR_CODE_IPP #define ASIO_IMPL_ERROR_CODE_IPP +// strerror() needs +#include + #if defined(_MSC_VER) && (_MSC_VER >= 1200) # pragma once #endif // defined(_MSC_VER) && (_MSC_VER >= 1200) diff --git a/src/bin/auth/Makefile.am b/src/bin/auth/Makefile.am index 64136c1f2b..4d8ec833bd 100644 --- a/src/bin/auth/Makefile.am +++ b/src/bin/auth/Makefile.am @@ -50,12 +50,19 @@ b10_auth_SOURCES += command.cc command.h b10_auth_SOURCES += common.h common.cc b10_auth_SOURCES += statistics.cc statistics.h b10_auth_SOURCES += main.cc +# This is a temporary workaround for #1206, where the InMemoryClient has been +# moved to an ldopened library. We could add that library to LDADD, but that +# is nonportable. When #1207 is done this becomes moot anyway, and the +# specific workaround is not needed anymore, so we can then remove this +# line again. 
+b10_auth_SOURCES += ${top_srcdir}/src/lib/datasrc/memory_datasrc.cc nodist_b10_auth_SOURCES = auth_messages.h auth_messages.cc EXTRA_DIST += auth_messages.mes b10_auth_LDADD = $(top_builddir)/src/lib/datasrc/libdatasrc.la b10_auth_LDADD += $(top_builddir)/src/lib/dns/libdns++.la +b10_auth_LDADD += $(top_builddir)/src/lib/util/libutil.la b10_auth_LDADD += $(top_builddir)/src/lib/config/libcfgclient.la b10_auth_LDADD += $(top_builddir)/src/lib/cc/libcc.la b10_auth_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la diff --git a/src/bin/auth/auth.spec.pre.in b/src/bin/auth/auth.spec.pre.in index d88ffb5e3e..2ce044e440 100644 --- a/src/bin/auth/auth.spec.pre.in +++ b/src/bin/auth/auth.spec.pre.in @@ -122,6 +122,24 @@ } ] } + ], + "statistics": [ + { + "item_name": "queries.tcp", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "Queries TCP ", + "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially" + }, + { + "item_name": "queries.udp", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "Queries UDP", + "item_description": "A number of total query counts which all auth servers receive over UDP since they started initially" + } ] } } diff --git a/src/bin/auth/auth_config.cc b/src/bin/auth/auth_config.cc index 2943cb5bc5..d684c68611 100644 --- a/src/bin/auth/auth_config.cc +++ b/src/bin/auth/auth_config.cc @@ -107,7 +107,7 @@ DatasourcesConfig::commit() { // server implementation details, and isn't scalable wrt the number of // data source types, and should eventually be improved. // Currently memory data source for class IN is the only possibility. - server_.setMemoryDataSrc(RRClass::IN(), AuthSrv::MemoryDataSrcPtr()); + server_.setInMemoryClient(RRClass::IN(), AuthSrv::InMemoryClientPtr()); BOOST_FOREACH(shared_ptr datasrc_config, datasources_) { datasrc_config->commit(); @@ -125,12 +125,12 @@ public: {} virtual void build(ConstElementPtr config_value); virtual void commit() { - server_.setMemoryDataSrc(rrclass_, memory_datasrc_); + server_.setInMemoryClient(rrclass_, memory_client_); } private: AuthSrv& server_; RRClass rrclass_; - AuthSrv::MemoryDataSrcPtr memory_datasrc_; + AuthSrv::InMemoryClientPtr memory_client_; }; void @@ -143,8 +143,8 @@ MemoryDatasourceConfig::build(ConstElementPtr config_value) { // We'd eventually optimize building zones (in case of reloading) by // selectively loading fresh zones. Right now we simply check the // RR class is supported by the server implementation. 
- server_.getMemoryDataSrc(rrclass_); - memory_datasrc_ = AuthSrv::MemoryDataSrcPtr(new MemoryDataSrc()); + server_.getInMemoryClient(rrclass_); + memory_client_ = AuthSrv::InMemoryClientPtr(new InMemoryClient()); ConstElementPtr zones_config = config_value->get("zones"); if (!zones_config) { @@ -163,9 +163,10 @@ MemoryDatasourceConfig::build(ConstElementPtr config_value) { isc_throw(AuthConfigError, "Missing zone file for zone: " << origin->str()); } - shared_ptr new_zone(new MemoryZone(rrclass_, + shared_ptr zone_finder(new + InMemoryZoneFinder(rrclass_, Name(origin->stringValue()))); - const result::Result result = memory_datasrc_->addZone(new_zone); + const result::Result result = memory_client_->addZone(zone_finder); if (result == result::EXIST) { isc_throw(AuthConfigError, "zone "<< origin->str() << " already exists"); @@ -177,7 +178,7 @@ MemoryDatasourceConfig::build(ConstElementPtr config_value) { * need the load method to be split into some kind of build and * commit/abort parts. */ - new_zone->load(file->stringValue()); + zone_finder->load(file->stringValue()); } } diff --git a/src/bin/auth/auth_messages.mes b/src/bin/auth/auth_messages.mes index 2bb402cb20..1ffa6871ea 100644 --- a/src/bin/auth/auth_messages.mes +++ b/src/bin/auth/auth_messages.mes @@ -63,7 +63,7 @@ datebase data source, listing the file that is being accessed. % AUTH_DNS_SERVICES_CREATED DNS services created This is a debug message indicating that the component that will handling -incoming queries for the authoritiative server (DNSServices) has been +incoming queries for the authoritative server (DNSServices) has been successfully created. It is issued during server startup is an indication that the initialization is proceeding normally. @@ -74,7 +74,7 @@ reason for the failure is given in the message.) The server will drop the packet. % AUTH_LOAD_TSIG loading TSIG keys -This is a debug message indicating that the authoritiative server +This is a debug message indicating that the authoritative server has requested the keyring holding TSIG keys from the configuration database. It is issued during server startup is an indication that the initialization is proceeding normally. @@ -141,8 +141,8 @@ encountered an internal error whilst processing a received packet: the cause of the error is included in the message. The server will return a SERVFAIL error code to the sender of the packet. -However, this message indicates a potential error in the server. -Please open a bug ticket for this issue. +This message indicates a potential error in the server. Please open a +bug ticket for this issue. % AUTH_RECEIVED_COMMAND command '%1' received This is a debug message issued when the authoritative server has received @@ -209,7 +209,7 @@ channel. It is issued during server startup is an indication that the initialization is proceeding normally. % AUTH_STATS_COMMS communication error in sending statistics data: %1 -An error was encountered when the authoritiative server tried to send data +An error was encountered when the authoritative server tried to send data to the statistics daemon. The message includes additional information describing the reason for the failure. @@ -257,4 +257,7 @@ request. The zone manager component has been informed of the request, but has returned an error response (which is included in the message). The NOTIFY request will not be honored. 
+% AUTH_INVALID_STATISTICS_DATA invalid specification of statistics data specified +An error was encountered when the authoritiative server specified +statistics data which is invalid for the auth specification file. diff --git a/src/bin/auth/auth_srv.cc b/src/bin/auth/auth_srv.cc index f29fd05e83..c9dac88e99 100644 --- a/src/bin/auth/auth_srv.cc +++ b/src/bin/auth/auth_srv.cc @@ -108,8 +108,8 @@ public: AbstractSession* xfrin_session_; /// In-memory data source. Currently class IN only for simplicity. - const RRClass memory_datasrc_class_; - AuthSrv::MemoryDataSrcPtr memory_datasrc_; + const RRClass memory_client_class_; + AuthSrv::InMemoryClientPtr memory_client_; /// Hot spot cache isc::datasrc::HotCache cache_; @@ -125,6 +125,10 @@ public: /// The TSIG keyring const shared_ptr* keyring_; + + /// Bind the ModuleSpec object in config_session_ with + /// isc:config::ModuleSpec::validateStatistics. + void registerStatisticsValidator(); private: std::string db_file_; @@ -139,13 +143,16 @@ private: /// Increment query counter void incCounter(const int protocol); + + // validateStatistics + bool validateStatistics(isc::data::ConstElementPtr data) const; }; AuthSrvImpl::AuthSrvImpl(const bool use_cache, AbstractXfroutClient& xfrout_client) : config_session_(NULL), xfrin_session_(NULL), - memory_datasrc_class_(RRClass::IN()), + memory_client_class_(RRClass::IN()), statistics_timer_(io_service_), counters_(), keyring_(NULL), @@ -290,7 +297,7 @@ makeErrorMessage(MessagePtr message, OutputBufferPtr buffer, message->toWire(renderer); } LOG_DEBUG(auth_logger, DBG_AUTH_MESSAGES, AUTH_SEND_ERROR_RESPONSE) - .arg(message->toText()); + .arg(renderer.getLength()).arg(*message); } } @@ -317,6 +324,7 @@ AuthSrv::setXfrinSession(AbstractSession* xfrin_session) { void AuthSrv::setConfigSession(ModuleCCSession* config_session) { impl_->config_session_ = config_session; + impl_->registerStatisticsValidator(); } void @@ -329,34 +337,34 @@ AuthSrv::getConfigSession() const { return (impl_->config_session_); } -AuthSrv::MemoryDataSrcPtr -AuthSrv::getMemoryDataSrc(const RRClass& rrclass) { +AuthSrv::InMemoryClientPtr +AuthSrv::getInMemoryClient(const RRClass& rrclass) { // XXX: for simplicity, we only support the IN class right now. 
- if (rrclass != impl_->memory_datasrc_class_) { + if (rrclass != impl_->memory_client_class_) { isc_throw(InvalidParameter, "Memory data source is not supported for RR class " << rrclass); } - return (impl_->memory_datasrc_); + return (impl_->memory_client_); } void -AuthSrv::setMemoryDataSrc(const isc::dns::RRClass& rrclass, - MemoryDataSrcPtr memory_datasrc) +AuthSrv::setInMemoryClient(const isc::dns::RRClass& rrclass, + InMemoryClientPtr memory_client) { // XXX: see above - if (rrclass != impl_->memory_datasrc_class_) { + if (rrclass != impl_->memory_client_class_) { isc_throw(InvalidParameter, "Memory data source is not supported for RR class " << rrclass); - } else if (!impl_->memory_datasrc_ && memory_datasrc) { + } else if (!impl_->memory_client_ && memory_client) { LOG_DEBUG(auth_logger, DBG_AUTH_OPS, AUTH_MEM_DATASRC_ENABLED) .arg(rrclass); - } else if (impl_->memory_datasrc_ && !memory_datasrc) { + } else if (impl_->memory_client_ && !memory_client) { LOG_DEBUG(auth_logger, DBG_AUTH_OPS, AUTH_MEM_DATASRC_DISABLED) .arg(rrclass); } - impl_->memory_datasrc_ = memory_datasrc; + impl_->memory_client_ = memory_client; } uint32_t @@ -505,10 +513,10 @@ AuthSrvImpl::processNormalQuery(const IOMessage& io_message, MessagePtr message, // If a memory data source is configured call the separate // Query::process() const ConstQuestionPtr question = *message->beginQuestion(); - if (memory_datasrc_ && memory_datasrc_class_ == question->getClass()) { + if (memory_client_ && memory_client_class_ == question->getClass()) { const RRType& qtype = question->getType(); const Name& qname = question->getName(); - auth::Query(*memory_datasrc_, qname, qtype, *message).process(); + auth::Query(*memory_client_, qname, qtype, *message).process(); } else { datasrc::Query query(*message, cache_, dnssec_ok); data_sources_.doQuery(query); @@ -670,6 +678,22 @@ AuthSrvImpl::incCounter(const int protocol) { } } +void +AuthSrvImpl::registerStatisticsValidator() { + counters_.registerStatisticsValidator( + boost::bind(&AuthSrvImpl::validateStatistics, this, _1)); +} + +bool +AuthSrvImpl::validateStatistics(isc::data::ConstElementPtr data) const { + if (config_session_ == NULL) { + return (false); + } + return ( + config_session_->getModuleSpec().validateStatistics( + data, true)); +} + ConstElementPtr AuthSrvImpl::setDbFile(ConstElementPtr config) { ConstElementPtr answer = isc::config::createAnswer(); diff --git a/src/bin/auth/auth_srv.h b/src/bin/auth/auth_srv.h index 7eede97cd1..f2259a2994 100644 --- a/src/bin/auth/auth_srv.h +++ b/src/bin/auth/auth_srv.h @@ -17,7 +17,7 @@ #include -// For MemoryDataSrcPtr below. This should be a temporary definition until +// For InMemoryClientPtr below. This should be a temporary definition until // we reorganize the data source framework. #include @@ -39,7 +39,7 @@ namespace isc { namespace datasrc { -class MemoryDataSrc; +class InMemoryClient; } namespace xfr { class AbstractXfroutClient; @@ -133,7 +133,7 @@ public: /// If there is a data source installed, it will be replaced with the /// new one. /// - /// In the current implementation, the SQLite data source and MemoryDataSrc + /// In the current implementation, the SQLite data source and InMemoryClient /// are assumed. /// We can enable memory data source and get the path of SQLite database by /// the \c config parameter. If we disabled memory data source, the SQLite @@ -233,16 +233,16 @@ public: /// void setXfrinSession(isc::cc::AbstractSession* xfrin_session); - /// A shared pointer type for \c MemoryDataSrc. 
+ /// A shared pointer type for \c InMemoryClient. /// /// This is defined inside the \c AuthSrv class as it's supposed to be /// a short term interface until we integrate the in-memory and other /// data source frameworks. - typedef boost::shared_ptr MemoryDataSrcPtr; + typedef boost::shared_ptr InMemoryClientPtr; - /// An immutable shared pointer type for \c MemoryDataSrc. - typedef boost::shared_ptr - ConstMemoryDataSrcPtr; + /// An immutable shared pointer type for \c InMemoryClient. + typedef boost::shared_ptr + ConstInMemoryClientPtr; /// Returns the in-memory data source configured for the \c AuthSrv, /// if any. @@ -260,11 +260,11 @@ public: /// \param rrclass The RR class of the requested in-memory data source. /// \return A pointer to the in-memory data source, if configured; /// otherwise NULL. - MemoryDataSrcPtr getMemoryDataSrc(const isc::dns::RRClass& rrclass); + InMemoryClientPtr getInMemoryClient(const isc::dns::RRClass& rrclass); /// Sets or replaces the in-memory data source of the specified RR class. /// - /// As noted in \c getMemoryDataSrc(), some RR classes may not be + /// As noted in \c getInMemoryClient(), some RR classes may not be /// supported, in which case an exception of class \c InvalidParameter /// will be thrown. /// This method never throws an exception otherwise. @@ -275,9 +275,9 @@ public: /// in-memory data source. /// /// \param rrclass The RR class of the in-memory data source to be set. - /// \param memory_datasrc A (shared) pointer to \c MemoryDataSrc to be set. - void setMemoryDataSrc(const isc::dns::RRClass& rrclass, - MemoryDataSrcPtr memory_datasrc); + /// \param memory_datasrc A (shared) pointer to \c InMemoryClient to be set. + void setInMemoryClient(const isc::dns::RRClass& rrclass, + InMemoryClientPtr memory_client); /// \brief Set the communication session with Statistics. /// diff --git a/src/bin/auth/b10-auth.8 b/src/bin/auth/b10-auth.8 index 0356683b11..aedadeefb0 100644 --- a/src/bin/auth/b10-auth.8 +++ b/src/bin/auth/b10-auth.8 @@ -2,12 +2,12 @@ .\" Title: b10-auth .\" Author: [FIXME: author] [see http://docbook.sf.net/el/author] .\" Generator: DocBook XSL Stylesheets v1.75.2 -.\" Date: March 8, 2011 +.\" Date: August 11, 2011 .\" Manual: BIND10 .\" Source: BIND10 .\" Language: English .\" -.TH "B10\-AUTH" "8" "March 8, 2011" "BIND10" "BIND10" +.TH "B10\-AUTH" "8" "August 11, 2011" "BIND10" "BIND10" .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- @@ -70,18 +70,6 @@ defines the path to the SQLite3 zone file when using the sqlite datasource\&. Th /usr/local/var/bind10\-devel/zone\&.sqlite3\&. .PP -\fIlisten_on\fR -is a list of addresses and ports for -\fBb10\-auth\fR -to listen on\&. The list items are the -\fIaddress\fR -string and -\fIport\fR -number\&. By default, -\fBb10\-auth\fR -listens on port 53 on the IPv6 (::) and IPv4 (0\&.0\&.0\&.0) wildcard addresses\&. -.PP - \fIdatasources\fR configures data sources\&. The list items include: \fItype\fR @@ -114,6 +102,18 @@ In this development version, currently this is only used for the memory data sou .RE .PP +\fIlisten_on\fR +is a list of addresses and ports for +\fBb10\-auth\fR +to listen on\&. The list items are the +\fIaddress\fR +string and +\fIport\fR +number\&. By default, +\fBb10\-auth\fR +listens on port 53 on the IPv6 (::) and IPv4 (0\&.0\&.0\&.0) wildcard addresses\&. 
+.PP + \fIstatistics\-interval\fR is the timer interval in seconds for \fBb10\-auth\fR @@ -164,6 +164,25 @@ immediately\&. \fBshutdown\fR exits \fBb10\-auth\fR\&. (Note that the BIND 10 boss process will restart this service\&.) +.SH "STATISTICS DATA" +.PP +The statistics data collected by the +\fBb10\-stats\fR +daemon include: +.PP +auth\&.queries\&.tcp +.RS 4 +Total count of queries received by the +\fBb10\-auth\fR +server over TCP since startup\&. +.RE +.PP +auth\&.queries\&.udp +.RS 4 +Total count of queries received by the +\fBb10\-auth\fR +server over UDP since startup\&. +.RE .SH "FILES" .PP diff --git a/src/bin/auth/b10-auth.xml b/src/bin/auth/b10-auth.xml index 2b533947d1..636f437993 100644 --- a/src/bin/auth/b10-auth.xml +++ b/src/bin/auth/b10-auth.xml @@ -20,7 +20,7 @@ - March 8, 2011 + August 11, 2011 @@ -131,15 +131,6 @@ /usr/local/var/bind10-devel/zone.sqlite3. - - listen_on is a list of addresses and ports for - b10-auth to listen on. - The list items are the address string - and port number. - By default, b10-auth listens on port 53 - on the IPv6 (::) and IPv4 (0.0.0.0) wildcard addresses. - - datasources configures data sources. The list items include: @@ -164,6 +155,15 @@ + + listen_on is a list of addresses and ports for + b10-auth to listen on. + The list items are the address string + and port number. + By default, b10-auth listens on port 53 + on the IPv6 (::) and IPv4 (0.0.0.0) wildcard addresses. + + statistics-interval is the timer interval in seconds for b10-auth to share its @@ -208,6 +208,34 @@ + + STATISTICS DATA + + + The statistics data collected by the b10-stats + daemon include: + + + + + + auth.queries.tcp + Total count of queries received by the + b10-auth server over TCP since startup. + + + + + auth.queries.udp + Total count of queries received by the + b10-auth server over UDP since startup. + + + + + + + FILES diff --git a/src/bin/auth/benchmarks/Makefile.am b/src/bin/auth/benchmarks/Makefile.am index cf3fe4aed8..53c019fbaa 100644 --- a/src/bin/auth/benchmarks/Makefile.am +++ b/src/bin/auth/benchmarks/Makefile.am @@ -13,10 +13,17 @@ query_bench_SOURCES += ../auth_srv.h ../auth_srv.cc query_bench_SOURCES += ../auth_config.h ../auth_config.cc query_bench_SOURCES += ../statistics.h ../statistics.cc query_bench_SOURCES += ../auth_log.h ../auth_log.cc +# This is a temporary workaround for #1206, where the InMemoryClient has been +# moved to an ldopened library. We could add that library to LDADD, but that +# is nonportable. When #1207 is done this becomes moot anyway, and the +# specific workaround is not needed anymore, so we can then remove this +# line again. +query_bench_SOURCES += ${top_srcdir}/src/lib/datasrc/memory_datasrc.cc nodist_query_bench_SOURCES = ../auth_messages.h ../auth_messages.cc query_bench_LDADD = $(top_builddir)/src/lib/dns/libdns++.la +query_bench_LDADD += $(top_builddir)/src/lib/util/libutil.la query_bench_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la query_bench_LDADD += $(top_builddir)/src/lib/bench/libbench.la query_bench_LDADD += $(top_builddir)/src/lib/datasrc/libdatasrc.la diff --git a/src/bin/auth/command.cc b/src/bin/auth/command.cc index fe3d7291c7..940d57bbb4 100644 --- a/src/bin/auth/command.cc +++ b/src/bin/auth/command.cc @@ -136,19 +136,21 @@ public: // that doesn't block other server operations. // TODO: we may (should?) want to check the "last load time" and // the timestamp of the file and skip loading if the file isn't newer. 
- shared_ptr newzone(new MemoryZone(oldzone->getClass(), - oldzone->getOrigin())); - newzone->load(oldzone->getFileName()); - oldzone->swap(*newzone); + shared_ptr zone_finder( + new InMemoryZoneFinder(old_zone_finder->getClass(), + old_zone_finder->getOrigin())); + zone_finder->load(old_zone_finder->getFileName()); + old_zone_finder->swap(*zone_finder); LOG_DEBUG(auth_logger, DBG_AUTH_OPS, AUTH_LOAD_ZONE) - .arg(newzone->getOrigin()).arg(newzone->getClass()); + .arg(zone_finder->getOrigin()).arg(zone_finder->getClass()); } private: - shared_ptr oldzone; // zone to be updated with the new file. + // zone finder to be updated with the new file. + shared_ptr old_zone_finder; // A helper private method to parse and validate command parameters. - // On success, it sets 'oldzone' to the zone to be updated. + // On success, it sets 'old_zone_finder' to the zone to be updated. // It returns true if everything is okay; and false if the command is // valid but there's no need for further process. bool validate(AuthSrv& server, isc::data::ConstElementPtr args) { @@ -176,7 +178,7 @@ private: const RRClass zone_class = class_elem ? RRClass(class_elem->stringValue()) : RRClass::IN(); - AuthSrv::MemoryDataSrcPtr datasrc(server.getMemoryDataSrc(zone_class)); + AuthSrv::InMemoryClientPtr datasrc(server.getInMemoryClient(zone_class)); if (datasrc == NULL) { isc_throw(AuthCommandError, "Memory data source is disabled"); } @@ -188,13 +190,14 @@ private: const Name origin(origin_elem->stringValue()); // Get the current zone - const MemoryDataSrc::FindResult result = datasrc->findZone(origin); + const InMemoryClient::FindResult result = datasrc->findZone(origin); if (result.code != result::SUCCESS) { isc_throw(AuthCommandError, "Zone " << origin << " is not found in data source"); } - oldzone = boost::dynamic_pointer_cast(result.zone); + old_zone_finder = boost::dynamic_pointer_cast( + result.zone_finder); return (true); } diff --git a/src/bin/auth/query.cc b/src/bin/auth/query.cc index 323f89077c..ab6404efad 100644 --- a/src/bin/auth/query.cc +++ b/src/bin/auth/query.cc @@ -19,7 +19,7 @@ #include #include -#include +#include #include @@ -31,14 +31,14 @@ namespace isc { namespace auth { void -Query::getAdditional(const Zone& zone, const RRset& rrset) const { +Query::getAdditional(ZoneFinder& zone, const RRset& rrset) const { RdataIteratorPtr rdata_iterator(rrset.getRdataIterator()); for (; !rdata_iterator->isLast(); rdata_iterator->next()) { const Rdata& rdata(rdata_iterator->getCurrent()); if (rrset.getType() == RRType::NS()) { // Need to perform the search in the "GLUE OK" mode. 
const generic::NS& ns = dynamic_cast(rdata); - findAddrs(zone, ns.getNSName(), Zone::FIND_GLUE_OK); + findAddrs(zone, ns.getNSName(), ZoneFinder::FIND_GLUE_OK); } else if (rrset.getType() == RRType::MX()) { const generic::MX& mx(dynamic_cast(rdata)); findAddrs(zone, mx.getMXName()); @@ -47,8 +47,8 @@ Query::getAdditional(const Zone& zone, const RRset& rrset) const { } void -Query::findAddrs(const Zone& zone, const Name& qname, - const Zone::FindOptions options) const +Query::findAddrs(ZoneFinder& zone, const Name& qname, + const ZoneFinder::FindOptions options) const { // Out of zone name NameComparisonResult result = zone.getOrigin().compare(qname); @@ -66,30 +66,31 @@ Query::findAddrs(const Zone& zone, const Name& qname, // Find A rrset if (qname_ != qname || qtype_ != RRType::A()) { - Zone::FindResult a_result = zone.find(qname, RRType::A(), NULL, - options); - if (a_result.code == Zone::SUCCESS) { + ZoneFinder::FindResult a_result = zone.find(qname, RRType::A(), NULL, + options | dnssec_opt_); + if (a_result.code == ZoneFinder::SUCCESS) { response_.addRRset(Message::SECTION_ADDITIONAL, - boost::const_pointer_cast(a_result.rrset)); + boost::const_pointer_cast(a_result.rrset), dnssec_); } } // Find AAAA rrset if (qname_ != qname || qtype_ != RRType::AAAA()) { - Zone::FindResult aaaa_result = - zone.find(qname, RRType::AAAA(), NULL, options); - if (aaaa_result.code == Zone::SUCCESS) { + ZoneFinder::FindResult aaaa_result = + zone.find(qname, RRType::AAAA(), NULL, options | dnssec_opt_); + if (aaaa_result.code == ZoneFinder::SUCCESS) { response_.addRRset(Message::SECTION_ADDITIONAL, - boost::const_pointer_cast(aaaa_result.rrset)); + boost::const_pointer_cast(aaaa_result.rrset), + dnssec_); } } } void -Query::putSOA(const Zone& zone) const { - Zone::FindResult soa_result(zone.find(zone.getOrigin(), - RRType::SOA())); - if (soa_result.code != Zone::SUCCESS) { +Query::putSOA(ZoneFinder& zone) const { + ZoneFinder::FindResult soa_result(zone.find(zone.getOrigin(), + RRType::SOA(), NULL, dnssec_opt_)); + if (soa_result.code != ZoneFinder::SUCCESS) { isc_throw(NoSOA, "There's no SOA record in zone " << zone.getOrigin().toText()); } else { @@ -99,21 +100,23 @@ Query::putSOA(const Zone& zone) const { * to insist. */ response_.addRRset(Message::SECTION_AUTHORITY, - boost::const_pointer_cast(soa_result.rrset)); + boost::const_pointer_cast(soa_result.rrset), dnssec_); } } void -Query::getAuthAdditional(const Zone& zone) const { +Query::getAuthAdditional(ZoneFinder& zone) const { // Fill in authority and addtional sections. 
- Zone::FindResult ns_result = zone.find(zone.getOrigin(), RRType::NS()); + ZoneFinder::FindResult ns_result = zone.find(zone.getOrigin(), + RRType::NS(), NULL, + dnssec_opt_); // zone origin name should have NS records - if (ns_result.code != Zone::SUCCESS) { + if (ns_result.code != ZoneFinder::SUCCESS) { isc_throw(NoApexNS, "There's no apex NS records in zone " << zone.getOrigin().toText()); } else { response_.addRRset(Message::SECTION_AUTHORITY, - boost::const_pointer_cast(ns_result.rrset)); + boost::const_pointer_cast(ns_result.rrset), dnssec_); // Handle additional for authority section getAdditional(zone, *ns_result.rrset); } @@ -125,8 +128,8 @@ Query::process() const { const bool qtype_is_any = (qtype_ == RRType::ANY()); response_.setHeaderFlag(Message::HEADERFLAG_AA, false); - const MemoryDataSrc::FindResult result = - memory_datasrc_.findZone(qname_); + const DataSourceClient::FindResult result = + datasrc_client_.findZone(qname_); // If we have no matching authoritative zone for the query name, return // REFUSED. In short, this is to be compatible with BIND 9, but the @@ -145,14 +148,15 @@ Query::process() const { while (keep_doing) { keep_doing = false; std::auto_ptr target(qtype_is_any ? new RRsetList : NULL); - const Zone::FindResult db_result(result.zone->find(qname_, qtype_, - target.get())); - + const ZoneFinder::FindResult db_result( + result.zone_finder->find(qname_, qtype_, target.get(), + dnssec_opt_)); switch (db_result.code) { - case Zone::DNAME: { + case ZoneFinder::DNAME: { // First, put the dname into the answer response_.addRRset(Message::SECTION_ANSWER, - boost::const_pointer_cast(db_result.rrset)); + boost::const_pointer_cast(db_result.rrset), + dnssec_); /* * Empty DNAME should never get in, as it is impossible to * create one in master file. @@ -188,10 +192,10 @@ Query::process() const { qname_.getLabelCount() - db_result.rrset->getName().getLabelCount()). concatenate(dname.getDname()))); - response_.addRRset(Message::SECTION_ANSWER, cname); + response_.addRRset(Message::SECTION_ANSWER, cname, dnssec_); break; } - case Zone::CNAME: + case ZoneFinder::CNAME: /* * We don't do chaining yet. Therefore handling a CNAME is * mostly the same as handling SUCCESS, but we didn't get @@ -202,48 +206,59 @@ Query::process() const { * So, just put it there. */ response_.addRRset(Message::SECTION_ANSWER, - boost::const_pointer_cast(db_result.rrset)); + boost::const_pointer_cast(db_result.rrset), + dnssec_); break; - case Zone::SUCCESS: + case ZoneFinder::SUCCESS: if (qtype_is_any) { // If quety type is ANY, insert all RRs under the domain // into answer section. BOOST_FOREACH(RRsetPtr rrset, *target) { - response_.addRRset(Message::SECTION_ANSWER, rrset); + response_.addRRset(Message::SECTION_ANSWER, rrset, + dnssec_); // Handle additional for answer section - getAdditional(*result.zone, *rrset.get()); + getAdditional(*result.zone_finder, *rrset.get()); } } else { response_.addRRset(Message::SECTION_ANSWER, - boost::const_pointer_cast(db_result.rrset)); + boost::const_pointer_cast(db_result.rrset), + dnssec_); // Handle additional for answer section - getAdditional(*result.zone, *db_result.rrset); + getAdditional(*result.zone_finder, *db_result.rrset); } // If apex NS records haven't been provided in the answer // section, insert apex NS records into the authority section // and AAAA/A RRS of each of the NS RDATA into the additional // section. 
- if (qname_ != result.zone->getOrigin() || - db_result.code != Zone::SUCCESS || + if (qname_ != result.zone_finder->getOrigin() || + db_result.code != ZoneFinder::SUCCESS || (qtype_ != RRType::NS() && !qtype_is_any)) { - getAuthAdditional(*result.zone); + getAuthAdditional(*result.zone_finder); } break; - case Zone::DELEGATION: + case ZoneFinder::DELEGATION: response_.setHeaderFlag(Message::HEADERFLAG_AA, false); response_.addRRset(Message::SECTION_AUTHORITY, - boost::const_pointer_cast(db_result.rrset)); - getAdditional(*result.zone, *db_result.rrset); + boost::const_pointer_cast(db_result.rrset), + dnssec_); + getAdditional(*result.zone_finder, *db_result.rrset); break; - case Zone::NXDOMAIN: + case ZoneFinder::NXDOMAIN: // Just empty answer with SOA in authority section response_.setRcode(Rcode::NXDOMAIN()); - putSOA(*result.zone); + putSOA(*result.zone_finder); break; - case Zone::NXRRSET: + case ZoneFinder::NXRRSET: // Just empty answer with SOA in authority section - putSOA(*result.zone); + putSOA(*result.zone_finder); + break; + default: + // These are new result codes (WILDCARD and WILDCARD_NXRRSET) + // They should not happen from the in-memory and the database + // backend isn't used yet. + // TODO: Implement before letting the database backends in + isc_throw(isc::NotImplemented, "Unknown result code"); break; } } diff --git a/src/bin/auth/query.h b/src/bin/auth/query.h index e0c6323e7b..0ebbed8a15 100644 --- a/src/bin/auth/query.h +++ b/src/bin/auth/query.h @@ -26,7 +26,7 @@ class RRset; } namespace datasrc { -class MemoryDataSrc; +class DataSourceClient; } namespace auth { @@ -36,10 +36,8 @@ namespace auth { /// /// Many of the design details for this class are still in flux. /// We'll revisit and update them as we add more functionality, for example: -/// - memory_datasrc parameter of the constructor. It is a data source that -/// uses in memory dedicated backend. /// - as a related point, we may have to pass the RR class of the query. -/// in the initial implementation the RR class is an attribute of memory +/// in the initial implementation the RR class is an attribute of /// datasource and omitted. It's not clear if this assumption holds with /// generic data sources. On the other hand, it will help keep /// implementation simpler, and we might rather want to modify the design @@ -51,7 +49,7 @@ namespace auth { /// separate attribute setter. /// - likewise, we'll eventually need to do per zone access control, for which /// we need querier's information such as its IP address. -/// - memory_datasrc and response may better be parameters to process() instead +/// - datasrc_client and response may better be parameters to process() instead /// of the constructor. /// /// Note: The class name is intentionally the same as the one used in @@ -71,7 +69,7 @@ private: /// Adds a SOA of the zone into the authority zone of response_. /// Can throw NoSOA. /// - void putSOA(const isc::datasrc::Zone& zone) const; + void putSOA(isc::datasrc::ZoneFinder& zone) const; /// \brief Look up additional data (i.e., address records for the names /// included in NS or MX records). @@ -83,11 +81,11 @@ private: /// This method may throw a exception because its underlying methods may /// throw exceptions. /// - /// \param zone The Zone wherein the additional data to the query is bo be - /// found. + /// \param zone The ZoneFinder through which the additional data for the + /// query is to be found. /// \param rrset The RRset (i.e., NS or MX rrset) which require additional /// processing. 
- void getAdditional(const isc::datasrc::Zone& zone, + void getAdditional(isc::datasrc::ZoneFinder& zone, const isc::dns::RRset& rrset) const; /// \brief Find address records for a specified name. @@ -102,18 +100,19 @@ private: /// The glue records must exactly match the name in the NS RDATA, without /// CNAME or wildcard processing. /// - /// \param zone The \c Zone wherein the address records is to be found. + /// \param zone The \c ZoneFinder through which the address records is to + /// be found. /// \param qname The name in rrset RDATA. /// \param options The search options. - void findAddrs(const isc::datasrc::Zone& zone, + void findAddrs(isc::datasrc::ZoneFinder& zone, const isc::dns::Name& qname, - const isc::datasrc::Zone::FindOptions options - = isc::datasrc::Zone::FIND_DEFAULT) const; + const isc::datasrc::ZoneFinder::FindOptions options + = isc::datasrc::ZoneFinder::FIND_DEFAULT) const; - /// \brief Look up \c Zone's NS and address records for the NS RDATA - /// (domain name) for authoritative answer. + /// \brief Look up a zone's NS RRset and their address records for an + /// authoritative answer. /// - /// On returning an authoritative answer, insert the \c Zone's NS into the + /// On returning an authoritative answer, insert a zone's NS into the /// authority section and AAAA/A RRs of each of the NS RDATA into the /// additional section. /// @@ -126,25 +125,29 @@ private: /// include AAAA/A RRs under a zone cut in additional section. (BIND 9 /// excludes under-cut RRs; NSD include them.) /// - /// \param zone The \c Zone wherein the additional data to the query is to - /// be found. - void getAuthAdditional(const isc::datasrc::Zone& zone) const; + /// \param zone The \c ZoneFinder through which the NS and additional data + /// for the query are to be found. + void getAuthAdditional(isc::datasrc::ZoneFinder& zone) const; public: /// Constructor from query parameters. /// /// This constructor never throws an exception. /// - /// \param memory_datasrc The memory datasource wherein the answer to the query is + /// \param datasrc_client The datasource wherein the answer to the query is /// to be found. /// \param qname The query name /// \param qtype The RR type of the query /// \param response The response message to store the answer to the query. - Query(const isc::datasrc::MemoryDataSrc& memory_datasrc, + /// \param dnssec If the answer should include signatures and NSEC/NSEC3 if + /// possible. + Query(const isc::datasrc::DataSourceClient& datasrc_client, const isc::dns::Name& qname, const isc::dns::RRType& qtype, - isc::dns::Message& response) : - memory_datasrc_(memory_datasrc), qname_(qname), qtype_(qtype), - response_(response) + isc::dns::Message& response, bool dnssec = false) : + datasrc_client_(datasrc_client), qname_(qname), qtype_(qtype), + response_(response), dnssec_(dnssec), + dnssec_opt_(dnssec ? isc::datasrc::ZoneFinder::FIND_DNSSEC : + isc::datasrc::ZoneFinder::FIND_DEFAULT) {} /// Process the query. @@ -157,7 +160,7 @@ public: /// successful search would result in adding a corresponding RRset to /// the answer section of the response. /// - /// If no matching zone is found in the memory datasource, the RCODE of + /// If no matching zone is found in the datasource, the RCODE of /// SERVFAIL will be set in the response. 
/// Note: this is different from the error code that BIND 9 returns /// by default when it's configured as an authoritative-only server (and @@ -208,10 +211,12 @@ public: }; private: - const isc::datasrc::MemoryDataSrc& memory_datasrc_; + const isc::datasrc::DataSourceClient& datasrc_client_; const isc::dns::Name& qname_; const isc::dns::RRType& qtype_; isc::dns::Message& response_; + const bool dnssec_; + const isc::datasrc::ZoneFinder::FindOptions dnssec_opt_; }; } diff --git a/src/bin/auth/statistics.cc b/src/bin/auth/statistics.cc index 76e50074fc..e62719f7e2 100644 --- a/src/bin/auth/statistics.cc +++ b/src/bin/auth/statistics.cc @@ -37,11 +37,14 @@ public: void inc(const AuthCounters::CounterType type); bool submitStatistics() const; void setStatisticsSession(isc::cc::AbstractSession* statistics_session); + void registerStatisticsValidator + (AuthCounters::validator_type validator); // Currently for testing purpose only uint64_t getCounter(const AuthCounters::CounterType type) const; private: std::vector counters_; isc::cc::AbstractSession* statistics_session_; + AuthCounters::validator_type validator_; }; AuthCountersImpl::AuthCountersImpl() : @@ -67,16 +70,25 @@ AuthCountersImpl::submitStatistics() const { } std::stringstream statistics_string; statistics_string << "{\"command\": [\"set\"," - << "{ \"stats_data\": " - << "{ \"auth.queries.udp\": " + << "{ \"owner\": \"Auth\"," + << " \"data\":" + << "{ \"queries.udp\": " << counters_.at(AuthCounters::COUNTER_UDP_QUERY) - << ", \"auth.queries.tcp\": " + << ", \"queries.tcp\": " << counters_.at(AuthCounters::COUNTER_TCP_QUERY) << " }" << "}" << "]}"; isc::data::ConstElementPtr statistics_element = isc::data::Element::fromJSON(statistics_string); + // validate the statistics data before send + if (validator_) { + if (!validator_( + statistics_element->get("command")->get(1)->get("data"))) { + LOG_ERROR(auth_logger, AUTH_INVALID_STATISTICS_DATA); + return (false); + } + } try { // group_{send,recv}msg() can throw an exception when encountering // an error, and group_recvmsg() will throw an exception on timeout. @@ -105,6 +117,13 @@ AuthCountersImpl::setStatisticsSession statistics_session_ = statistics_session; } +void +AuthCountersImpl::registerStatisticsValidator + (AuthCounters::validator_type validator) +{ + validator_ = validator; +} + // Currently for testing purpose only uint64_t AuthCountersImpl::getCounter(const AuthCounters::CounterType type) const { @@ -139,3 +158,10 @@ uint64_t AuthCounters::getCounter(const AuthCounters::CounterType type) const { return (impl_->getCounter(type)); } + +void +AuthCounters::registerStatisticsValidator + (AuthCounters::validator_type validator) const +{ + return (impl_->registerStatisticsValidator(validator)); +} diff --git a/src/bin/auth/statistics.h b/src/bin/auth/statistics.h index 5bf643656d..c930414c65 100644 --- a/src/bin/auth/statistics.h +++ b/src/bin/auth/statistics.h @@ -131,6 +131,26 @@ public: /// \return the value of the counter specified by \a type. /// uint64_t getCounter(const AuthCounters::CounterType type) const; + + /// \brief A type of validation function for the specification in + /// isc::config::ModuleSpec. + /// + /// This type might be useful for not only statistics + /// specificatoin but also for config_data specification and for + /// commnad. + /// + typedef boost::function + validator_type; + + /// \brief Register a function type of the statistics validation + /// function for AuthCounters. + /// + /// This method never throws an exception. 
+ /// + /// \param validator A function type of the validation of + /// statistics specification. + /// + void registerStatisticsValidator(AuthCounters::validator_type validator) const; }; #endif // __STATISTICS_H diff --git a/src/bin/auth/tests/Makefile.am b/src/bin/auth/tests/Makefile.am index 71520c287f..d27386e62e 100644 --- a/src/bin/auth/tests/Makefile.am +++ b/src/bin/auth/tests/Makefile.am @@ -37,6 +37,13 @@ run_unittests_SOURCES += query_unittest.cc run_unittests_SOURCES += change_user_unittest.cc run_unittests_SOURCES += statistics_unittest.cc run_unittests_SOURCES += run_unittests.cc +# This is a temporary workaround for #1206, where the InMemoryClient has been +# moved to an ldopened library. We could add that library to LDADD, but that +# is nonportable. When #1207 is done this becomes moot anyway, and the +# specific workaround is not needed anymore, so we can then remove this +# line again. +run_unittests_SOURCES += ${top_srcdir}/src/lib/datasrc/memory_datasrc.cc + nodist_run_unittests_SOURCES = ../auth_messages.h ../auth_messages.cc @@ -47,6 +54,7 @@ run_unittests_LDADD += $(SQLITE_LIBS) run_unittests_LDADD += $(top_builddir)/src/lib/testutils/libtestutils.la run_unittests_LDADD += $(top_builddir)/src/lib/datasrc/libdatasrc.la run_unittests_LDADD += $(top_builddir)/src/lib/dns/libdns++.la +run_unittests_LDADD += $(top_builddir)/src/lib/util/libutil.la run_unittests_LDADD += $(top_builddir)/src/lib/asiodns/libasiodns.la run_unittests_LDADD += $(top_builddir)/src/lib/asiolink/libasiolink.la run_unittests_LDADD += $(top_builddir)/src/lib/config/libcfgclient.la diff --git a/src/bin/auth/tests/auth_srv_unittest.cc b/src/bin/auth/tests/auth_srv_unittest.cc index 2b20d65a4d..469858857d 100644 --- a/src/bin/auth/tests/auth_srv_unittest.cc +++ b/src/bin/auth/tests/auth_srv_unittest.cc @@ -651,17 +651,17 @@ TEST_F(AuthSrvTest, updateConfigFail) { QR_FLAG | AA_FLAG, 1, 1, 1, 0); } -TEST_F(AuthSrvTest, updateWithMemoryDataSrc) { +TEST_F(AuthSrvTest, updateWithInMemoryClient) { // Test configuring memory data source. Detailed test cases are covered // in the configuration tests. We only check the AuthSrv interface here. // By default memory data source isn't enabled - EXPECT_EQ(AuthSrv::MemoryDataSrcPtr(), server.getMemoryDataSrc(rrclass)); + EXPECT_EQ(AuthSrv::InMemoryClientPtr(), server.getInMemoryClient(rrclass)); updateConfig(&server, "{\"datasources\": [{\"type\": \"memory\"}]}", true); // after successful configuration, we should have one (with empty zoneset). - ASSERT_NE(AuthSrv::MemoryDataSrcPtr(), server.getMemoryDataSrc(rrclass)); - EXPECT_EQ(0, server.getMemoryDataSrc(rrclass)->getZoneCount()); + ASSERT_NE(AuthSrv::InMemoryClientPtr(), server.getInMemoryClient(rrclass)); + EXPECT_EQ(0, server.getInMemoryClient(rrclass)->getZoneCount()); // The memory data source is empty, should return REFUSED rcode. 
createDataFromFile("examplequery_fromWire.wire"); @@ -672,7 +672,7 @@ TEST_F(AuthSrvTest, updateWithMemoryDataSrc) { opcode.getCode(), QR_FLAG, 1, 0, 0, 0); } -TEST_F(AuthSrvTest, chQueryWithMemoryDataSrc) { +TEST_F(AuthSrvTest, chQueryWithInMemoryClient) { // Configure memory data source for class IN updateConfig(&server, "{\"datasources\": " "[{\"class\": \"IN\", \"type\": \"memory\"}]}", true); diff --git a/src/bin/auth/tests/command_unittest.cc b/src/bin/auth/tests/command_unittest.cc index 3fdd08601e..8a82367aea 100644 --- a/src/bin/auth/tests/command_unittest.cc +++ b/src/bin/auth/tests/command_unittest.cc @@ -48,9 +48,9 @@ using namespace isc::datasrc; using namespace isc::config; namespace { -class AuthConmmandTest : public ::testing::Test { +class AuthCommandTest : public ::testing::Test { protected: - AuthConmmandTest() : server(false, xfrout), rcode(-1) { + AuthCommandTest() : server(false, xfrout), rcode(-1) { server.setStatisticsSession(&statistics_session); } void checkAnswer(const int expected_code) { @@ -60,21 +60,20 @@ protected: MockSession statistics_session; MockXfroutClient xfrout; AuthSrv server; - AuthSrv::ConstMemoryDataSrcPtr memory_datasrc; ConstElementPtr result; int rcode; public: void stopServer(); // need to be public for boost::bind }; -TEST_F(AuthConmmandTest, unknownCommand) { +TEST_F(AuthCommandTest, unknownCommand) { result = execAuthServerCommand(server, "no_such_command", ConstElementPtr()); parseAnswer(rcode, result); EXPECT_EQ(1, rcode); } -TEST_F(AuthConmmandTest, DISABLED_unexpectedException) { +TEST_F(AuthCommandTest, DISABLED_unexpectedException) { // execAuthServerCommand() won't catch standard exceptions. // Skip this test for now: ModuleCCSession doesn't seem to validate // commands. @@ -83,7 +82,7 @@ TEST_F(AuthConmmandTest, DISABLED_unexpectedException) { runtime_error); } -TEST_F(AuthConmmandTest, sendStatistics) { +TEST_F(AuthCommandTest, sendStatistics) { result = execAuthServerCommand(server, "sendstats", ConstElementPtr()); // Just check some message has been sent. Detailed tests specific to // statistics are done in its own tests. @@ -92,15 +91,15 @@ TEST_F(AuthConmmandTest, sendStatistics) { } void -AuthConmmandTest::stopServer() { +AuthCommandTest::stopServer() { result = execAuthServerCommand(server, "shutdown", ConstElementPtr()); parseAnswer(rcode, result); assert(rcode == 0); // make sure the test stops when something is wrong } -TEST_F(AuthConmmandTest, shutdown) { +TEST_F(AuthCommandTest, shutdown) { isc::asiolink::IntervalTimer itimer(server.getIOService()); - itimer.setup(boost::bind(&AuthConmmandTest::stopServer, this), 1); + itimer.setup(boost::bind(&AuthCommandTest::stopServer, this), 1); server.getIOService().run(); EXPECT_EQ(0, rcode); } @@ -110,18 +109,18 @@ TEST_F(AuthConmmandTest, shutdown) { // zones, and checks the zones are correctly loaded. 
void zoneChecks(AuthSrv& server) { - EXPECT_TRUE(server.getMemoryDataSrc(RRClass::IN())); - EXPECT_EQ(Zone::SUCCESS, server.getMemoryDataSrc(RRClass::IN())-> - findZone(Name("ns.test1.example")).zone-> + EXPECT_TRUE(server.getInMemoryClient(RRClass::IN())); + EXPECT_EQ(ZoneFinder::SUCCESS, server.getInMemoryClient(RRClass::IN())-> + findZone(Name("ns.test1.example")).zone_finder-> find(Name("ns.test1.example"), RRType::A()).code); - EXPECT_EQ(Zone::NXRRSET, server.getMemoryDataSrc(RRClass::IN())-> - findZone(Name("ns.test1.example")).zone-> + EXPECT_EQ(ZoneFinder::NXRRSET, server.getInMemoryClient(RRClass::IN())-> + findZone(Name("ns.test1.example")).zone_finder-> find(Name("ns.test1.example"), RRType::AAAA()).code); - EXPECT_EQ(Zone::SUCCESS, server.getMemoryDataSrc(RRClass::IN())-> - findZone(Name("ns.test2.example")).zone-> + EXPECT_EQ(ZoneFinder::SUCCESS, server.getInMemoryClient(RRClass::IN())-> + findZone(Name("ns.test2.example")).zone_finder-> find(Name("ns.test2.example"), RRType::A()).code); - EXPECT_EQ(Zone::NXRRSET, server.getMemoryDataSrc(RRClass::IN())-> - findZone(Name("ns.test2.example")).zone-> + EXPECT_EQ(ZoneFinder::NXRRSET, server.getInMemoryClient(RRClass::IN())-> + findZone(Name("ns.test2.example")).zone_finder-> find(Name("ns.test2.example"), RRType::AAAA()).code); } @@ -147,25 +146,25 @@ configureZones(AuthSrv& server) { void newZoneChecks(AuthSrv& server) { - EXPECT_TRUE(server.getMemoryDataSrc(RRClass::IN())); - EXPECT_EQ(Zone::SUCCESS, server.getMemoryDataSrc(RRClass::IN())-> - findZone(Name("ns.test1.example")).zone-> + EXPECT_TRUE(server.getInMemoryClient(RRClass::IN())); + EXPECT_EQ(ZoneFinder::SUCCESS, server.getInMemoryClient(RRClass::IN())-> + findZone(Name("ns.test1.example")).zone_finder-> find(Name("ns.test1.example"), RRType::A()).code); // now test1.example should have ns/AAAA - EXPECT_EQ(Zone::SUCCESS, server.getMemoryDataSrc(RRClass::IN())-> - findZone(Name("ns.test1.example")).zone-> + EXPECT_EQ(ZoneFinder::SUCCESS, server.getInMemoryClient(RRClass::IN())-> + findZone(Name("ns.test1.example")).zone_finder-> find(Name("ns.test1.example"), RRType::AAAA()).code); // test2.example shouldn't change - EXPECT_EQ(Zone::SUCCESS, server.getMemoryDataSrc(RRClass::IN())-> - findZone(Name("ns.test2.example")).zone-> + EXPECT_EQ(ZoneFinder::SUCCESS, server.getInMemoryClient(RRClass::IN())-> + findZone(Name("ns.test2.example")).zone_finder-> find(Name("ns.test2.example"), RRType::A()).code); - EXPECT_EQ(Zone::NXRRSET, server.getMemoryDataSrc(RRClass::IN())-> - findZone(Name("ns.test2.example")).zone-> + EXPECT_EQ(ZoneFinder::NXRRSET, server.getInMemoryClient(RRClass::IN())-> + findZone(Name("ns.test2.example")).zone_finder-> find(Name("ns.test2.example"), RRType::AAAA()).code); } -TEST_F(AuthConmmandTest, loadZone) { +TEST_F(AuthCommandTest, loadZone) { configureZones(server); ASSERT_EQ(0, system(INSTALL_PROG " " TEST_DATA_DIR @@ -182,7 +181,7 @@ TEST_F(AuthConmmandTest, loadZone) { newZoneChecks(server); } -TEST_F(AuthConmmandTest, loadBrokenZone) { +TEST_F(AuthCommandTest, loadBrokenZone) { configureZones(server); ASSERT_EQ(0, system(INSTALL_PROG " " TEST_DATA_DIR @@ -195,7 +194,7 @@ TEST_F(AuthConmmandTest, loadBrokenZone) { zoneChecks(server); // zone shouldn't be replaced } -TEST_F(AuthConmmandTest, loadUnreadableZone) { +TEST_F(AuthCommandTest, loadUnreadableZone) { configureZones(server); // install the zone file as unreadable @@ -209,7 +208,7 @@ TEST_F(AuthConmmandTest, loadUnreadableZone) { zoneChecks(server); // zone shouldn't be replaced } 
-TEST_F(AuthConmmandTest, loadZoneWithoutDataSrc) { +TEST_F(AuthCommandTest, loadZoneWithoutDataSrc) { // try to execute load command without configuring the zone beforehand. // it should fail. result = execAuthServerCommand(server, "loadzone", @@ -218,7 +217,7 @@ TEST_F(AuthConmmandTest, loadZoneWithoutDataSrc) { checkAnswer(1); } -TEST_F(AuthConmmandTest, loadSqlite3DataSrc) { +TEST_F(AuthCommandTest, loadSqlite3DataSrc) { // For sqlite3 data source we don't have to do anything (the data source // (re)loads itself automatically) result = execAuthServerCommand(server, "loadzone", @@ -228,7 +227,7 @@ TEST_F(AuthConmmandTest, loadSqlite3DataSrc) { checkAnswer(0); } -TEST_F(AuthConmmandTest, loadZoneInvalidParams) { +TEST_F(AuthCommandTest, loadZoneInvalidParams) { configureZones(server); // null arg diff --git a/src/bin/auth/tests/config_unittest.cc b/src/bin/auth/tests/config_unittest.cc index 0890c55a02..dadb0ee390 100644 --- a/src/bin/auth/tests/config_unittest.cc +++ b/src/bin/auth/tests/config_unittest.cc @@ -57,12 +57,12 @@ protected: TEST_F(AuthConfigTest, datasourceConfig) { // By default, we don't have any in-memory data source. - EXPECT_EQ(AuthSrv::MemoryDataSrcPtr(), server.getMemoryDataSrc(rrclass)); + EXPECT_EQ(AuthSrv::InMemoryClientPtr(), server.getInMemoryClient(rrclass)); configureAuthServer(server, Element::fromJSON( "{\"datasources\": [{\"type\": \"memory\"}]}")); // after successful configuration, we should have one (with empty zoneset). - ASSERT_NE(AuthSrv::MemoryDataSrcPtr(), server.getMemoryDataSrc(rrclass)); - EXPECT_EQ(0, server.getMemoryDataSrc(rrclass)->getZoneCount()); + ASSERT_NE(AuthSrv::InMemoryClientPtr(), server.getInMemoryClient(rrclass)); + EXPECT_EQ(0, server.getInMemoryClient(rrclass)->getZoneCount()); } TEST_F(AuthConfigTest, databaseConfig) { @@ -82,7 +82,7 @@ TEST_F(AuthConfigTest, versionConfig) { } TEST_F(AuthConfigTest, exceptionGuarantee) { - EXPECT_EQ(AuthSrv::MemoryDataSrcPtr(), server.getMemoryDataSrc(rrclass)); + EXPECT_EQ(AuthSrv::InMemoryClientPtr(), server.getInMemoryClient(rrclass)); // This configuration contains an invalid item, which will trigger // an exception. EXPECT_THROW(configureAuthServer( @@ -92,7 +92,7 @@ TEST_F(AuthConfigTest, exceptionGuarantee) { " \"no_such_config_var\": 1}")), AuthConfigError); // The server state shouldn't change - EXPECT_EQ(AuthSrv::MemoryDataSrcPtr(), server.getMemoryDataSrc(rrclass)); + EXPECT_EQ(AuthSrv::InMemoryClientPtr(), server.getInMemoryClient(rrclass)); } TEST_F(AuthConfigTest, exceptionConversion) { @@ -154,22 +154,22 @@ protected: TEST_F(MemoryDatasrcConfigTest, addZeroDataSrc) { parser->build(Element::fromJSON("[]")); parser->commit(); - EXPECT_EQ(AuthSrv::MemoryDataSrcPtr(), server.getMemoryDataSrc(rrclass)); + EXPECT_EQ(AuthSrv::InMemoryClientPtr(), server.getInMemoryClient(rrclass)); } TEST_F(MemoryDatasrcConfigTest, addEmpty) { // By default, we don't have any in-memory data source. 
- EXPECT_EQ(AuthSrv::MemoryDataSrcPtr(), server.getMemoryDataSrc(rrclass)); + EXPECT_EQ(AuthSrv::InMemoryClientPtr(), server.getInMemoryClient(rrclass)); parser->build(Element::fromJSON("[{\"type\": \"memory\"}]")); parser->commit(); - EXPECT_EQ(0, server.getMemoryDataSrc(rrclass)->getZoneCount()); + EXPECT_EQ(0, server.getInMemoryClient(rrclass)->getZoneCount()); } TEST_F(MemoryDatasrcConfigTest, addZeroZone) { parser->build(Element::fromJSON("[{\"type\": \"memory\"," " \"zones\": []}]")); parser->commit(); - EXPECT_EQ(0, server.getMemoryDataSrc(rrclass)->getZoneCount()); + EXPECT_EQ(0, server.getInMemoryClient(rrclass)->getZoneCount()); } TEST_F(MemoryDatasrcConfigTest, addOneZone) { @@ -179,10 +179,10 @@ TEST_F(MemoryDatasrcConfigTest, addOneZone) { " \"file\": \"" TEST_DATA_DIR "/example.zone\"}]}]"))); EXPECT_NO_THROW(parser->commit()); - EXPECT_EQ(1, server.getMemoryDataSrc(rrclass)->getZoneCount()); + EXPECT_EQ(1, server.getInMemoryClient(rrclass)->getZoneCount()); // Check it actually loaded something - EXPECT_EQ(Zone::SUCCESS, server.getMemoryDataSrc(rrclass)->findZone( - Name("ns.example.com.")).zone->find(Name("ns.example.com."), + EXPECT_EQ(ZoneFinder::SUCCESS, server.getInMemoryClient(rrclass)->findZone( + Name("ns.example.com.")).zone_finder->find(Name("ns.example.com."), RRType::A()).code); } @@ -199,7 +199,7 @@ TEST_F(MemoryDatasrcConfigTest, addMultiZones) { " \"file\": \"" TEST_DATA_DIR "/example.net.zone\"}]}]"))); EXPECT_NO_THROW(parser->commit()); - EXPECT_EQ(3, server.getMemoryDataSrc(rrclass)->getZoneCount()); + EXPECT_EQ(3, server.getInMemoryClient(rrclass)->getZoneCount()); } TEST_F(MemoryDatasrcConfigTest, replace) { @@ -209,9 +209,9 @@ TEST_F(MemoryDatasrcConfigTest, replace) { " \"file\": \"" TEST_DATA_DIR "/example.zone\"}]}]"))); EXPECT_NO_THROW(parser->commit()); - EXPECT_EQ(1, server.getMemoryDataSrc(rrclass)->getZoneCount()); + EXPECT_EQ(1, server.getInMemoryClient(rrclass)->getZoneCount()); EXPECT_EQ(isc::datasrc::result::SUCCESS, - server.getMemoryDataSrc(rrclass)->findZone( + server.getInMemoryClient(rrclass)->findZone( Name("example.com")).code); // create a new parser, and install a new set of configuration. It @@ -227,9 +227,9 @@ TEST_F(MemoryDatasrcConfigTest, replace) { " \"file\": \"" TEST_DATA_DIR "/example.net.zone\"}]}]"))); EXPECT_NO_THROW(parser->commit()); - EXPECT_EQ(2, server.getMemoryDataSrc(rrclass)->getZoneCount()); + EXPECT_EQ(2, server.getInMemoryClient(rrclass)->getZoneCount()); EXPECT_EQ(isc::datasrc::result::NOTFOUND, - server.getMemoryDataSrc(rrclass)->findZone( + server.getInMemoryClient(rrclass)->findZone( Name("example.com")).code); } @@ -241,9 +241,9 @@ TEST_F(MemoryDatasrcConfigTest, exception) { " \"file\": \"" TEST_DATA_DIR "/example.zone\"}]}]"))); EXPECT_NO_THROW(parser->commit()); - EXPECT_EQ(1, server.getMemoryDataSrc(rrclass)->getZoneCount()); + EXPECT_EQ(1, server.getInMemoryClient(rrclass)->getZoneCount()); EXPECT_EQ(isc::datasrc::result::SUCCESS, - server.getMemoryDataSrc(rrclass)->findZone( + server.getInMemoryClient(rrclass)->findZone( Name("example.com")).code); // create a new parser, and try to load something. 
It will throw, @@ -262,9 +262,9 @@ TEST_F(MemoryDatasrcConfigTest, exception) { // commit it // The original should be untouched - EXPECT_EQ(1, server.getMemoryDataSrc(rrclass)->getZoneCount()); + EXPECT_EQ(1, server.getInMemoryClient(rrclass)->getZoneCount()); EXPECT_EQ(isc::datasrc::result::SUCCESS, - server.getMemoryDataSrc(rrclass)->findZone( + server.getInMemoryClient(rrclass)->findZone( Name("example.com")).code); } @@ -275,13 +275,13 @@ TEST_F(MemoryDatasrcConfigTest, remove) { " \"file\": \"" TEST_DATA_DIR "/example.zone\"}]}]"))); EXPECT_NO_THROW(parser->commit()); - EXPECT_EQ(1, server.getMemoryDataSrc(rrclass)->getZoneCount()); + EXPECT_EQ(1, server.getInMemoryClient(rrclass)->getZoneCount()); delete parser; parser = createAuthConfigParser(server, "datasources"); EXPECT_NO_THROW(parser->build(Element::fromJSON("[]"))); EXPECT_NO_THROW(parser->commit()); - EXPECT_EQ(AuthSrv::MemoryDataSrcPtr(), server.getMemoryDataSrc(rrclass)); + EXPECT_EQ(AuthSrv::InMemoryClientPtr(), server.getInMemoryClient(rrclass)); } TEST_F(MemoryDatasrcConfigTest, adDuplicateZones) { diff --git a/src/bin/auth/tests/query_unittest.cc b/src/bin/auth/tests/query_unittest.cc index c68b672c8a..b2d1094b9d 100644 --- a/src/bin/auth/tests/query_unittest.cc +++ b/src/bin/auth/tests/query_unittest.cc @@ -93,9 +93,9 @@ const char* const other_zone_rrs = "mx.delegation.example.com. 3600 IN A 192.0.2.100\n"; // This is a mock Zone class for testing. -// It is a derived class of Zone for the convenient of tests. +// It is a derived class of ZoneFinder for the convenient of tests. // Its find() method emulates the common behavior of protocol compliant -// zone classes, but simplifies some minor cases and also supports broken +// ZoneFinder classes, but simplifies some minor cases and also supports broken // behavior. // For simplicity, most names are assumed to be "in zone"; there's only // one zone cut at the point of name "delegation.example.com". @@ -103,15 +103,16 @@ const char* const other_zone_rrs = // will result in DNAME. // This mock zone doesn't handle empty non terminal nodes (if we need to test // such cases find() should have specialized code for it). -class MockZone : public Zone { +class MockZoneFinder : public ZoneFinder { public: - MockZone() : + MockZoneFinder() : origin_(Name("example.com")), delegation_name_("delegation.example.com"), dname_name_("dname.example.com"), has_SOA_(true), has_apex_NS_(true), - rrclass_(RRClass::IN()) + rrclass_(RRClass::IN()), + include_rrsig_anyway_(false) { stringstream zone_stream; zone_stream << soa_txt << zone_ns_txt << ns_addrs_txt << @@ -120,14 +121,14 @@ public: other_zone_rrs; masterLoad(zone_stream, origin_, rrclass_, - boost::bind(&MockZone::loadRRset, this, _1)); + boost::bind(&MockZoneFinder::loadRRset, this, _1)); } - virtual const isc::dns::Name& getOrigin() const { return (origin_); } - virtual const isc::dns::RRClass& getClass() const { return (rrclass_); } + virtual isc::dns::Name getOrigin() const { return (origin_); } + virtual isc::dns::RRClass getClass() const { return (rrclass_); } virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, RRsetList* target = NULL, - const FindOptions options = FIND_DEFAULT) const; + const FindOptions options = FIND_DEFAULT); // If false is passed, it makes the zone broken as if it didn't have the // SOA. @@ -137,11 +138,18 @@ public: // the apex NS. 
void setApexNSFlag(bool on) { has_apex_NS_ = on; } + // Turn this on if you want it to return RRSIGs regardless of FIND_GLUE_OK + void setIncludeRRSIGAnyway(bool on) { include_rrsig_anyway_ = on; } + + Name findPreviousName(const Name&) const { + isc_throw(isc::NotImplemented, "Mock doesn't support previous name"); + } + private: typedef map RRsetStore; typedef map Domains; Domains domains_; - void loadRRset(ConstRRsetPtr rrset) { + void loadRRset(RRsetPtr rrset) { domains_[rrset->getName()][rrset->getType()] = rrset; if (rrset->getName() == delegation_name_ && rrset->getType() == RRType::NS()) { @@ -149,6 +157,26 @@ private: } else if (rrset->getName() == dname_name_ && rrset->getType() == RRType::DNAME()) { dname_rrset_ = rrset; + // Add some signatures + } else if (rrset->getName() == Name("example.com.") && + rrset->getType() == RRType::NS()) { + rrset->addRRsig(RdataPtr(new generic::RRSIG("NS 5 3 3600 " + "20000101000000 " + "20000201000000 " + "12345 example.com. " + "FAKEFAKEFAKE"))); + } else if (rrset->getType() == RRType::A()) { + rrset->addRRsig(RdataPtr(new generic::RRSIG("A 5 3 3600 " + "20000101000000 " + "20000201000000 " + "12345 example.com. " + "FAKEFAKEFAKE"))); + } else if (rrset->getType() == RRType::AAAA()) { + rrset->addRRsig(RdataPtr(new generic::RRSIG("AAAA 5 3 3600 " + "20000101000000 " + "20000201000000 " + "12345 example.com. " + "FAKEFAKEFAKE"))); } } @@ -161,11 +189,12 @@ private: ConstRRsetPtr delegation_rrset_; ConstRRsetPtr dname_rrset_; const RRClass rrclass_; + bool include_rrsig_anyway_; }; -Zone::FindResult -MockZone::find(const Name& name, const RRType& type, - RRsetList* target, const FindOptions options) const +ZoneFinder::FindResult +MockZoneFinder::find(const Name& name, const RRType& type, + RRsetList* target, const FindOptions options) { // Emulating a broken zone: mandatory apex RRs are missing if specifically // configured so (which are rare cases). @@ -195,7 +224,26 @@ MockZone::find(const Name& name, const RRType& type, RRsetStore::const_iterator found_rrset = found_domain->second.find(type); if (found_rrset != found_domain->second.end()) { - return (FindResult(SUCCESS, found_rrset->second)); + ConstRRsetPtr rrset; + // Strip whatever signature there is in case DNSSEC is not required + // Just to make sure the Query asks for it when it is needed + if (options & ZoneFinder::FIND_DNSSEC || + include_rrsig_anyway_ || + !found_rrset->second->getRRsig()) { + rrset = found_rrset->second; + } else { + RRsetPtr noconst(new RRset(found_rrset->second->getName(), + found_rrset->second->getClass(), + found_rrset->second->getType(), + found_rrset->second->getTTL())); + for (RdataIteratorPtr + i(found_rrset->second->getRdataIterator()); + !i->isLast(); i->next()) { + noconst->addRdata(i->getCurrent()); + } + rrset = noconst; + } + return (FindResult(SUCCESS, rrset)); } // If not found but we have a target, fill it with all RRsets here @@ -233,11 +281,15 @@ protected: response.setRcode(Rcode::NOERROR()); response.setOpcode(Opcode::QUERY()); // create and add a matching zone. - mock_zone = new MockZone(); - memory_datasrc.addZone(ZonePtr(mock_zone)); + mock_finder = new MockZoneFinder(); + memory_client.addZone(ZoneFinderPtr(mock_finder)); } - MockZone* mock_zone; - MemoryDataSrc memory_datasrc; + MockZoneFinder* mock_finder; + // We use InMemoryClient here. 
We could have some kind of mock client + // here, but historically, the Query supported only InMemoryClient + // (originally named MemoryDataSrc) and was tested with it, so we keep + // it like this for now. + InMemoryClient memory_client; const Name qname; const RRClass qclass; const RRType qtype; @@ -286,24 +338,76 @@ responseCheck(Message& response, const isc::dns::Rcode& rcode, TEST_F(QueryTest, noZone) { // There's no zone in the memory datasource. So the response should have // REFUSED. - MemoryDataSrc empty_memory_datasrc; - Query nozone_query(empty_memory_datasrc, qname, qtype, response); + InMemoryClient empty_memory_client; + Query nozone_query(empty_memory_client, qname, qtype, response); EXPECT_NO_THROW(nozone_query.process()); EXPECT_EQ(Rcode::REFUSED(), response.getRcode()); } TEST_F(QueryTest, exactMatch) { - Query query(memory_datasrc, qname, qtype, response); + Query query(memory_client, qname, qtype, response); EXPECT_NO_THROW(query.process()); // find match rrset responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 3, 3, www_a_txt, zone_ns_txt, ns_addrs_txt); } +TEST_F(QueryTest, exactMatchIgnoreSIG) { + // Check that we do not include the RRSIG when not requested even when + // we receive it from the data source. + mock_finder->setIncludeRRSIGAnyway(true); + Query query(memory_client, qname, qtype, response); + EXPECT_NO_THROW(query.process()); + // find match rrset + responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 3, 3, + www_a_txt, zone_ns_txt, ns_addrs_txt); +} + +TEST_F(QueryTest, dnssecPositive) { + // Just like exactMatch, but the signatures should be included as well + Query query(memory_client, qname, qtype, response, true); + EXPECT_NO_THROW(query.process()); + // find match rrset + // We can't let responseCheck to check the additional section as well, + // it gets confused by the two RRs for glue.delegation.../RRSIG due + // to it's design and fixing it would be hard. Therefore we simply + // check manually this one time. + responseCheck(response, Rcode::NOERROR(), AA_FLAG, 2, 4, 6, + (www_a_txt + std::string("www.example.com. 3600 IN RRSIG " + "A 5 3 3600 20000101000000 " + "20000201000000 12345 example.com. " + "FAKEFAKEFAKE\n")).c_str(), + (zone_ns_txt + std::string("example.com. 3600 IN RRSIG NS 5 " + "3 3600 20000101000000 " + "20000201000000 12345 " + "example.com. FAKEFAKEFAKE\n")). + c_str(), NULL); + RRsetIterator iterator(response.beginSection(Message::SECTION_ADDITIONAL)); + const char* additional[] = { + "glue.delegation.example.com. 3600 IN A 192.0.2.153\n", + "glue.delegation.example.com. 3600 IN RRSIG A 5 3 3600 20000101000000 " + "20000201000000 12345 example.com. FAKEFAKEFAKE\n", + "glue.delegation.example.com. 3600 IN AAAA 2001:db8::53\n", + "glue.delegation.example.com. 3600 IN RRSIG AAAA 5 3 3600 " + "20000101000000 20000201000000 12345 example.com. FAKEFAKEFAKE\n", + "noglue.example.com. 3600 IN A 192.0.2.53\n", + "noglue.example.com. 3600 IN RRSIG A 5 3 3600 20000101000000 " + "20000201000000 12345 example.com. FAKEFAKEFAKE\n", + NULL + }; + for (const char** rr(additional); *rr != NULL; ++ rr) { + ASSERT_FALSE(iterator == + response.endSection(Message::SECTION_ADDITIONAL)); + EXPECT_EQ(*rr, (*iterator)->toText()); + iterator ++; + } + EXPECT_TRUE(iterator == response.endSection(Message::SECTION_ADDITIONAL)); +} + TEST_F(QueryTest, exactAddrMatch) { // find match rrset, omit additional data which has already been provided // in the answer section from the additional. 
- EXPECT_NO_THROW(Query(memory_datasrc, Name("noglue.example.com"), qtype, + EXPECT_NO_THROW(Query(memory_client, Name("noglue.example.com"), qtype, response).process()); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 3, 2, @@ -315,7 +419,7 @@ TEST_F(QueryTest, exactAddrMatch) { TEST_F(QueryTest, apexNSMatch) { // find match rrset, omit authority data which has already been provided // in the answer section from the authority section. - EXPECT_NO_THROW(Query(memory_datasrc, Name("example.com"), RRType::NS(), + EXPECT_NO_THROW(Query(memory_client, Name("example.com"), RRType::NS(), response).process()); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 3, 0, 3, @@ -326,7 +430,7 @@ TEST_F(QueryTest, apexNSMatch) { TEST_F(QueryTest, exactAnyMatch) { // find match rrset, omit additional data which has already been provided // in the answer section from the additional. - EXPECT_NO_THROW(Query(memory_datasrc, Name("noglue.example.com"), + EXPECT_NO_THROW(Query(memory_client, Name("noglue.example.com"), RRType::ANY(), response).process()); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 3, 2, @@ -339,18 +443,18 @@ TEST_F(QueryTest, exactAnyMatch) { TEST_F(QueryTest, apexAnyMatch) { // find match rrset, omit additional data which has already been provided // in the answer section from the additional. - EXPECT_NO_THROW(Query(memory_datasrc, Name("example.com"), + EXPECT_NO_THROW(Query(memory_client, Name("example.com"), RRType::ANY(), response).process()); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 4, 0, 3, "example.com. 3600 IN SOA . . 0 0 0 0 0\n" "example.com. 3600 IN NS glue.delegation.example.com.\n" "example.com. 3600 IN NS noglue.example.com.\n" "example.com. 3600 IN NS example.net.\n", - NULL, ns_addrs_txt, mock_zone->getOrigin()); + NULL, ns_addrs_txt, mock_finder->getOrigin()); } TEST_F(QueryTest, mxANYMatch) { - EXPECT_NO_THROW(Query(memory_datasrc, Name("mx.example.com"), + EXPECT_NO_THROW(Query(memory_client, Name("mx.example.com"), RRType::ANY(), response).process()); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 3, 3, 4, mx_txt, zone_ns_txt, @@ -358,17 +462,17 @@ TEST_F(QueryTest, mxANYMatch) { } TEST_F(QueryTest, glueANYMatch) { - EXPECT_NO_THROW(Query(memory_datasrc, Name("delegation.example.com"), + EXPECT_NO_THROW(Query(memory_client, Name("delegation.example.com"), RRType::ANY(), response).process()); responseCheck(response, Rcode::NOERROR(), 0, 0, 4, 3, NULL, delegation_txt, ns_addrs_txt); } TEST_F(QueryTest, nodomainANY) { - EXPECT_NO_THROW(Query(memory_datasrc, Name("nxdomain.example.com"), + EXPECT_NO_THROW(Query(memory_client, Name("nxdomain.example.com"), RRType::ANY(), response).process()); responseCheck(response, Rcode::NXDOMAIN(), AA_FLAG, 0, 1, 0, - NULL, soa_txt, NULL, mock_zone->getOrigin()); + NULL, soa_txt, NULL, mock_finder->getOrigin()); } // This tests that when we need to look up Zone's apex NS records for @@ -376,15 +480,15 @@ TEST_F(QueryTest, nodomainANY) { // throw in that case. 
TEST_F(QueryTest, noApexNS) { // Disable apex NS record - mock_zone->setApexNSFlag(false); + mock_finder->setApexNSFlag(false); - EXPECT_THROW(Query(memory_datasrc, Name("noglue.example.com"), qtype, + EXPECT_THROW(Query(memory_client, Name("noglue.example.com"), qtype, response).process(), Query::NoApexNS); // We don't look into the response, as it threw } TEST_F(QueryTest, delegation) { - EXPECT_NO_THROW(Query(memory_datasrc, Name("delegation.example.com"), + EXPECT_NO_THROW(Query(memory_client, Name("delegation.example.com"), qtype, response).process()); responseCheck(response, Rcode::NOERROR(), 0, 0, 4, 3, @@ -392,18 +496,18 @@ TEST_F(QueryTest, delegation) { } TEST_F(QueryTest, nxdomain) { - EXPECT_NO_THROW(Query(memory_datasrc, Name("nxdomain.example.com"), qtype, + EXPECT_NO_THROW(Query(memory_client, Name("nxdomain.example.com"), qtype, response).process()); responseCheck(response, Rcode::NXDOMAIN(), AA_FLAG, 0, 1, 0, - NULL, soa_txt, NULL, mock_zone->getOrigin()); + NULL, soa_txt, NULL, mock_finder->getOrigin()); } TEST_F(QueryTest, nxrrset) { - EXPECT_NO_THROW(Query(memory_datasrc, Name("www.example.com"), + EXPECT_NO_THROW(Query(memory_client, Name("www.example.com"), RRType::TXT(), response).process()); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 0, 1, 0, - NULL, soa_txt, NULL, mock_zone->getOrigin()); + NULL, soa_txt, NULL, mock_finder->getOrigin()); } /* @@ -412,22 +516,22 @@ TEST_F(QueryTest, nxrrset) { */ TEST_F(QueryTest, noSOA) { // disable zone's SOA RR. - mock_zone->setSOAFlag(false); + mock_finder->setSOAFlag(false); // The NX Domain - EXPECT_THROW(Query(memory_datasrc, Name("nxdomain.example.com"), + EXPECT_THROW(Query(memory_client, Name("nxdomain.example.com"), qtype, response).process(), Query::NoSOA); // Of course, we don't look into the response, as it throwed // NXRRSET - EXPECT_THROW(Query(memory_datasrc, Name("nxrrset.example.com"), + EXPECT_THROW(Query(memory_client, Name("nxrrset.example.com"), qtype, response).process(), Query::NoSOA); } TEST_F(QueryTest, noMatchZone) { // there's a zone in the memory datasource but it doesn't match the qname. // should result in REFUSED. - Query(memory_datasrc, Name("example.org"), qtype, response).process(); + Query(memory_client, Name("example.org"), qtype, response).process(); EXPECT_EQ(Rcode::REFUSED(), response.getRcode()); } @@ -438,7 +542,7 @@ TEST_F(QueryTest, noMatchZone) { * A record, other to unknown out of zone one. */ TEST_F(QueryTest, MX) { - Query(memory_datasrc, Name("mx.example.com"), RRType::MX(), + Query(memory_client, Name("mx.example.com"), RRType::MX(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 3, 3, 4, @@ -452,7 +556,7 @@ TEST_F(QueryTest, MX) { * This should not trigger the additional processing for the exchange. */ TEST_F(QueryTest, MXAlias) { - Query(memory_datasrc, Name("cnamemx.example.com"), RRType::MX(), + Query(memory_client, Name("cnamemx.example.com"), RRType::MX(), response).process(); // there shouldn't be no additional RRs for the exchanges (we have 3 @@ -472,7 +576,7 @@ TEST_F(QueryTest, MXAlias) { * returned. */ TEST_F(QueryTest, CNAME) { - Query(memory_datasrc, Name("cname.example.com"), RRType::A(), + Query(memory_client, Name("cname.example.com"), RRType::A(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 0, 0, @@ -482,7 +586,7 @@ TEST_F(QueryTest, CNAME) { TEST_F(QueryTest, explicitCNAME) { // same owner name as the CNAME test but explicitly query for CNAME RR. 
// expect the same response as we don't provide a full chain yet. - Query(memory_datasrc, Name("cname.example.com"), RRType::CNAME(), + Query(memory_client, Name("cname.example.com"), RRType::CNAME(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 3, 3, @@ -494,7 +598,7 @@ TEST_F(QueryTest, CNAME_NX_RRSET) { // note: with chaining, what should be expected is not trivial: // BIND 9 returns the CNAME in answer and SOA in authority, no additional. // NSD returns the CNAME, NS in authority, A/AAAA for NS in additional. - Query(memory_datasrc, Name("cname.example.com"), RRType::TXT(), + Query(memory_client, Name("cname.example.com"), RRType::TXT(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 0, 0, @@ -503,7 +607,7 @@ TEST_F(QueryTest, CNAME_NX_RRSET) { TEST_F(QueryTest, explicitCNAME_NX_RRSET) { // same owner name as the NXRRSET test but explicitly query for CNAME RR. - Query(memory_datasrc, Name("cname.example.com"), RRType::CNAME(), + Query(memory_client, Name("cname.example.com"), RRType::CNAME(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 3, 3, @@ -517,7 +621,7 @@ TEST_F(QueryTest, CNAME_NX_DOMAIN) { // RCODE being NXDOMAIN. // NSD returns the CNAME, NS in authority, A/AAAA for NS in additional, // RCODE being NOERROR. - Query(memory_datasrc, Name("cnamenxdom.example.com"), RRType::A(), + Query(memory_client, Name("cnamenxdom.example.com"), RRType::A(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 0, 0, @@ -526,7 +630,7 @@ TEST_F(QueryTest, CNAME_NX_DOMAIN) { TEST_F(QueryTest, explicitCNAME_NX_DOMAIN) { // same owner name as the NXDOMAIN test but explicitly query for CNAME RR. - Query(memory_datasrc, Name("cnamenxdom.example.com"), RRType::CNAME(), + Query(memory_client, Name("cnamenxdom.example.com"), RRType::CNAME(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 3, 3, @@ -542,7 +646,7 @@ TEST_F(QueryTest, CNAME_OUT) { * Then the same test should be done with .org included there and * see what it does (depends on what we want to do) */ - Query(memory_datasrc, Name("cnameout.example.com"), RRType::A(), + Query(memory_client, Name("cnameout.example.com"), RRType::A(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 0, 0, @@ -551,7 +655,7 @@ TEST_F(QueryTest, CNAME_OUT) { TEST_F(QueryTest, explicitCNAME_OUT) { // same owner name as the OUT test but explicitly query for CNAME RR. - Query(memory_datasrc, Name("cnameout.example.com"), RRType::CNAME(), + Query(memory_client, Name("cnameout.example.com"), RRType::CNAME(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 3, 3, @@ -567,7 +671,7 @@ TEST_F(QueryTest, explicitCNAME_OUT) { * pointing to NXRRSET and NXDOMAIN cases (similarly as with CNAME). */ TEST_F(QueryTest, DNAME) { - Query(memory_datasrc, Name("www.dname.example.com"), RRType::A(), + Query(memory_client, Name("www.dname.example.com"), RRType::A(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 2, 0, 0, @@ -583,7 +687,7 @@ TEST_F(QueryTest, DNAME) { * DNAME. */ TEST_F(QueryTest, DNAME_ANY) { - Query(memory_datasrc, Name("www.dname.example.com"), RRType::ANY(), + Query(memory_client, Name("www.dname.example.com"), RRType::ANY(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 2, 0, 0, @@ -592,7 +696,7 @@ TEST_F(QueryTest, DNAME_ANY) { // Test when we ask for DNAME explicitly, it does no synthetizing. 
TEST_F(QueryTest, explicitDNAME) { - Query(memory_datasrc, Name("dname.example.com"), RRType::DNAME(), + Query(memory_client, Name("dname.example.com"), RRType::DNAME(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 3, 3, @@ -604,7 +708,7 @@ TEST_F(QueryTest, explicitDNAME) { * the CNAME, it should return the RRset. */ TEST_F(QueryTest, DNAME_A) { - Query(memory_datasrc, Name("dname.example.com"), RRType::A(), + Query(memory_client, Name("dname.example.com"), RRType::A(), response).process(); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 1, 3, 3, @@ -616,11 +720,11 @@ TEST_F(QueryTest, DNAME_A) { * It should not synthetize the CNAME. */ TEST_F(QueryTest, DNAME_NX_RRSET) { - EXPECT_NO_THROW(Query(memory_datasrc, Name("dname.example.com"), + EXPECT_NO_THROW(Query(memory_client, Name("dname.example.com"), RRType::TXT(), response).process()); responseCheck(response, Rcode::NOERROR(), AA_FLAG, 0, 1, 0, - NULL, soa_txt, NULL, mock_zone->getOrigin()); + NULL, soa_txt, NULL, mock_finder->getOrigin()); } /* @@ -636,7 +740,7 @@ TEST_F(QueryTest, LongDNAME) { "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa." "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa." "dname.example.com."); - EXPECT_NO_THROW(Query(memory_datasrc, longname, RRType::A(), + EXPECT_NO_THROW(Query(memory_client, longname, RRType::A(), response).process()); responseCheck(response, Rcode::YXDOMAIN(), AA_FLAG, 1, 0, 0, @@ -655,7 +759,7 @@ TEST_F(QueryTest, MaxLenDNAME) { "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa." "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa." "dname.example.com."); - EXPECT_NO_THROW(Query(memory_datasrc, longname, RRType::A(), + EXPECT_NO_THROW(Query(memory_client, longname, RRType::A(), response).process()); // Check the answer is OK diff --git a/src/bin/auth/tests/statistics_unittest.cc b/src/bin/auth/tests/statistics_unittest.cc index 9a3dded837..98e573b495 100644 --- a/src/bin/auth/tests/statistics_unittest.cc +++ b/src/bin/auth/tests/statistics_unittest.cc @@ -16,6 +16,8 @@ #include +#include + #include #include @@ -76,6 +78,13 @@ protected: } MockSession statistics_session_; AuthCounters counters; + // no need to be inherited from the original class here. + class MockModuleSpec { + public: + bool validateStatistics(ConstElementPtr, const bool valid) const + { return (valid); } + }; + MockModuleSpec module_spec_; }; void @@ -181,7 +190,7 @@ TEST_F(AuthCountersTest, submitStatisticsWithException) { statistics_session_.setThrowSessionTimeout(false); } -TEST_F(AuthCountersTest, submitStatistics) { +TEST_F(AuthCountersTest, submitStatisticsWithoutValidator) { // Submit statistics data. // Validate if it submits correct data. @@ -201,12 +210,69 @@ TEST_F(AuthCountersTest, submitStatistics) { // Command is "set". EXPECT_EQ("set", statistics_session_.sent_msg->get("command") ->get(0)->stringValue()); + EXPECT_EQ("Auth", statistics_session_.sent_msg->get("command") + ->get(1)->get("owner")->stringValue()); ConstElementPtr statistics_data = statistics_session_.sent_msg ->get("command")->get(1) - ->get("stats_data"); + ->get("data"); // UDP query counter is 2 and TCP query counter is 1. 
- EXPECT_EQ(2, statistics_data->get("auth.queries.udp")->intValue()); - EXPECT_EQ(1, statistics_data->get("auth.queries.tcp")->intValue()); + EXPECT_EQ(2, statistics_data->get("queries.udp")->intValue()); + EXPECT_EQ(1, statistics_data->get("queries.tcp")->intValue()); } +TEST_F(AuthCountersTest, submitStatisticsWithValidator) { + + //a validator for the unittest + AuthCounters::validator_type validator; + ConstElementPtr el; + + // Submit statistics data with correct statistics validator. + validator = boost::bind( + &AuthCountersTest::MockModuleSpec::validateStatistics, + &module_spec_, _1, true); + + EXPECT_TRUE(validator(el)); + + // register validator to AuthCounters + counters.registerStatisticsValidator(validator); + + // Counters should be initialized to 0. + EXPECT_EQ(0, counters.getCounter(AuthCounters::COUNTER_UDP_QUERY)); + EXPECT_EQ(0, counters.getCounter(AuthCounters::COUNTER_TCP_QUERY)); + + // UDP query counter is set to 2. + counters.inc(AuthCounters::COUNTER_UDP_QUERY); + counters.inc(AuthCounters::COUNTER_UDP_QUERY); + // TCP query counter is set to 1. + counters.inc(AuthCounters::COUNTER_TCP_QUERY); + + // checks the value returned by submitStatistics + EXPECT_TRUE(counters.submitStatistics()); + + // Destination is "Stats". + EXPECT_EQ("Stats", statistics_session_.msg_destination); + // Command is "set". + EXPECT_EQ("set", statistics_session_.sent_msg->get("command") + ->get(0)->stringValue()); + EXPECT_EQ("Auth", statistics_session_.sent_msg->get("command") + ->get(1)->get("owner")->stringValue()); + ConstElementPtr statistics_data = statistics_session_.sent_msg + ->get("command")->get(1) + ->get("data"); + // UDP query counter is 2 and TCP query counter is 1. + EXPECT_EQ(2, statistics_data->get("queries.udp")->intValue()); + EXPECT_EQ(1, statistics_data->get("queries.tcp")->intValue()); + + // Submit statistics data with incorrect statistics validator. + validator = boost::bind( + &AuthCountersTest::MockModuleSpec::validateStatistics, + &module_spec_, _1, false); + + EXPECT_FALSE(validator(el)); + + counters.registerStatisticsValidator(validator); + + // checks the value returned by submitStatistics + EXPECT_FALSE(counters.submitStatistics()); +} } diff --git a/src/bin/auth/tests/testdata/Makefile.am b/src/bin/auth/tests/testdata/Makefile.am index f6f1f27a81..c86722f81d 100644 --- a/src/bin/auth/tests/testdata/Makefile.am +++ b/src/bin/auth/tests/testdata/Makefile.am @@ -23,4 +23,4 @@ EXTRA_DIST += example.com EXTRA_DIST += example.sqlite3 .spec.wire: - $(abs_top_builddir)/src/lib/dns/tests/testdata/gen-wiredata.py -o $@ $< + $(PYTHON) $(top_builddir)/src/lib/util/python/gen_wiredata.py -o $@ $< diff --git a/src/bin/bind10/Makefile.am b/src/bin/bind10/Makefile.am index cca4a53797..5ec0c9f4a6 100644 --- a/src/bin/bind10/Makefile.am +++ b/src/bin/bind10/Makefile.am @@ -1,16 +1,23 @@ SUBDIRS = . 
tests sbin_SCRIPTS = bind10 -CLEANFILES = bind10 bind10.pyc +CLEANFILES = bind10 bind10_src.pyc +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/bind10_messages.py +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/bind10_messages.pyc pkglibexecdir = $(libexecdir)/@PACKAGE@ +nodist_pylogmessage_PYTHON = $(PYTHON_LOGMSGPKG_DIR)/work/bind10_messages.py +pylogmessagedir = $(pyexecdir)/isc/log_messages/ + +noinst_SCRIPTS = run_bind10.sh + bind10dir = $(pkgdatadir) bind10_DATA = bob.spec EXTRA_DIST = bob.spec man_MANS = bind10.8 -EXTRA_DIST += $(man_MANS) bind10.xml +EXTRA_DIST += $(man_MANS) bind10.xml bind10_messages.mes if ENABLE_MAN @@ -19,10 +26,14 @@ bind10.8: bind10.xml endif +$(PYTHON_LOGMSGPKG_DIR)/work/bind10_messages.py : bind10_messages.mes + $(top_builddir)/src/lib/log/compiler/message \ + -d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/bind10_messages.mes + # this is done here since configure.ac AC_OUTPUT doesn't expand exec_prefix -bind10: bind10.py +bind10: bind10_src.py $(PYTHON_LOGMSGPKG_DIR)/work/bind10_messages.py $(SED) -e "s|@@PYTHONPATH@@|@pyexecdir@|" \ - -e "s|@@LIBEXECDIR@@|$(pkglibexecdir)|" bind10.py >$@ + -e "s|@@LIBEXECDIR@@|$(pkglibexecdir)|" bind10_src.py >$@ chmod a+x $@ pytest: diff --git a/src/bin/bind10/bind10.8 b/src/bin/bind10/bind10.8 index d5ab9053b3..1af4f14848 100644 --- a/src/bin/bind10/bind10.8 +++ b/src/bin/bind10/bind10.8 @@ -2,12 +2,12 @@ .\" Title: bind10 .\" Author: [see the "AUTHORS" section] .\" Generator: DocBook XSL Stylesheets v1.75.2 -.\" Date: March 31, 2011 +.\" Date: August 11, 2011 .\" Manual: BIND10 .\" Source: BIND10 .\" Language: English .\" -.TH "BIND10" "8" "March 31, 2011" "BIND10" "BIND10" +.TH "BIND10" "8" "August 11, 2011" "BIND10" "BIND10" .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- @@ -107,6 +107,18 @@ Display more about what is going on for \fBbind10\fR and its child processes\&. .RE +.SH "STATISTICS DATA" +.PP +The statistics data collected by the +\fBb10\-stats\fR +daemon include: +.PP +bind10\&.boot_time +.RS 4 +The date and time that the +\fBbind10\fR +process started\&. This is represented in ISO 8601 format\&. +.RE .SH "SEE ALSO" .PP diff --git a/src/bin/bind10/bind10.xml b/src/bin/bind10/bind10.xml index 1128264ece..b101ba8227 100644 --- a/src/bin/bind10/bind10.xml +++ b/src/bin/bind10/bind10.xml @@ -2,7 +2,7 @@ "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd" []> + + + STATISTICS DATA + + + The statistics data collected by the b10-stats + daemon include: + + + + + + bind10.boot_time + + The date and time that the bind10 + process started. + This is represented in ISO 8601 format. + + + + + + + + - Enabled verbose mode. This enables diagnostic messages to - STDERR. + Enable verbose mode. + This sets logging to the maximum debugging level. @@ -146,6 +149,22 @@ once that is merged you can for instance do 'config add Resolver/forward_address + + + + + + + query_acl is a list of query access control + rules. The list items are the action string + and the from or key strings. + The possible actions are ACCEPT, REJECT and DROP. + The from is a remote (source) IPv4 or IPv6 + address or special keyword. + The key is a TSIG key name. + The default configuration accepts queries from 127.0.0.1 and ::1. 
+ + retries is the number of times to retry (resend query) after a query timeout @@ -159,8 +178,10 @@ once that is merged you can for instance do 'config add Resolver/forward_address root servers to start resolving. The list items are the address string and port number. - If empty, a hardcoded address for F-root (192.5.5.241) is used. + By default, a hardcoded address for l.root-servers.net + (199.7.83.42 or 2001:500:3::42) is used. + timeout_client is the number of milliseconds @@ -234,7 +255,8 @@ once that is merged you can for instance do 'config add Resolver/forward_address The b10-resolver daemon was first coded in September 2010. The initial implementation only provided forwarding. Iteration was introduced in January 2011. - + Caching was implemented in February 2011. + Access control was introduced in June 2011. diff --git a/src/bin/resolver/main.cc b/src/bin/resolver/main.cc index d9c30b9495..79146da928 100644 --- a/src/bin/resolver/main.cc +++ b/src/bin/resolver/main.cc @@ -208,8 +208,7 @@ main(int argc, char* argv[]) { cc_session = new Session(io_service.get_io_service()); config_session = new ModuleCCSession(specfile, *cc_session, my_config_handler, - my_command_handler, - true, true); + my_command_handler); LOG_DEBUG(resolver_logger, RESOLVER_DBG_INIT, RESOLVER_CONFIG_CHANNEL); // FIXME: This does not belong here, but inside Boss diff --git a/src/bin/resolver/resolver.cc b/src/bin/resolver/resolver.cc index be254b7332..6af383ad08 100644 --- a/src/bin/resolver/resolver.cc +++ b/src/bin/resolver/resolver.cc @@ -26,7 +26,7 @@ #include -#include +#include #include #include @@ -62,6 +62,7 @@ using boost::shared_ptr; using namespace isc; using namespace isc::util; using namespace isc::acl; +using isc::acl::dns::RequestACL; using namespace isc::dns; using namespace isc::data; using namespace isc::config; @@ -82,7 +83,9 @@ public: client_timeout_(4000), lookup_timeout_(30000), retries_(3), - query_acl_(new Resolver::ClientACL(REJECT)), + // we apply "reject all" (implicit default of the loader) ACL by + // default: + query_acl_(acl::dns::getRequestLoader().load(Element::fromJSON("[]"))), rec_query_(NULL) {} @@ -160,11 +163,11 @@ public: OutputBufferPtr buffer, DNSServer* server); - const Resolver::ClientACL& getQueryACL() const { + const RequestACL& getQueryACL() const { return (*query_acl_); } - void setQueryACL(shared_ptr new_acl) { + void setQueryACL(shared_ptr new_acl) { query_acl_ = new_acl; } @@ -192,7 +195,7 @@ public: private: /// ACL on incoming queries - shared_ptr query_acl_; + shared_ptr query_acl_; /// Object to handle upstream queries RecursiveQuery* rec_query_; @@ -514,8 +517,11 @@ ResolverImpl::processNormalQuery(const IOMessage& io_message, const RRClass qclass = question->getClass(); // Apply query ACL - Client client(io_message); - const BasicAction query_action(getQueryACL().execute(client)); + const Client client(io_message); + const BasicAction query_action( + getQueryACL().execute(acl::dns::RequestContext( + client.getRequestSourceIPAddress(), + query_message->getTSIGRecord()))); if (query_action == isc::acl::REJECT) { LOG_INFO(resolver_logger, RESOLVER_QUERY_REJECTED) .arg(question->getName()).arg(qtype).arg(qclass).arg(client); @@ -574,32 +580,6 @@ ResolverImpl::processNormalQuery(const IOMessage& io_message, return (RECURSION); } -namespace { -// This is a simplified ACL parser for the initial implementation with minimal -// external dependency. For a longer term we'll switch to a more generic -// loader with allowing more complicated ACL syntax. 
-shared_ptr -createQueryACL(isc::data::ConstElementPtr acl_config) { - if (!acl_config) { - return (shared_ptr()); - } - - shared_ptr new_acl( - new Resolver::ClientACL(REJECT)); - BOOST_FOREACH(ConstElementPtr rule, acl_config->listValue()) { - ConstElementPtr action = rule->get("action"); - ConstElementPtr from = rule->get("from"); - if (!action || !from) { - isc_throw(BadValue, "query ACL misses mandatory parameter"); - } - new_acl->append(shared_ptr >( - new IPCheck(from->stringValue())), - defaultActionLoader(action)); - } - return (new_acl); -} -} - ConstElementPtr Resolver::updateConfig(ConstElementPtr config) { LOG_DEBUG(resolver_logger, RESOLVER_DBG_CONFIG, RESOLVER_CONFIG_UPDATED) @@ -616,8 +596,10 @@ Resolver::updateConfig(ConstElementPtr config) { ConstElementPtr listenAddressesE(config->get("listen_on")); AddressList listenAddresses(parseAddresses(listenAddressesE, "listen_on")); - shared_ptr query_acl(createQueryACL( - config->get("query_acl"))); + const ConstElementPtr query_acl_cfg(config->get("query_acl")); + const shared_ptr query_acl = + query_acl_cfg ? acl::dns::getRequestLoader().load(query_acl_cfg) : + shared_ptr(); bool set_timeouts(false); int qtimeout = impl_->query_timeout_; int ctimeout = impl_->client_timeout_; @@ -777,13 +759,13 @@ Resolver::getListenAddresses() const { return (impl_->listen_); } -const Resolver::ClientACL& +const RequestACL& Resolver::getQueryACL() const { return (impl_->getQueryACL()); } void -Resolver::setQueryACL(shared_ptr new_acl) { +Resolver::setQueryACL(shared_ptr new_acl) { if (!new_acl) { isc_throw(InvalidParameter, "NULL pointer is passed to setQueryACL"); } diff --git a/src/bin/resolver/resolver.h b/src/bin/resolver/resolver.h index 9c7812682c..4b9c773c7f 100644 --- a/src/bin/resolver/resolver.h +++ b/src/bin/resolver/resolver.h @@ -21,10 +21,9 @@ #include -#include - #include #include +#include #include #include @@ -41,12 +40,6 @@ #include -namespace isc { -namespace server_common { -class Client; -} -} - class ResolverImpl; /** @@ -246,13 +239,10 @@ public: */ int getRetries() const; - // Shortcut typedef used for query ACL. - typedef isc::acl::ACL ClientACL; - /// Get the query ACL. /// /// \exception None - const ClientACL& getQueryACL() const; + const isc::acl::dns::RequestACL& getQueryACL() const; /// Set the new query ACL. /// @@ -265,7 +255,8 @@ public: /// \exception InvalidParameter The given pointer is NULL /// /// \param new_acl The new ACL to replace the existing one. - void setQueryACL(boost::shared_ptr new_acl); + void setQueryACL(boost::shared_ptr + new_acl); private: ResolverImpl* impl_; diff --git a/src/bin/resolver/resolver_messages.mes b/src/bin/resolver/resolver_messages.mes index 6c5be642d6..7930c52a84 100644 --- a/src/bin/resolver/resolver_messages.mes +++ b/src/bin/resolver/resolver_messages.mes @@ -16,151 +16,174 @@ # along with the resolver methods. % RESOLVER_AXFR_TCP AXFR request received over TCP -A debug message, the resolver received a NOTIFY message over TCP. The server -cannot process it and will return an error message to the sender with the -RCODE set to NOTIMP. +This is a debug message output when the resolver received a request for +an AXFR (full transfer of a zone) over TCP. Only authoritative servers +are able to handle AXFR requests, so the resolver will return an error +message to the sender with the RCODE set to NOTIMP. % RESOLVER_AXFR_UDP AXFR request received over UDP -A debug message, the resolver received a NOTIFY message over UDP. 
The server -cannot process it (and in any case, an AXFR request should be sent over TCP) -and will return an error message to the sender with the RCODE set to FORMERR. +This is a debug message output when the resolver received a request for +an AXFR (full transfer of a zone) over UDP. Only authoritative servers +are able to handle AXFR requests (and in any case, an AXFR request should +be sent over TCP), so the resolver will return an error message to the +sender with the RCODE set to NOTIMP. % RESOLVER_CLIENT_TIME_SMALL client timeout of %1 is too small -An error indicating that the configuration value specified for the query -timeout is too small. +During the update of the resolver's configuration parameters, the value +of the client timeout was found to be too small. The configuration +update was abandoned and the parameters were not changed. % RESOLVER_CONFIG_CHANNEL configuration channel created -A debug message, output when the resolver has successfully established a -connection to the configuration channel. +This is a debug message output when the resolver has successfully +established a connection to the configuration channel. % RESOLVER_CONFIG_ERROR error in configuration: %1 -An error was detected in a configuration update received by the resolver. This -may be in the format of the configuration message (in which case this is a -programming error) or it may be in the data supplied (in which case it is -a user error). The reason for the error, given as a parameter in the message, -will give more details. +An error was detected in a configuration update received by the +resolver. This may be in the format of the configuration message (in +which case this is a programming error) or it may be in the data supplied +(in which case it is a user error). The reason for the error, included +in the message, will give more details. The configuration update is +not applied and the resolver parameters were not changed. % RESOLVER_CONFIG_LOADED configuration loaded -A debug message, output when the resolver configuration has been successfully -loaded. +This is a debug message output when the resolver configuration has been +successfully loaded. % RESOLVER_CONFIG_UPDATED configuration updated: %1 -A debug message, the configuration has been updated with the specified -information. +This is a debug message output when the resolver configuration is being +updated with the specified information. % RESOLVER_CREATED main resolver object created -A debug message, output when the Resolver() object has been created. +This is a debug message indicating that the main resolver object has +been created. % RESOLVER_DNS_MESSAGE_RECEIVED DNS message received: %1 -A debug message, this always precedes some other logging message and is the -formatted contents of the DNS packet that the other message refers to. +This is a debug message from the resolver listing the contents of a +received DNS message. % RESOLVER_DNS_MESSAGE_SENT DNS message of %1 bytes sent: %2 -A debug message, this contains details of the response sent back to the querying -system. +This is a debug message containing details of the response returned by +the resolver to the querying system. % RESOLVER_FAILED resolver failed, reason: %1 -This is an error message output when an unhandled exception is caught by the -resolver. All it can do is to shut down. +This is an error message output when an unhandled exception is caught +by the resolver. After this, the resolver will shut itself down. +Please submit a bug report. 
% RESOLVER_FORWARD_ADDRESS setting forward address %1(%2) -This message may appear multiple times during startup, and it lists the -forward addresses used by the resolver when running in forwarding mode. +If the resolver is running in forward mode, this message will appear +during startup to list the forward address. If multiple addresses are +specified, it will appear once for each address. % RESOLVER_FORWARD_QUERY processing forward query -The received query has passed all checks and is being forwarded to upstream +This is a debug message indicating that a query received by the resolver +has passed a set of checks (message is well-formed, it is allowed by the +ACL, it is a supported opcode, etc.) and is being forwarded to upstream servers. % RESOLVER_HEADER_ERROR message received, exception when processing header: %1 -A debug message noting that an exception occurred during the processing of -a received packet. The packet has been dropped. +This is a debug message from the resolver noting that an exception +occurred during the processing of a received packet. The packet has +been dropped. % RESOLVER_IXFR IXFR request received -The resolver received a NOTIFY message over TCP. The server cannot process it -and will return an error message to the sender with the RCODE set to NOTIMP. +This is a debug message indicating that the resolver received a request +for an IXFR (incremental transfer of a zone). Only authoritative servers +are able to handle IXFR requests, so the resolver will return an error +message to the sender with the RCODE set to NOTIMP. % RESOLVER_LOOKUP_TIME_SMALL lookup timeout of %1 is too small -An error indicating that the configuration value specified for the lookup -timeout is too small. +During the update of the resolver's configuration parameters, the value +of the lookup timeout was found to be too small. The configuration +update will not be applied. % RESOLVER_MESSAGE_ERROR error parsing received message: %1 - returning %2 -A debug message noting that the resolver received a message and the -parsing of the body of the message failed due to some error (although -the parsing of the header succeeded). The message parameters give a -textual description of the problem and the RCODE returned. +This is a debug message noting that parsing of the body of a received +message by the resolver failed due to some error (although the parsing of +the header succeeded). The message parameters give a textual description +of the problem and the RCODE returned. % RESOLVER_NEGATIVE_RETRIES negative number of retries (%1) specified in the configuration -An error message indicating that the resolver configuration has specified a -negative retry count. Only zero or positive values are valid. +This error is issued when a resolver configuration update has specified +a negative retry count: only zero or positive values are valid. The +configuration update was abandoned and the parameters were not changed. % RESOLVER_NON_IN_PACKET non-IN class request received, returning REFUSED message -A debug message, the resolver has received a DNS packet that was not IN class. -The resolver cannot handle such packets, so is returning a REFUSED response to -the sender. +This debug message is issued when resolver has received a DNS packet that +was not IN (Internet) class. The resolver cannot handle such packets, +so is returning a REFUSED response to the sender. % RESOLVER_NORMAL_QUERY processing normal query -The received query has passed all checks and is being processed by the resolver. 
+This is a debug message indicating that the query received by the resolver +has passed a set of checks (message is well-formed, it is allowed by the +ACL, it is a supported opcode, etc.) and is being processed by the resolver. % RESOLVER_NOTIFY_RECEIVED NOTIFY arrived but server is not authoritative -The resolver received a NOTIFY message. As the server is not authoritative it -cannot process it, so it returns an error message to the sender with the RCODE -set to NOTAUTH. +The resolver has received a NOTIFY message. As the server is not +authoritative it cannot process it, so it returns an error message to +the sender with the RCODE set to NOTAUTH. % RESOLVER_NOT_ONE_QUESTION query contained %1 questions, exactly one question was expected -A debug message, the resolver received a query that contained the number of -entires in the question section detailed in the message. This is a malformed -message, as a DNS query must contain only one question. The resolver will -return a message to the sender with the RCODE set to FORMERR. +This debug message indicates that the resolver received a query that +contained the number of entries in the question section detailed in +the message. This is a malformed message, as a DNS query must contain +only one question. The resolver will return a message to the sender +with the RCODE set to FORMERR. % RESOLVER_NO_ROOT_ADDRESS no root addresses available -A warning message during startup, indicates that no root addresses have been -set. This may be because the resolver will get them from a priming query. +A warning message issued during resolver startup, this indicates that +no root addresses have been set. This may be because the resolver will +get them from a priming query. % RESOLVER_PARSE_ERROR error parsing received message: %1 - returning %2 -A debug message noting that the resolver received a message and the parsing -of the body of the message failed due to some non-protocol related reason -(although the parsing of the header succeeded). The message parameters give -a textual description of the problem and the RCODE returned. +This is a debug message noting that the resolver received a message and +the parsing of the body of the message failed due to some non-protocol +related reason (although the parsing of the header succeeded). +The message parameters give a textual description of the problem and +the RCODE returned. % RESOLVER_PRINT_COMMAND print message command, arguments are: %1 -This message is logged when a "print_message" command is received over the -command channel. +This debug message is logged when a "print_message" command is received +by the resolver over the command channel. % RESOLVER_PROTOCOL_ERROR protocol error parsing received message: %1 - returning %2 -A debug message noting that the resolver received a message and the parsing -of the body of the message failed due to some protocol error (although the -parsing of the header succeeded). The message parameters give a textual -description of the problem and the RCODE returned. +This is a debug message noting that the resolver received a message and +the parsing of the body of the message failed due to some protocol error +(although the parsing of the header succeeded). The message parameters +give a textual description of the problem and the RCODE returned. % RESOLVER_QUERY_SETUP query setup -A debug message noting that the resolver is creating a RecursiveQuery object. +This is a debug message noting that the resolver is creating a +RecursiveQuery object. 
% RESOLVER_QUERY_SHUTDOWN query shutdown -A debug message noting that the resolver is destroying a RecursiveQuery object. +This is a debug message noting that the resolver is destroying a +RecursiveQuery object. % RESOLVER_QUERY_TIME_SMALL query timeout of %1 is too small -An error indicating that the configuration value specified for the query -timeout is too small. +During the update of the resolver's configuration parameters, the value +of the query timeout was found to be too small. The configuration +parameters were not changed. % RESOLVER_RECEIVED_MESSAGE resolver has received a DNS message -A debug message indicating that the resolver has received a message. Depending -on the debug settings, subsequent log output will indicate the nature of the -message. +This is a debug message indicating that the resolver has received a +DNS message. Depending on the debug settings, subsequent log output +will indicate the nature of the message. % RESOLVER_RECURSIVE running in recursive mode -This is an informational message that appears at startup noting that the -resolver is running in recursive mode. +This is an informational message that appears at startup noting that +the resolver is running in recursive mode. % RESOLVER_SERVICE_CREATED service object created -A debug message, output when the main service object (which handles the -received queries) is created. +This debug message is output when resolver creates the main service object +(which handles the received queries). % RESOLVER_SET_PARAMS query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4 -A debug message, lists the parameters being set for the resolver. These are: +This debug message lists the parameters being set for the resolver. These are: query timeout: the timeout (in ms) used for queries originated by the resolver -to upstream servers. Client timeout: the interval to resolver a query by +to upstream servers. Client timeout: the interval to resolve a query by a client: after this time, the resolver sends back a SERVFAIL to the client -whilst continuing to resolver the query. Lookup timeout: the time at which the +whilst continuing to resolve the query. Lookup timeout: the time at which the resolver gives up trying to resolve a query. Retry count: the number of times the resolver will retry a query to an upstream server if it gets a timeout. @@ -169,17 +192,18 @@ resolution of the client query might require a large number of queries to upstream nameservers. Even if none of these queries timeout, the total time taken to perform all the queries may exceed the client timeout. When this happens, a SERVFAIL is returned to the client, but the resolver continues -with the resolution process. Data received is added to the cache. However, +with the resolution process; data received is added to the cache. However, there comes a time - the lookup timeout - when even the resolver gives up. At this point it will wait for pending upstream queries to complete or timeout and drop the query. % RESOLVER_SET_ROOT_ADDRESS setting root address %1(%2) -This message may appear multiple times during startup; it lists the root -addresses used by the resolver. +This message gives the address of one of the root servers used by the +resolver. It is output during startup and may appear multiple times, +once for each root server address. % RESOLVER_SHUTDOWN resolver shutdown complete -This information message is output when the resolver has shut down. +This informational message is output when the resolver has shut down. 
% RESOLVER_STARTED resolver started This informational message is output by the resolver when all initialization @@ -189,31 +213,36 @@ has been completed and it is entering its main loop. An informational message, this is output when the resolver starts up. % RESOLVER_UNEXPECTED_RESPONSE received unexpected response, ignoring -A debug message noting that the server has received a response instead of a -query and is ignoring it. +This is a debug message noting that the resolver received a DNS response +packet on the port on which is it listening for queries. The packet +has been ignored. % RESOLVER_UNSUPPORTED_OPCODE opcode %1 not supported by the resolver -A debug message, the resolver received a message with an unsupported opcode -(it can only process QUERY opcodes). It will return a message to the sender -with the RCODE set to NOTIMP. +This is debug message output when the resolver received a message with an +unsupported opcode (it can only process QUERY opcodes). It will return +a message to the sender with the RCODE set to NOTIMP. -% RESOLVER_SET_QUERY_ACL query ACL is configured -A debug message that appears when a new query ACL is configured for the -resolver. +% RESOLVER_SET_QUERY_ACL query ACL is configured +This debug message is generated when a new query ACL is configured for +the resolver. -% RESOLVER_QUERY_ACCEPTED query accepted: '%1/%2/%3' from %4 -A debug message that indicates an incoming query is accepted in terms of -the query ACL. The log message shows the query in the form of -//, and the client that sends the -query in the form of #. +% RESOLVER_QUERY_ACCEPTED query accepted: '%1/%2/%3' from %4 +This debug message is produced by the resolver when an incoming query +is accepted in terms of the query ACL. The log message shows the query +in the form of //, and the client +that sends the query in the form of #. -% RESOLVER_QUERY_REJECTED query rejected: '%1/%2/%3' from %4 -An informational message that indicates an incoming query is rejected -in terms of the query ACL. This results in a response with an RCODE of -REFUSED. See QUERYACCEPTED for the information given in the message. +% RESOLVER_QUERY_REJECTED query rejected: '%1/%2/%3' from %4 +This is an informational message that indicates an incoming query has +been rejected by the resolver because of the query ACL. This results +in a response with an RCODE of REFUSED. The log message shows the query +in the form of //, and the client +that sends the query in the form of #. -% RESOLVER_QUERY_DROPPED query dropped: '%1/%2/%3' from %4 -An informational message that indicates an incoming query is dropped -in terms of the query ACL. Unlike the QUERYREJECTED case, the server does -not return any response. See QUERYACCEPTED for the information given in -the message. +% RESOLVER_QUERY_DROPPED query dropped: '%1/%2/%3' from %4 +This is an informational message that indicates an incoming query has +been dropped by the resolver because of the query ACL. Unlike the +RESOLVER_QUERY_REJECTED case, the server does not return any response. +The log message shows the query in the form of //, and the client that sends the query in the form of +#. 
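The ACCEPT/REJECT/DROP decisions documented in the three messages above are the result of evaluating the configured query ACL against a request context built from the query's source address and, when present, its TSIG record. The fragment below is a sketch only: it reuses calls that appear verbatim in the resolver code and tests in this patch, the JSON rule and source address are illustrative, and the required headers, using-declarations and the surrounding server/client objects are assumed rather than shown.

    // Sketch: configure a one-rule query ACL and evaluate it for a client.
    // "server" is a Resolver and "client" a server_common Client wrapping
    // the incoming IOMessage, as in the configuration tests below.
    isc::data::ConstElementPtr config = isc::data::Element::fromJSON(
        "{ \"query_acl\":"
        "  [ {\"action\": \"ACCEPT\", \"from\": \"192.0.2.1\"} ] }");
    server.updateConfig(config);
    // The second RequestContext argument is the query's TSIG record;
    // NULL means the query was not signed.
    const isc::acl::dns::RequestContext ctx(
        client.getRequestSourceIPAddress(), NULL);
    // ACCEPT for a query from 192.0.2.1; any other source falls through
    // to the loader's implicit default and is rejected.
    const isc::acl::BasicAction action = server.getQueryACL().execute(ctx);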
diff --git a/src/bin/resolver/tests/Makefile.am b/src/bin/resolver/tests/Makefile.am index c5196176f9..97a2ba6509 100644 --- a/src/bin/resolver/tests/Makefile.am +++ b/src/bin/resolver/tests/Makefile.am @@ -39,6 +39,7 @@ run_unittests_LDADD += $(top_builddir)/src/lib/dns/libdns++.la run_unittests_LDADD += $(top_builddir)/src/lib/asiodns/libasiodns.la run_unittests_LDADD += $(top_builddir)/src/lib/asiolink/libasiolink.la run_unittests_LDADD += $(top_builddir)/src/lib/config/libcfgclient.la +run_unittests_LDADD += $(top_builddir)/src/lib/acl/libdnsacl.la run_unittests_LDADD += $(top_builddir)/src/lib/cc/libcc.la run_unittests_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la run_unittests_LDADD += $(top_builddir)/src/lib/xfr/libxfr.la diff --git a/src/bin/resolver/tests/resolver_config_unittest.cc b/src/bin/resolver/tests/resolver_config_unittest.cc index 90063016e6..c089041d4e 100644 --- a/src/bin/resolver/tests/resolver_config_unittest.cc +++ b/src/bin/resolver/tests/resolver_config_unittest.cc @@ -43,6 +43,7 @@ using namespace std; using boost::scoped_ptr; using namespace isc::acl; +using isc::acl::dns::RequestContext; using namespace isc::data; using namespace isc::testutils; using namespace isc::asiodns; @@ -57,19 +58,23 @@ protected: DNSService dnss; Resolver server; scoped_ptr endpoint; - scoped_ptr request; + scoped_ptr query_message; scoped_ptr client; + scoped_ptr request; ResolverConfig() : dnss(ios, NULL, NULL, NULL) { server.setDNSService(dnss); server.setConfigured(); } - const Client& createClient(const string& source_addr) { + const RequestContext& createRequest(const string& source_addr) { endpoint.reset(IOEndpoint::create(IPPROTO_UDP, IOAddress(source_addr), 53210)); - request.reset(new IOMessage(NULL, 0, IOSocket::getDummyUDPSocket(), - *endpoint)); - client.reset(new Client(*request)); - return (*client); + query_message.reset(new IOMessage(NULL, 0, + IOSocket::getDummyUDPSocket(), + *endpoint)); + client.reset(new Client(*query_message)); + request.reset(new RequestContext(client->getRequestSourceIPAddress(), + NULL)); + return (*request); } void invalidTest(const string &JSON, const string& name); }; @@ -100,14 +105,14 @@ TEST_F(ResolverConfig, forwardAddresses) { TEST_F(ResolverConfig, forwardAddressConfig) { // Try putting there some address - ElementPtr config(Element::fromJSON("{" - "\"forward_addresses\": [" - " {" - " \"address\": \"192.0.2.1\"," - " \"port\": 53" - " }" - "]" - "}")); + ConstElementPtr config(Element::fromJSON("{" + "\"forward_addresses\": [" + " {" + " \"address\": \"192.0.2.1\"," + " \"port\": 53" + " }" + "]" + "}")); ConstElementPtr result(server.updateConfig(config)); EXPECT_EQ(result->toWire(), isc::config::createAnswer()->toWire()); EXPECT_TRUE(server.isForwarding()); @@ -127,14 +132,14 @@ TEST_F(ResolverConfig, forwardAddressConfig) { TEST_F(ResolverConfig, rootAddressConfig) { // Try putting there some address - ElementPtr config(Element::fromJSON("{" - "\"root_addresses\": [" - " {" - " \"address\": \"192.0.2.1\"," - " \"port\": 53" - " }" - "]" - "}")); + ConstElementPtr config(Element::fromJSON("{" + "\"root_addresses\": [" + " {" + " \"address\": \"192.0.2.1\"," + " \"port\": 53" + " }" + "]" + "}")); ConstElementPtr result(server.updateConfig(config)); EXPECT_EQ(result->toWire(), isc::config::createAnswer()->toWire()); ASSERT_EQ(1, server.getRootAddresses().size()); @@ -210,12 +215,12 @@ TEST_F(ResolverConfig, timeouts) { } TEST_F(ResolverConfig, timeoutsConfig) { - ElementPtr config = Element::fromJSON("{" - 
"\"timeout_query\": 1000," - "\"timeout_client\": 2000," - "\"timeout_lookup\": 3000," - "\"retries\": 4" - "}"); + ConstElementPtr config = Element::fromJSON("{" + "\"timeout_query\": 1000," + "\"timeout_client\": 2000," + "\"timeout_lookup\": 3000," + "\"retries\": 4" + "}"); ConstElementPtr result(server.updateConfig(config)); EXPECT_EQ(result->toWire(), isc::config::createAnswer()->toWire()); EXPECT_EQ(1000, server.getQueryTimeout()); @@ -253,51 +258,51 @@ TEST_F(ResolverConfig, invalidTimeoutsConfig) { TEST_F(ResolverConfig, defaultQueryACL) { // If no configuration is loaded, the default ACL should reject everything. - EXPECT_EQ(REJECT, server.getQueryACL().execute(createClient("192.0.2.1"))); + EXPECT_EQ(REJECT, server.getQueryACL().execute(createRequest("192.0.2.1"))); EXPECT_EQ(REJECT, server.getQueryACL().execute( - createClient("2001:db8::1"))); + createRequest("2001:db8::1"))); // The following would be allowed if the server had loaded the default // configuration from the spec file. In this context it should not have // happened, and they should be rejected just like the above cases. - EXPECT_EQ(REJECT, server.getQueryACL().execute(createClient("127.0.0.1"))); - EXPECT_EQ(REJECT, server.getQueryACL().execute(createClient("::1"))); + EXPECT_EQ(REJECT, server.getQueryACL().execute(createRequest("127.0.0.1"))); + EXPECT_EQ(REJECT, server.getQueryACL().execute(createRequest("::1"))); } TEST_F(ResolverConfig, emptyQueryACL) { // Explicitly configured empty ACL should have the same effect. - ElementPtr config(Element::fromJSON("{ \"query_acl\": [] }")); + ConstElementPtr config(Element::fromJSON("{ \"query_acl\": [] }")); ConstElementPtr result(server.updateConfig(config)); EXPECT_EQ(result->toWire(), isc::config::createAnswer()->toWire()); - EXPECT_EQ(REJECT, server.getQueryACL().execute(createClient("192.0.2.1"))); + EXPECT_EQ(REJECT, server.getQueryACL().execute(createRequest("192.0.2.1"))); EXPECT_EQ(REJECT, server.getQueryACL().execute( - createClient("2001:db8::1"))); + createRequest("2001:db8::1"))); } TEST_F(ResolverConfig, queryACLIPv4) { // A simple "accept" query for a specific IPv4 address - ElementPtr config(Element::fromJSON( - "{ \"query_acl\": " - " [ {\"action\": \"ACCEPT\"," - " \"from\": \"192.0.2.1\"} ] }")); + ConstElementPtr config(Element::fromJSON( + "{ \"query_acl\": " + " [ {\"action\": \"ACCEPT\"," + " \"from\": \"192.0.2.1\"} ] }")); ConstElementPtr result(server.updateConfig(config)); EXPECT_EQ(result->toWire(), isc::config::createAnswer()->toWire()); - EXPECT_EQ(ACCEPT, server.getQueryACL().execute(createClient("192.0.2.1"))); + EXPECT_EQ(ACCEPT, server.getQueryACL().execute(createRequest("192.0.2.1"))); EXPECT_EQ(REJECT, server.getQueryACL().execute( - createClient("2001:db8::1"))); + createRequest("2001:db8::1"))); } TEST_F(ResolverConfig, queryACLIPv6) { // same for IPv6 - ElementPtr config(Element::fromJSON( - "{ \"query_acl\": " - " [ {\"action\": \"ACCEPT\"," - " \"from\": \"2001:db8::1\"} ] }")); + ConstElementPtr config(Element::fromJSON( + "{ \"query_acl\": " + " [ {\"action\": \"ACCEPT\"," + " \"from\": \"2001:db8::1\"} ] }")); ConstElementPtr result(server.updateConfig(config)); EXPECT_EQ(result->toWire(), isc::config::createAnswer()->toWire()); - EXPECT_EQ(REJECT, server.getQueryACL().execute(createClient("192.0.2.1"))); + EXPECT_EQ(REJECT, server.getQueryACL().execute(createRequest("192.0.2.1"))); EXPECT_EQ(ACCEPT, server.getQueryACL().execute( - createClient("2001:db8::1"))); + createRequest("2001:db8::1"))); } TEST_F(ResolverConfig, 
multiEntryACL) { @@ -306,25 +311,26 @@ TEST_F(ResolverConfig, multiEntryACL) { // as it should have been tested in the underlying ACL module. All we // have to do to check is a reasonably complicated ACL configuration is // loaded as expected. - ElementPtr config(Element::fromJSON( - "{ \"query_acl\": " - " [ {\"action\": \"ACCEPT\"," - " \"from\": \"192.0.2.1\"}," - " {\"action\": \"REJECT\"," - " \"from\": \"192.0.2.0/24\"}," - " {\"action\": \"DROP\"," - " \"from\": \"2001:db8::1\"}," - "] }")); + ConstElementPtr config(Element::fromJSON( + "{ \"query_acl\": " + " [ {\"action\": \"ACCEPT\"," + " \"from\": \"192.0.2.1\"}," + " {\"action\": \"REJECT\"," + " \"from\": \"192.0.2.0/24\"}," + " {\"action\": \"DROP\"," + " \"from\": \"2001:db8::1\"}," + "] }")); ConstElementPtr result(server.updateConfig(config)); EXPECT_EQ(result->toWire(), isc::config::createAnswer()->toWire()); - EXPECT_EQ(ACCEPT, server.getQueryACL().execute(createClient("192.0.2.1"))); - EXPECT_EQ(REJECT, server.getQueryACL().execute(createClient("192.0.2.2"))); + EXPECT_EQ(ACCEPT, server.getQueryACL().execute(createRequest("192.0.2.1"))); + EXPECT_EQ(REJECT, server.getQueryACL().execute(createRequest("192.0.2.2"))); EXPECT_EQ(DROP, server.getQueryACL().execute( - createClient("2001:db8::1"))); + createRequest("2001:db8::1"))); EXPECT_EQ(REJECT, server.getQueryACL().execute( - createClient("2001:db8::2"))); // match the default rule + createRequest("2001:db8::2"))); // match the default rule } + int getResultCode(ConstElementPtr result) { int rcode; @@ -332,6 +338,22 @@ getResultCode(ConstElementPtr result) { return (rcode); } +TEST_F(ResolverConfig, queryACLActionOnly) { + // "action only" rule will be accepted by the loader, which can + // effectively change the default action. + ConstElementPtr config(Element::fromJSON( + "{ \"query_acl\": " + " [ {\"action\": \"ACCEPT\"," + " \"from\": \"192.0.2.1\"}," + " {\"action\": \"DROP\"} ] }")); + EXPECT_EQ(0, getResultCode(server.updateConfig(config))); + EXPECT_EQ(ACCEPT, server.getQueryACL().execute(createRequest("192.0.2.1"))); + + // We reject non matching queries by default, but the last resort + // rule should have changed the action in that case to "DROP". + EXPECT_EQ(DROP, server.getQueryACL().execute(createRequest("192.0.2.2"))); +} + TEST_F(ResolverConfig, badQueryACL) { // Most of these cases shouldn't happen in practice because the syntax // check should be performed before updateConfig(). 
But we check at @@ -346,10 +368,6 @@ TEST_F(ResolverConfig, badQueryACL) { server.updateConfig( Element::fromJSON("{ \"query_acl\":" " [ {\"from\": \"192.0.2.1\"} ] }")))); - EXPECT_EQ(1, getResultCode( - server.updateConfig( - Element::fromJSON("{ \"query_acl\":" - " [ {\"action\": \"DROP\"} ] }")))); // invalid "action" EXPECT_EQ(1, getResultCode( server.updateConfig( @@ -361,7 +379,6 @@ TEST_F(ResolverConfig, badQueryACL) { Element::fromJSON("{ \"query_acl\":" " [ {\"action\": \"BADACTION\"," " \"from\": \"192.0.2.1\"}]}")))); - // invalid "from" EXPECT_EQ(1, getResultCode( server.updateConfig( diff --git a/src/bin/resolver/tests/resolver_unittest.cc b/src/bin/resolver/tests/resolver_unittest.cc index 9bcc261f73..71474dd17a 100644 --- a/src/bin/resolver/tests/resolver_unittest.cc +++ b/src/bin/resolver/tests/resolver_unittest.cc @@ -27,6 +27,7 @@ using namespace std; using namespace isc::dns; using namespace isc::data; +using isc::acl::dns::RequestACL; using namespace isc::testutils; using isc::UnitTestUtil; @@ -156,8 +157,7 @@ TEST_F(ResolverTest, notifyFail) { TEST_F(ResolverTest, setQueryACL) { // valid cases are tested through other tests. We only explicitly check // an invalid case: passing a NULL shared pointer. - EXPECT_THROW(server.setQueryACL( - boost::shared_ptr()), + EXPECT_THROW(server.setQueryACL(boost::shared_ptr()), isc::InvalidParameter); } diff --git a/src/bin/sockcreator/README b/src/bin/sockcreator/README index 4dbbee726e..e142d191d7 100644 --- a/src/bin/sockcreator/README +++ b/src/bin/sockcreator/README @@ -3,7 +3,7 @@ The socket creator The only thing we need higher rights than standard user is binding sockets to ports lower than 1024. So we will have a separate process that keeps the -rights, while the rests drop them for security reasons. +rights, while the rest drops them for security reasons. This process is the socket creator. 
Its goal is to be as simple as possible and to contain as little code as possible to minimise the amount of code diff --git a/src/bin/stats/Makefile.am b/src/bin/stats/Makefile.am index c8b18c9d2e..63e2a3b38b 100644 --- a/src/bin/stats/Makefile.am +++ b/src/bin/stats/Makefile.am @@ -5,16 +5,25 @@ pkglibexecdir = $(libexecdir)/@PACKAGE@ pkglibexec_SCRIPTS = b10-stats b10-stats-httpd b10_statsdir = $(pkgdatadir) -b10_stats_DATA = stats.spec stats-httpd.spec stats-schema.spec +b10_stats_DATA = stats.spec stats-httpd.spec b10_stats_DATA += stats-httpd-xml.tpl stats-httpd-xsd.tpl stats-httpd-xsl.tpl +nodist_pylogmessage_PYTHON = $(PYTHON_LOGMSGPKG_DIR)/work/stats_messages.py +nodist_pylogmessage_PYTHON += $(PYTHON_LOGMSGPKG_DIR)/work/stats_httpd_messages.py +pylogmessagedir = $(pyexecdir)/isc/log_messages/ + CLEANFILES = b10-stats stats.pyc CLEANFILES += b10-stats-httpd stats_httpd.pyc +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/stats_messages.py +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/stats_messages.pyc +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/stats_httpd_messages.py +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/stats_httpd_messages.pyc man_MANS = b10-stats.8 b10-stats-httpd.8 EXTRA_DIST = $(man_MANS) b10-stats.xml b10-stats-httpd.xml -EXTRA_DIST += stats.spec stats-httpd.spec stats-schema.spec +EXTRA_DIST += stats.spec stats-httpd.spec EXTRA_DIST += stats-httpd-xml.tpl stats-httpd-xsd.tpl stats-httpd-xsl.tpl +EXTRA_DIST += stats_messages.mes stats_httpd_messages.mes if ENABLE_MAN @@ -26,12 +35,20 @@ b10-stats-httpd.8: b10-stats-httpd.xml endif +$(PYTHON_LOGMSGPKG_DIR)/work/stats_messages.py : stats_messages.mes + $(top_builddir)/src/lib/log/compiler/message \ + -d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/stats_messages.mes + +$(PYTHON_LOGMSGPKG_DIR)/work/stats_httpd_messages.py : stats_httpd_messages.mes + $(top_builddir)/src/lib/log/compiler/message \ + -d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/stats_httpd_messages.mes + # this is done here since configure.ac AC_OUTPUT doesn't expand exec_prefix -b10-stats: stats.py +b10-stats: stats.py $(PYTHON_LOGMSGPKG_DIR)/work/stats_messages.py $(SED) -e "s|@@PYTHONPATH@@|@pyexecdir@|" stats.py >$@ chmod a+x $@ -b10-stats-httpd: stats_httpd.py +b10-stats-httpd: stats_httpd.py $(PYTHON_LOGMSGPKG_DIR)/work/stats_httpd_messages.py $(SED) -e "s|@@PYTHONPATH@@|@pyexecdir@|" stats_httpd.py >$@ chmod a+x $@ diff --git a/src/bin/stats/b10-stats-httpd.8 b/src/bin/stats/b10-stats-httpd.8 index ed4aafa6c6..1206e1d791 100644 --- a/src/bin/stats/b10-stats-httpd.8 +++ b/src/bin/stats/b10-stats-httpd.8 @@ -36,7 +36,7 @@ b10-stats-httpd \- BIND 10 HTTP server for HTTP/XML interface of statistics .PP \fBb10\-stats\-httpd\fR -is a standalone HTTP server\&. It is intended for HTTP/XML interface for statistics module\&. This server process runs as a process separated from the process of the BIND 10 Stats daemon (\fBb10\-stats\fR)\&. The server is initially executed by the BIND 10 boss process (\fBbind10\fR) and eventually exited by it\&. The server is intended to be server requests by HTTP clients like web browsers and third\-party modules\&. When the server is asked, it requests BIND 10 statistics data from +is a standalone HTTP server\&. It is intended for HTTP/XML interface for statistics module\&. This server process runs as a process separated from the process of the BIND 10 Stats daemon (\fBb10\-stats\fR)\&. The server is initially executed by the BIND 10 boss process (\fBbind10\fR) and eventually exited by it\&. 
The server is intended to be server requests by HTTP clients like web browsers and third\-party modules\&. When the server is asked, it requests BIND 10 statistics data or its schema from \fBb10\-stats\fR, and it sends the data back in Python dictionary format and the server converts it into XML format\&. The server sends it to the HTTP client\&. The server can send three types of document, which are XML (Extensible Markup Language), XSD (XML Schema definition) and XSL (Extensible Stylesheet Language)\&. The XML document is the statistics data of BIND 10, The XSD document is the data schema of it, and The XSL document is the style sheet to be showed for the web browsers\&. There is different URL for each document\&. But please note that you would be redirected to the URL of XML document if you request the URL of the root document\&. For example, you would be redirected to http://127\&.0\&.0\&.1:8000/bind10/statistics/xml if you request http://127\&.0\&.0\&.1:8000/\&. Please see the manual and the spec file of \fBb10\-stats\fR for more details about the items of BIND 10 statistics\&. The server uses CC session in communication with @@ -66,10 +66,6 @@ bindctl(1)\&. Please see the manual of bindctl(1) about how to configure the settings\&. .PP -/usr/local/share/bind10\-devel/stats\-schema\&.spec -\(em This is a spec file for data schema of of BIND 10 statistics\&. This schema cannot be configured via -bindctl(1)\&. -.PP /usr/local/share/bind10\-devel/stats\-httpd\-xml\&.tpl \(em the template file of XML document\&. diff --git a/src/bin/stats/b10-stats-httpd.xml b/src/bin/stats/b10-stats-httpd.xml index 34c704f509..c8df9b8a6e 100644 --- a/src/bin/stats/b10-stats-httpd.xml +++ b/src/bin/stats/b10-stats-httpd.xml @@ -57,7 +57,7 @@ by the BIND 10 boss process (bind10) and eventually exited by it. The server is intended to be server requests by HTTP clients like web browsers and third-party modules. When the server is - asked, it requests BIND 10 statistics data from + asked, it requests BIND 10 statistics data or its schema from b10-stats, and it sends the data back in Python dictionary format and the server converts it into XML format. The server sends it to the HTTP client. The server can send three types of document, @@ -112,12 +112,6 @@ of bindctl1 about how to configure the settings. - /usr/local/share/bind10-devel/stats-schema.spec - - — This is a spec file for data schema of - of BIND 10 statistics. This schema cannot be configured - via bindctl1. 
- /usr/local/share/bind10-devel/stats-httpd-xml.tpl @@ -138,7 +132,7 @@ CONFIGURATION AND COMMANDS - The configurable setting in + The configurable setting in stats-httpd.spec is: diff --git a/src/bin/stats/b10-stats.8 b/src/bin/stats/b10-stats.8 index f69e4d37fa..0204ca10bc 100644 --- a/src/bin/stats/b10-stats.8 +++ b/src/bin/stats/b10-stats.8 @@ -1,22 +1,13 @@ '\" t .\" Title: b10-stats .\" Author: [FIXME: author] [see http://docbook.sf.net/el/author] -.\" Generator: DocBook XSL Stylesheets v1.76.1 -.\" Date: Oct 15, 2010 +.\" Generator: DocBook XSL Stylesheets v1.75.2 +.\" Date: August 11, 2011 .\" Manual: BIND10 .\" Source: BIND10 .\" Language: English .\" -.TH "B10\-STATS" "8" "Oct 15, 2010" "BIND10" "BIND10" -.\" ----------------------------------------------------------------- -.\" * Define some portability stuff -.\" ----------------------------------------------------------------- -.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.\" http://bugs.debian.org/507673 -.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html -.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.ie \n(.g .ds Aq \(aq -.el .ds Aq ' +.TH "B10\-STATS" "8" "August 11, 2011" "BIND10" "BIND10" .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- @@ -45,9 +36,9 @@ with other modules like \fBb10\-auth\fR and so on\&. It waits for coming data from other modules, then other modules send data to stats module periodically\&. Other modules send stats data to stats module independently from implementation of stats module, so the frequency of sending data may not be constant\&. Stats module collects data and aggregates it\&. \fBb10\-stats\fR -invokes "sendstats" command for +invokes an internal command for \fBbind10\fR -after its initial starting because it\*(Aqs sure to collect statistics data from +after its initial starting because it\'s sure to collect statistics data from \fBbind10\fR\&. .SH "OPTIONS" .PP @@ -59,6 +50,84 @@ This \fBb10\-stats\fR switches to verbose mode\&. It sends verbose messages to STDOUT\&. .RE +.SH "CONFIGURATION AND COMMANDS" +.PP +The +\fBb10\-stats\fR +command does not have any configurable settings\&. +.PP +The configuration commands are: +.PP + + +\fBremove\fR +removes the named statistics name and data\&. +.PP + + +\fBreset\fR +will reset all statistics data to default values except for constant names\&. This may re\-add previously removed statistics names\&. +.PP + +\fBset\fR +.PP + +\fBshow\fR +will send the statistics data in JSON format\&. By default, it outputs all the statistics data it has collected\&. An optional item name may be specified to receive individual output\&. +.PP + +\fBshutdown\fR +will shutdown the +\fBb10\-stats\fR +process\&. (Note that the +\fBbind10\fR +parent may restart it\&.) +.PP + +\fBstatus\fR +simply indicates that the daemon is running\&. +.SH "STATISTICS DATA" +.PP +The +\fBb10\-stats\fR +daemon contains these statistics: +.PP +report_time +.RS 4 +The latest report date and time in ISO 8601 format\&. +.RE +.PP +stats\&.boot_time +.RS 4 +The date and time when this daemon was started in ISO 8601 format\&. This is a constant which can\'t be reset except by restarting +\fBb10\-stats\fR\&. +.RE +.PP +stats\&.last_update_time +.RS 4 +The date and time (in ISO 8601 format) when this daemon last received data from another component\&. 
+.RE +.PP +stats\&.lname +.RS 4 +This is the name used for the +\fBb10\-msgq\fR +command\-control channel\&. (This is a constant which can\'t be reset except by restarting +\fBb10\-stats\fR\&.) +.RE +.PP +stats\&.start_time +.RS 4 +This is the date and time (in ISO 8601 format) when this daemon started collecting data\&. +.RE +.PP +stats\&.timestamp +.RS 4 +The current date and time represented in seconds since UNIX epoch (1970\-01\-01T0 0:00:00Z) with precision (delimited with a period) up to one hundred thousandth of second\&. +.RE +.PP +See other manual pages for explanations for their statistics that are kept track by +\fBb10\-stats\fR\&. .SH "FILES" .PP /usr/local/share/bind10\-devel/stats\&.spec @@ -66,10 +135,6 @@ switches to verbose mode\&. It sends verbose messages to STDOUT\&. \fBb10\-stats\fR\&. It contains commands for \fBb10\-stats\fR\&. They can be invoked via bindctl(1)\&. -.PP -/usr/local/share/bind10\-devel/stats\-schema\&.spec -\(em This is a spec file for data schema of of BIND 10 statistics\&. This schema cannot be configured via -bindctl(1)\&. .SH "SEE ALSO" .PP @@ -82,7 +147,7 @@ BIND 10 Guide\&. .PP The \fBb10\-stats\fR -daemon was initially designed and implemented by Naoki Kambe of JPRS in Oct 2010\&. +daemon was initially designed and implemented by Naoki Kambe of JPRS in October 2010\&. .SH "COPYRIGHT" .br Copyright \(co 2010 Internet Systems Consortium, Inc. ("ISC") diff --git a/src/bin/stats/b10-stats.xml b/src/bin/stats/b10-stats.xml index f0c472dd29..13ada7aa4a 100644 --- a/src/bin/stats/b10-stats.xml +++ b/src/bin/stats/b10-stats.xml @@ -20,7 +20,7 @@ - Oct 15, 2010 + August 11, 2011 @@ -64,9 +64,10 @@ send stats data to stats module independently from implementation of stats module, so the frequency of sending data may not be constant. Stats module collects data and aggregates - it. b10-stats invokes "sendstats" command + it. b10-stats invokes an internal command for bind10 after its initial starting because it's sure to collect statistics data from bind10. + @@ -86,6 +87,123 @@ + + CONFIGURATION AND COMMANDS + + + The b10-stats command does not have any + configurable settings. + + + + + The configuration commands are: + + + + + remove removes the named statistics name and data. + + + + + reset will reset all statistics data to + default values except for constant names. + This may re-add previously removed statistics names. + + + + set + + + + + show will send the statistics data + in JSON format. + By default, it outputs all the statistics data it has collected. + An optional item name may be specified to receive individual output. + + + + + + shutdown will shutdown the + b10-stats process. + (Note that the bind10 parent may restart it.) + + + + status simply indicates that the daemon is + running. + + + + + + STATISTICS DATA + + + The b10-stats daemon contains these statistics: + + + + + + report_time + + The latest report date and time in + ISO 8601 format. + + + + stats.boot_time + The date and time when this daemon was + started in ISO 8601 format. + This is a constant which can't be reset except by restarting + b10-stats. + + + + + stats.last_update_time + The date and time (in ISO 8601 format) + when this daemon last received data from another component. + + + + + stats.lname + This is the name used for the + b10-msgq command-control channel. + (This is a constant which can't be reset except by restarting + b10-stats.) + + + + + stats.start_time + This is the date and time (in ISO 8601 format) + when this daemon started collecting data. 
+ + + + + stats.timestamp + The current date and time represented in + seconds since UNIX epoch (1970-01-01T0 0:00:00Z) with + precision (delimited with a period) up to + one hundred thousandth of second. + + + + + + See other manual pages for explanations for their statistics + that are kept track by b10-stats. + + + + FILES /usr/local/share/bind10-devel/stats.spec @@ -95,12 +213,6 @@ invoked via bindctl1. - /usr/local/share/bind10-devel/stats-schema.spec - - — This is a spec file for data schema of - of BIND 10 statistics. This schema cannot be configured - via bindctl1. - @@ -126,7 +238,7 @@ HISTORY The b10-stats daemon was initially designed - and implemented by Naoki Kambe of JPRS in Oct 2010. + and implemented by Naoki Kambe of JPRS in October 2010. diff --git a/src/bin/xfrin/tests/Makefile.am b/src/bin/xfrin/tests/Makefile.am index 0f485aa2fd..3d560093ec 100644 --- a/src/bin/xfrin/tests/Makefile.am +++ b/src/bin/xfrin/tests/Makefile.am @@ -6,7 +6,7 @@ EXTRA_DIST = $(PYTESTS) # required by loadable python modules. LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/xfr/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/util/io/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS @@ -19,6 +19,6 @@ endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/bin/xfrin:$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python \ + PYTHONPATH=$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/bin/xfrin:$(COMMON_PYTHON_PATH) \ $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ done diff --git a/src/bin/xfrin/tests/xfrin_test.py b/src/bin/xfrin/tests/xfrin_test.py index 2acd9d68a9..05cce986a9 100644 --- a/src/bin/xfrin/tests/xfrin_test.py +++ b/src/bin/xfrin/tests/xfrin_test.py @@ -18,6 +18,7 @@ import socket import io from isc.testutils.tsigctx_mock import MockTSIGContext from xfrin import * +import isc.log # # Commonly used (mostly constant) test parameters @@ -954,13 +955,20 @@ class TestXfrin(unittest.TestCase): self.assertEqual(zone_info.tsig_key.to_text(), TSIGKey(zone_config['tsig_key']).to_text()) else: self.assertIsNone(zone_info.tsig_key) + if 'ixfr_disabled' in zone_config and\ + zone_config.get('ixfr_disabled'): + self.assertTrue(zone_info.ixfr_disabled) + else: + # if not set, should default to False + self.assertFalse(zone_info.ixfr_disabled) def test_command_handler_zones(self): config1 = { 'transfers_in': 3, 'zones': [ { 'name': 'test.example.', 'master_addr': '192.0.2.1', - 'master_port': 53 + 'master_port': 53, + 'ixfr_disabled': False } ]} 
self.assertEqual(self.xfr.config_handler(config1)['result'][0], 0) @@ -971,7 +979,8 @@ class TestXfrin(unittest.TestCase): { 'name': 'test.example.', 'master_addr': '192.0.2.2', 'master_port': 53, - 'tsig_key': "example.com:SFuWd/q99SzF8Yzd1QbB9g==" + 'tsig_key': "example.com:SFuWd/q99SzF8Yzd1QbB9g==", + 'ixfr_disabled': True } ]} self.assertEqual(self.xfr.config_handler(config2)['result'][0], 0) @@ -1115,6 +1124,7 @@ class TestMain(unittest.TestCase): if __name__== "__main__": try: + isc.log.resetUnitTestRootLogger() unittest.main() except KeyboardInterrupt as e: print(e) diff --git a/src/bin/xfrin/xfrin.py.in b/src/bin/xfrin/xfrin.py.in index 64e3563200..a77a383315 100755 --- a/src/bin/xfrin/xfrin.py.in +++ b/src/bin/xfrin/xfrin.py.in @@ -29,7 +29,7 @@ from isc.config.ccsession import * from isc.notify import notify_out import isc.util.process import isc.net.parse -from xfrin_messages import * +from isc.log_messages.xfrin_messages import * isc.log.init("b10-xfrin") logger = isc.log.Logger("xfrin") @@ -152,7 +152,7 @@ class XfrinConnection(asyncore.dispatcher): self.connect(self._master_address) return True except socket.error as e: - logger.error(CONNECT_MASTER, self._master_address, str(e)) + logger.error(XFRIN_CONNECT_MASTER, self._master_address, str(e)) return False def _create_query(self, query_type): @@ -451,6 +451,7 @@ class ZoneInfo: self.set_master_port(config_data.get('master_port')) self.set_zone_class(config_data.get('class')) self.set_tsig_key(config_data.get('tsig_key')) + self.set_ixfr_disabled(config_data.get('ixfr_disabled')) def set_name(self, name_str): """Set the name for this zone given a name string. @@ -525,6 +526,16 @@ class ZoneInfo: errmsg = "bad TSIG key string: " + tsig_key_str raise XfrinZoneInfoException(errmsg) + def set_ixfr_disabled(self, ixfr_disabled): + """Set ixfr_disabled. If set to False (the default), it will use + IXFR for incoming transfers. If set to True, it will use AXFR. 
+ At this moment there is no automatic fallback""" + # don't care what type it is; if evaluates to true, set to True + if ixfr_disabled: + self.ixfr_disabled = True + else: + self.ixfr_disabled = False + def get_master_addr_info(self): return (self.master_addr.family, socket.SOCK_STREAM, (str(self.master_addr), self.master_port)) @@ -548,8 +559,7 @@ class Xfrin: self._send_cc_session = isc.cc.Session() self._module_cc = isc.config.ModuleCCSession(SPECFILE_LOCATION, self.config_handler, - self.command_handler, - None, True) + self.command_handler) self._module_cc.start() config_data = self._module_cc.get_full_config() self.config_handler(config_data) diff --git a/src/bin/xfrin/xfrin.spec b/src/bin/xfrin/xfrin.spec index a3e62cefc4..bc937205d8 100644 --- a/src/bin/xfrin/xfrin.spec +++ b/src/bin/xfrin/xfrin.spec @@ -43,6 +43,11 @@ { "item_name": "tsig_key", "item_type": "string", "item_optional": true + }, + { "item_name": "ixfr_disabled", + "item_type": "boolean", + "item_optional": false, + "item_default": false } ] } diff --git a/src/bin/xfrout/Makefile.am b/src/bin/xfrout/Makefile.am index c5492adf09..6100e64bf7 100644 --- a/src/bin/xfrout/Makefile.am +++ b/src/bin/xfrout/Makefile.am @@ -6,9 +6,13 @@ pkglibexec_SCRIPTS = b10-xfrout b10_xfroutdir = $(pkgdatadir) b10_xfrout_DATA = xfrout.spec -pyexec_DATA = xfrout_messages.py -CLEANFILES= b10-xfrout xfrout.pyc xfrout.spec xfrout_messages.py xfrout_messages.pyc +nodist_pylogmessage_PYTHON = $(PYTHON_LOGMSGPKG_DIR)/work/xfrout_messages.py +pylogmessagedir = $(pyexecdir)/isc/log_messages/ + +CLEANFILES = b10-xfrout xfrout.pyc xfrout.spec +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/xfrout_messages.py +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/xfrout_messages.pyc man_MANS = b10-xfrout.8 EXTRA_DIST = $(man_MANS) b10-xfrout.xml xfrout_messages.mes @@ -21,14 +25,15 @@ b10-xfrout.8: b10-xfrout.xml endif # Define rule to build logging source files from message file -xfrout_messages.py: xfrout_messages.mes - $(top_builddir)/src/lib/log/compiler/message -p $(top_srcdir)/src/bin/xfrout/xfrout_messages.mes +$(PYTHON_LOGMSGPKG_DIR)/work/xfrout_messages.py : xfrout_messages.mes + $(top_builddir)/src/lib/log/compiler/message \ + -d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/xfrout_messages.mes xfrout.spec: xfrout.spec.pre $(SED) -e "s|@@LOCALSTATEDIR@@|$(localstatedir)|" xfrout.spec.pre >$@ # this is done here since configure.ac AC_OUTPUT doesn't expand exec_prefix -b10-xfrout: xfrout.py xfrout_messages.py +b10-xfrout: xfrout.py $(PYTHON_LOGMSGPKG_DIR)/work/xfrout_messages.py $(SED) -e "s|@@PYTHONPATH@@|@pyexecdir@|" \ -e "s|@@LOCALSTATEDIR@@|$(localstatedir)|" xfrout.py >$@ chmod a+x $@ diff --git a/src/bin/xfrout/b10-xfrout.xml b/src/bin/xfrout/b10-xfrout.xml index ad71fe2bf7..9889b8058e 100644 --- a/src/bin/xfrout/b10-xfrout.xml +++ b/src/bin/xfrout/b10-xfrout.xml @@ -134,6 +134,14 @@ data storage types. + + + The configuration commands are: diff --git a/src/bin/xfrout/tests/Makefile.am b/src/bin/xfrout/tests/Makefile.am index 6ca2b420e1..ace8fc9f30 100644 --- a/src/bin/xfrout/tests/Makefile.am +++ b/src/bin/xfrout/tests/Makefile.am @@ -1,15 +1,17 @@ PYCOVERAGE_RUN=@PYCOVERAGE_RUN@ PYTESTS = xfrout_test.py -EXTRA_DIST = $(PYTESTS) +noinst_SCRIPTS = $(PYTESTS) # If necessary (rare cases), explicitly specify paths to dynamic libraries # required by loadable python modules. 
LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/util/io/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/util/io/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$(abs_top_builddir)/src/lib/acl/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS +# We set B10_FROM_BUILD below, so that the test can refer to the in-source +# spec file. check-local: if ENABLE_PYTHON_COVERAGE touch $(abs_top_srcdir)/.coverage @@ -18,7 +20,9 @@ if ENABLE_PYTHON_COVERAGE endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ + chmod +x $(abs_builddir)/$$pytest ; \ + B10_FROM_BUILD=$(abs_top_builddir) \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_builddir)/src/bin/xfrout:$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/util/io/.libs \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/bin/xfrout:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/util/io/.libs \ $(PYCOVERAGE_RUN) $(abs_builddir)/$$pytest || exit ; \ done diff --git a/src/bin/xfrout/tests/xfrout_test.py.in b/src/bin/xfrout/tests/xfrout_test.py.in index adabf48ebf..85979a012e 100644 --- a/src/bin/xfrout/tests/xfrout_test.py.in +++ b/src/bin/xfrout/tests/xfrout_test.py.in @@ -20,9 +20,12 @@ import unittest import os from isc.testutils.tsigctx_mock import MockTSIGContext from isc.cc.session import * +import isc.config from pydnspp import * from xfrout import * import xfrout +import isc.log +import isc.acl.dns TSIG_KEY = TSIGKey("example.com:SFuWd/q99SzF8Yzd1QbB9g==") @@ -99,26 +102,34 @@ class TestXfroutSession(unittest.TestCase): def message_has_tsig(self, msg): return msg.get_tsig_record() is not None - def create_request_data_with_tsig(self): + def create_request_data(self, with_tsig=False): msg = Message(Message.RENDER) query_id = 0x1035 msg.set_qid(query_id) msg.set_opcode(Opcode.QUERY()) msg.set_rcode(Rcode.NOERROR()) - query_question = Question(Name("example.com."), RRClass.IN(), RRType.AXFR()) + query_question = Question(Name("example.com"), RRClass.IN(), + RRType.AXFR()) msg.add_question(query_question) renderer = MessageRenderer() - tsig_ctx = MockTSIGContext(TSIG_KEY) - msg.to_wire(renderer, tsig_ctx) - reply_data = renderer.get_data() - return reply_data + if with_tsig: + tsig_ctx = MockTSIGContext(TSIG_KEY) + msg.to_wire(renderer, tsig_ctx) + else: + msg.to_wire(renderer) + request_data = renderer.get_data() + return request_data def setUp(self): self.sock = MySocket(socket.AF_INET,socket.SOCK_STREAM) - #self.log = isc.log.NSLogger('xfrout', '', severity = 'critical', log_to_console = False ) - self.xfrsess = MyXfroutSession(self.sock, None, Dbserver(), TSIGKeyRing()) - self.mdata = 
bytes(b'\xd6=\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example\x03com\x00\x00\xfc\x00\x01') + self.xfrsess = MyXfroutSession(self.sock, None, Dbserver(), + TSIGKeyRing(), ('127.0.0.1', 12345), + # When not testing ACLs, simply accept + isc.acl.dns.REQUEST_LOADER.load( + [{"action": "ACCEPT"}]), + {}) + self.mdata = self.create_request_data(False) self.soa_record = (4, 3, 'example.com.', 'com.example.', 3600, 'SOA', None, 'master.example.com. admin.example.com. 1234 3600 1800 2419200 7200') def test_parse_query_message(self): @@ -126,17 +137,158 @@ class TestXfroutSession(unittest.TestCase): self.assertEqual(get_rcode.to_text(), "NOERROR") # tsig signed query message - request_data = self.create_request_data_with_tsig() + request_data = self.create_request_data(True) # BADKEY [rcode, msg] = self.xfrsess._parse_query_message(request_data) self.assertEqual(rcode.to_text(), "NOTAUTH") self.assertTrue(self.xfrsess._tsig_ctx is not None) # NOERROR - self.xfrsess._tsig_key_ring.add(TSIG_KEY) + self.assertEqual(TSIGKeyRing.SUCCESS, + self.xfrsess._tsig_key_ring.add(TSIG_KEY)) [rcode, msg] = self.xfrsess._parse_query_message(request_data) self.assertEqual(rcode.to_text(), "NOERROR") self.assertTrue(self.xfrsess._tsig_ctx is not None) + def check_transfer_acl(self, acl_setter): + # ACL checks, put some ACL inside + acl_setter(isc.acl.dns.REQUEST_LOADER.load([ + { + "from": "127.0.0.1", + "action": "ACCEPT" + }, + { + "from": "192.0.2.1", + "action": "DROP" + } + ])) + # Localhost (the default in this test) is accepted + rcode, msg = self.xfrsess._parse_query_message(self.mdata) + self.assertEqual(rcode.to_text(), "NOERROR") + # This should be dropped completely, therefore returning None + self.xfrsess._remote = ('192.0.2.1', 12345) + rcode, msg = self.xfrsess._parse_query_message(self.mdata) + self.assertEqual(None, rcode) + # This should be refused, therefore REFUSED + self.xfrsess._remote = ('192.0.2.2', 12345) + rcode, msg = self.xfrsess._parse_query_message(self.mdata) + self.assertEqual(rcode.to_text(), "REFUSED") + + # TSIG signed request + request_data = self.create_request_data(True) + + # If the TSIG check fails, it should not check ACL + # (If it checked ACL as well, it would just drop the request) + self.xfrsess._remote = ('192.0.2.1', 12345) + self.xfrsess._tsig_key_ring = TSIGKeyRing() + rcode, msg = self.xfrsess._parse_query_message(request_data) + self.assertEqual(rcode.to_text(), "NOTAUTH") + self.assertTrue(self.xfrsess._tsig_ctx is not None) + + # ACL using TSIG: successful case + acl_setter(isc.acl.dns.REQUEST_LOADER.load([ + {"key": "example.com", "action": "ACCEPT"}, {"action": "REJECT"} + ])) + self.assertEqual(TSIGKeyRing.SUCCESS, + self.xfrsess._tsig_key_ring.add(TSIG_KEY)) + [rcode, msg] = self.xfrsess._parse_query_message(request_data) + self.assertEqual(rcode.to_text(), "NOERROR") + + # ACL using TSIG: key name doesn't match; should be rejected + acl_setter(isc.acl.dns.REQUEST_LOADER.load([ + {"key": "example.org", "action": "ACCEPT"}, {"action": "REJECT"} + ])) + [rcode, msg] = self.xfrsess._parse_query_message(request_data) + self.assertEqual(rcode.to_text(), "REFUSED") + + # ACL using TSIG: no TSIG; should be rejected + acl_setter(isc.acl.dns.REQUEST_LOADER.load([ + {"key": "example.org", "action": "ACCEPT"}, {"action": "REJECT"} + ])) + [rcode, msg] = self.xfrsess._parse_query_message(self.mdata) + self.assertEqual(rcode.to_text(), "REFUSED") + + # + # ACL using IP + TSIG: both should match + # + acl_setter(isc.acl.dns.REQUEST_LOADER.load([ + {"ALL": [{"key": 
"example.com"}, {"from": "192.0.2.1"}], + "action": "ACCEPT"}, + {"action": "REJECT"} + ])) + # both matches + self.xfrsess._remote = ('192.0.2.1', 12345) + [rcode, msg] = self.xfrsess._parse_query_message(request_data) + self.assertEqual(rcode.to_text(), "NOERROR") + # TSIG matches, but address doesn't + self.xfrsess._remote = ('192.0.2.2', 12345) + [rcode, msg] = self.xfrsess._parse_query_message(request_data) + self.assertEqual(rcode.to_text(), "REFUSED") + # Address matches, but TSIG doesn't (not included) + self.xfrsess._remote = ('192.0.2.1', 12345) + [rcode, msg] = self.xfrsess._parse_query_message(self.mdata) + self.assertEqual(rcode.to_text(), "REFUSED") + # Neither address nor TSIG matches + self.xfrsess._remote = ('192.0.2.2', 12345) + [rcode, msg] = self.xfrsess._parse_query_message(self.mdata) + self.assertEqual(rcode.to_text(), "REFUSED") + + def test_transfer_acl(self): + # ACL checks only with the default ACL + def acl_setter(acl): + self.xfrsess._acl = acl + self.check_transfer_acl(acl_setter) + + def test_transfer_zoneacl(self): + # ACL check with a per zone ACL + default ACL. The per zone ACL + # should match the queryied zone, so it should be used. + def acl_setter(acl): + zone_key = ('IN', 'example.com.') + self.xfrsess._zone_config[zone_key] = {} + self.xfrsess._zone_config[zone_key]['transfer_acl'] = acl + self.xfrsess._acl = isc.acl.dns.REQUEST_LOADER.load([ + {"from": "127.0.0.1", "action": "DROP"}]) + self.check_transfer_acl(acl_setter) + + def test_transfer_zoneacl_nomatch(self): + # similar to the previous one, but the per zone doesn't match the + # query. The default should be used. + def acl_setter(acl): + zone_key = ('IN', 'example.org.') + self.xfrsess._zone_config[zone_key] = {} + self.xfrsess._zone_config[zone_key]['transfer_acl'] = \ + isc.acl.dns.REQUEST_LOADER.load([ + {"from": "127.0.0.1", "action": "DROP"}]) + self.xfrsess._acl = acl + self.check_transfer_acl(acl_setter) + + def test_get_transfer_acl(self): + # set the default ACL. If there's no specific zone ACL, this one + # should be used. + self.xfrsess._acl = isc.acl.dns.REQUEST_LOADER.load([ + {"from": "127.0.0.1", "action": "ACCEPT"}]) + acl = self.xfrsess._get_transfer_acl(Name('example.com'), RRClass.IN()) + self.assertEqual(acl, self.xfrsess._acl) + + # install a per zone config with transfer ACL for example.com. Then + # that ACL will be used for example.com; for others the default ACL + # will still be used. + com_acl = isc.acl.dns.REQUEST_LOADER.load([ + {"from": "127.0.0.1", "action": "REJECT"}]) + self.xfrsess._zone_config[('IN', 'example.com.')] = {} + self.xfrsess._zone_config[('IN', 'example.com.')]['transfer_acl'] = \ + com_acl + self.assertEqual(com_acl, + self.xfrsess._get_transfer_acl(Name('example.com'), + RRClass.IN())) + self.assertEqual(self.xfrsess._acl, + self.xfrsess._get_transfer_acl(Name('example.org'), + RRClass.IN())) + + # Name matching should be case insensitive. 
+ self.assertEqual(com_acl, + self.xfrsess._get_transfer_acl(Name('EXAMPLE.COM'), + RRClass.IN())) + def test_get_query_zone_name(self): msg = self.getmsg() self.assertEqual(self.xfrsess._get_query_zone_name(msg), "example.com.") @@ -195,20 +347,6 @@ class TestXfroutSession(unittest.TestCase): self.assertEqual(msg.get_rcode(), rcode) self.assertTrue(msg.get_header_flag(Message.HEADERFLAG_AA)) - def test_reply_query_with_format_error(self): - msg = self.getmsg() - self.xfrsess._reply_query_with_format_error(msg, self.sock) - get_msg = self.sock.read_msg() - self.assertEqual(get_msg.get_rcode().to_text(), "FORMERR") - - # tsig signed message - msg = self.getmsg() - self.xfrsess._tsig_ctx = self.create_mock_tsig_ctx(TSIGError.NOERROR) - self.xfrsess._reply_query_with_format_error(msg, self.sock) - get_msg = self.sock.read_msg() - self.assertEqual(get_msg.get_rcode().to_text(), "FORMERR") - self.assertTrue(self.message_has_tsig(get_msg)) - def test_create_rrset_from_db_record(self): rrset = self.xfrsess._create_rrset_from_db_record(self.soa_record) self.assertEqual(rrset.get_name().to_text(), "example.com.") @@ -502,9 +640,11 @@ class TestXfroutSession(unittest.TestCase): # and it should not have sent anything else self.assertEqual(0, len(self.sock.sendqueue)) -class MyCCSession(): +class MyCCSession(isc.config.ConfigData): def __init__(self): - pass + module_spec = isc.config.module_spec_from_file( + xfrout.SPECFILE_LOCATION) + ConfigData.__init__(self, module_spec) def get_remote_config_value(self, module_name, identifier): if module_name == "Auth" and identifier == "database_file": @@ -515,18 +655,42 @@ class MyCCSession(): class MyUnixSockServer(UnixSockServer): def __init__(self): - self._lock = threading.Lock() - self._transfers_counter = 0 self._shutdown_event = threading.Event() - self._max_transfers_out = 10 + self._common_init() self._cc = MyCCSession() - #self._log = isc.log.NSLogger('xfrout', '', severity = 'critical', log_to_console = False ) + self.update_config_data(self._cc.get_full_config()) class TestUnixSockServer(unittest.TestCase): def setUp(self): self.write_sock, self.read_sock = socket.socketpair() self.unix = MyUnixSockServer() + def test_guess_remote(self): + """Test we can guess the remote endpoint when we have only the + file descriptor. 
This is needed, because we get only that one + from auth.""" + # We test with UDP, as it can be "connected" without other + # endpoint + sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + sock.connect(('127.0.0.1', 12345)) + self.assertEqual(('127.0.0.1', 12345), + self.unix._guess_remote(sock.fileno())) + if socket.has_ipv6: + # Don't check IPv6 address on hosts not supporting them + sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) + sock.connect(('::1', 12345)) + self.assertEqual(('::1', 12345, 0, 0), + self.unix._guess_remote(sock.fileno())) + # Try when pretending there's no IPv6 support + # (No need to pretend when there's really no IPv6) + xfrout.socket.has_ipv6 = False + sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + sock.connect(('127.0.0.1', 12345)) + self.assertEqual(('127.0.0.1', 12345), + self.unix._guess_remote(sock.fileno())) + # Return it back + xfrout.socket.has_ipv6 = True + def test_receive_query_message(self): send_msg = b"\xd6=\x00\x00\x00\x01\x00" msg_len = struct.pack('H', socket.htons(len(send_msg))) @@ -535,15 +699,37 @@ class TestUnixSockServer(unittest.TestCase): recv_msg = self.unix._receive_query_message(self.read_sock) self.assertEqual(recv_msg, send_msg) - def test_updata_config_data(self): + def check_default_ACL(self): + context = isc.acl.dns.RequestContext(socket.getaddrinfo("127.0.0.1", + 1234, 0, socket.SOCK_DGRAM, + socket.IPPROTO_UDP, + socket.AI_NUMERICHOST)[0][4]) + self.assertEqual(isc.acl.acl.ACCEPT, self.unix._acl.execute(context)) + + def check_loaded_ACL(self, acl): + context = isc.acl.dns.RequestContext(socket.getaddrinfo("127.0.0.1", + 1234, 0, socket.SOCK_DGRAM, + socket.IPPROTO_UDP, + socket.AI_NUMERICHOST)[0][4]) + self.assertEqual(isc.acl.acl.ACCEPT, acl.execute(context)) + context = isc.acl.dns.RequestContext(socket.getaddrinfo("192.0.2.1", + 1234, 0, socket.SOCK_DGRAM, + socket.IPPROTO_UDP, + socket.AI_NUMERICHOST)[0][4]) + self.assertEqual(isc.acl.acl.REJECT, acl.execute(context)) + + def test_update_config_data(self): + self.check_default_ACL() tsig_key_str = 'example.com:SFuWd/q99SzF8Yzd1QbB9g==' tsig_key_list = [tsig_key_str] bad_key_list = ['bad..example.com:SFuWd/q99SzF8Yzd1QbB9g=='] self.unix.update_config_data({'transfers_out':10 }) self.assertEqual(self.unix._max_transfers_out, 10) self.assertTrue(self.unix.tsig_key_ring is not None) + self.check_default_ACL() - self.unix.update_config_data({'transfers_out':9, 'tsig_key_ring':tsig_key_list}) + self.unix.update_config_data({'transfers_out':9, + 'tsig_key_ring':tsig_key_list}) self.assertEqual(self.unix._max_transfers_out, 9) self.assertEqual(self.unix.tsig_key_ring.size(), 1) self.unix.tsig_key_ring.remove(Name("example.com.")) @@ -554,6 +740,81 @@ class TestUnixSockServer(unittest.TestCase): self.assertRaises(None, self.unix.update_config_data(config_data)) self.assertEqual(self.unix.tsig_key_ring.size(), 0) + # Load the ACL + self.unix.update_config_data({'transfer_acl': [{'from': '127.0.0.1', + 'action': 'ACCEPT'}]}) + self.check_loaded_ACL(self.unix._acl) + # Pass a wrong data there and check it does not replace the old one + self.assertRaises(XfroutConfigError, + self.unix.update_config_data, + {'transfer_acl': ['Something bad']}) + self.check_loaded_ACL(self.unix._acl) + + def test_zone_config_data(self): + # By default, there's no specific zone config + self.assertEqual({}, self.unix._zone_config) + + # Adding config for a specific zone. The config is empty unless + # explicitly specified. 
+ self.unix.update_config_data({'zone_config': + [{'origin': 'example.com', + 'class': 'IN'}]}) + self.assertEqual({}, self.unix._zone_config[('IN', 'example.com.')]) + + # zone class can be omitted + self.unix.update_config_data({'zone_config': + [{'origin': 'example.com'}]}) + self.assertEqual({}, self.unix._zone_config[('IN', 'example.com.')]) + + # zone class, name are stored in the "normalized" form. class + # strings are upper cased, names are down cased. + self.unix.update_config_data({'zone_config': + [{'origin': 'EXAMPLE.com'}]}) + self.assertEqual({}, self.unix._zone_config[('IN', 'example.com.')]) + + # invalid zone class, name will result in exceptions + self.assertRaises(EmptyLabel, + self.unix.update_config_data, + {'zone_config': [{'origin': 'bad..example'}]}) + self.assertRaises(InvalidRRClass, + self.unix.update_config_data, + {'zone_config': [{'origin': 'example.com', + 'class': 'badclass'}]}) + + # Configuring a couple of more zones + self.unix.update_config_data({'zone_config': + [{'origin': 'example.com'}, + {'origin': 'example.com', + 'class': 'CH'}, + {'origin': 'example.org'}]}) + self.assertEqual({}, self.unix._zone_config[('IN', 'example.com.')]) + self.assertEqual({}, self.unix._zone_config[('CH', 'example.com.')]) + self.assertEqual({}, self.unix._zone_config[('IN', 'example.org.')]) + + # Duplicate data: should be rejected with an exception + self.assertRaises(XfroutConfigError, + self.unix.update_config_data, + {'zone_config': [{'origin': 'example.com'}, + {'origin': 'example.org'}, + {'origin': 'example.com'}]}) + + def test_zone_config_data_with_acl(self): + # Similar to the previous test, but with transfer_acl config + self.unix.update_config_data({'zone_config': + [{'origin': 'example.com', + 'transfer_acl': + [{'from': '127.0.0.1', + 'action': 'ACCEPT'}]}]}) + acl = self.unix._zone_config[('IN', 'example.com.')]['transfer_acl'] + self.check_loaded_ACL(acl) + + # invalid ACL syntax will be rejected with exception + self.assertRaises(XfroutConfigError, + self.unix.update_config_data, + {'zone_config': [{'origin': 'example.com', + 'transfer_acl': + [{'action': 'BADACTION'}]}]}) + def test_get_db_file(self): self.assertEqual(self.unix.get_db_file(), "initdb.file") @@ -670,4 +931,5 @@ class TestInitialization(unittest.TestCase): self.assertEqual(xfrout.UNIX_SOCKET_FILE, "The/Socket/File") if __name__== "__main__": + isc.log.resetUnitTestRootLogger() unittest.main() diff --git a/src/bin/xfrout/xfrout.py.in b/src/bin/xfrout/xfrout.py.in index a75ff22245..8049e29e3a 100755 --- a/src/bin/xfrout/xfrout.py.in +++ b/src/bin/xfrout/xfrout.py.in @@ -35,7 +35,7 @@ import errno from optparse import OptionParser, OptionValueError from isc.util import socketserver_mixin -from xfrout_messages import * +from isc.log_messages.xfrout_messages import * isc.log.init("b10-xfrout") logger = isc.log.Logger("xfrout") @@ -48,8 +48,23 @@ except ImportError as e: # must keep running, so we warn about it and move forward. log.error(XFROUT_IMPORT, str(e)) +from isc.acl.acl import ACCEPT, REJECT, DROP, LoaderError +from isc.acl.dns import REQUEST_LOADER + isc.util.process.rename() +class XfroutConfigError(Exception): + """An exception indicating an error in updating xfrout configuration. + + This exception is raised when the xfrout process encouters an error in + handling configuration updates. Not all syntax error can be caught + at the module-CC layer, so xfrout needs to (explicitly or implicitly) + validate the given configuration data itself. 
When it finds an error + it raises this exception (either directly or by converting an exception + from other modules) as a unified error in configuration. + """ + pass + def init_paths(): global SPECFILE_PATH global AUTH_SPECFILE_PATH @@ -76,14 +91,12 @@ init_paths() SPECFILE_LOCATION = SPECFILE_PATH + "/xfrout.spec" AUTH_SPECFILE_LOCATION = AUTH_SPECFILE_PATH + os.sep + "auth.spec" -MAX_TRANSFERS_OUT = 10 VERBOSE_MODE = False # tsig sign every N axfr packets. TSIG_SIGN_EVERY_NTH = 96 XFROUT_MAX_MESSAGE_SIZE = 65535 - def get_rrset_len(rrset): """Returns the wire length of the given RRset""" bytes = bytearray() @@ -92,16 +105,17 @@ def get_rrset_len(rrset): class XfroutSession(): - def __init__(self, sock_fd, request_data, server, tsig_key_ring): - # The initializer for the superclass may call functions - # that need _log to be set, so we set it first + def __init__(self, sock_fd, request_data, server, tsig_key_ring, remote, + default_acl, zone_config): self._sock_fd = sock_fd self._request_data = request_data self._server = server - #self._log = log self._tsig_key_ring = tsig_key_ring self._tsig_ctx = None self._tsig_len = 0 + self._remote = remote + self._acl = default_acl + self._zone_config = zone_config self.handle() def create_tsig_ctx(self, tsig_record, tsig_key_ring): @@ -114,7 +128,7 @@ class XfroutSession(): self.dns_xfrout_start(self._sock_fd, self._request_data) #TODO, avoid catching all exceptions except Exception as e: - logger.error(XFROUT_HANDLE_QUERY_ERROR, str(e)) + logger.error(XFROUT_HANDLE_QUERY_ERROR, e) pass os.close(self._sock_fd) @@ -137,16 +151,50 @@ class XfroutSession(): try: msg = Message(Message.PARSE) Message.from_wire(msg, mdata) - - # TSIG related checks - rcode = self._check_request_tsig(msg, mdata) - - except Exception as err: - logger.error(XFROUT_PARSE_QUERY_ERROR, str(err)) + except Exception as err: # Exception is too broad + logger.error(XFROUT_PARSE_QUERY_ERROR, err) return Rcode.FORMERR(), None + # TSIG related checks + rcode = self._check_request_tsig(msg, mdata) + + if rcode == Rcode.NOERROR(): + # ACL checks + zone_name = msg.get_question()[0].get_name() + zone_class = msg.get_question()[0].get_class() + acl = self._get_transfer_acl(zone_name, zone_class) + acl_result = acl.execute( + isc.acl.dns.RequestContext(self._remote, + msg.get_tsig_record())) + if acl_result == DROP: + logger.info(XFROUT_QUERY_DROPPED, zone_name, zone_class, + self._remote[0], self._remote[1]) + return None, None + elif acl_result == REJECT: + logger.info(XFROUT_QUERY_REJECTED, zone_name, zone_class, + self._remote[0], self._remote[1]) + return Rcode.REFUSED(), msg + return rcode, msg + def _get_transfer_acl(self, zone_name, zone_class): + '''Return the ACL that should be applied for a given zone. + + The zone is identified by a tuple of name and RR class. + If a per zone configuration for the zone exists and contains + transfer_acl, that ACL will be used; otherwise, the default + ACL will be used. + + ''' + # Internally zone names are managed in lower cased label characters, + # so we first need to convert the name. 
+ zone_name_lower = Name(zone_name.to_text(), True) + config_key = (zone_class.to_text(), zone_name_lower.to_text()) + if config_key in self._zone_config and \ + 'transfer_acl' in self._zone_config[config_key]: + return self._zone_config[config_key]['transfer_acl'] + return self._acl + def _get_query_zone_name(self, msg): question = msg.get_question()[0] return question.get_name().to_text() @@ -183,18 +231,11 @@ class XfroutSession(): def _reply_query_with_error_rcode(self, msg, sock_fd, rcode_): - msg.make_response() - msg.set_rcode(rcode_) - self._send_message(sock_fd, msg, self._tsig_ctx) - - - def _reply_query_with_format_error(self, msg, sock_fd): - '''query message format isn't legal.''' if not msg: return # query message is invalid. send nothing back. msg.make_response() - msg.set_rcode(Rcode.FORMERR()) + msg.set_rcode(rcode_) self._send_message(sock_fd, msg, self._tsig_ctx) def _zone_has_soa(self, zone): @@ -244,10 +285,13 @@ class XfroutSession(): def dns_xfrout_start(self, sock_fd, msg_query): rcode_, msg = self._parse_query_message(msg_query) #TODO. create query message and parse header - if rcode_ == Rcode.NOTAUTH(): + if rcode_ is None: # Dropped by ACL + return + elif rcode_ == Rcode.NOTAUTH() or rcode_ == Rcode.REFUSED(): return self._reply_query_with_error_rcode(msg, sock_fd, rcode_) elif rcode_ != Rcode.NOERROR(): - return self._reply_query_with_format_error(msg, sock_fd) + return self._reply_query_with_error_rcode(msg, sock_fd, + Rcode.FORMERR()) zone_name = self._get_query_zone_name(msg) zone_class_str = self._get_query_zone_class(msg) @@ -257,7 +301,7 @@ class XfroutSession(): if rcode_ != Rcode.NOERROR(): logger.info(XFROUT_AXFR_TRANSFER_FAILED, zone_name, zone_class_str, rcode_.to_text()) - return self. _reply_query_with_error_rcode(msg, sock_fd, rcode_) + return self._reply_query_with_error_rcode(msg, sock_fd, rcode_) try: logger.info(XFROUT_AXFR_TRANSFER_STARTED, zone_name, zone_class_str) @@ -367,21 +411,28 @@ class XfroutSession(): self._send_message_with_last_soa(msg, sock_fd, rrset_soa, message_upper_len, count_since_last_tsig_sign) -class UnixSockServer(socketserver_mixin.NoPollMixIn, ThreadingUnixStreamServer): +class UnixSockServer(socketserver_mixin.NoPollMixIn, + ThreadingUnixStreamServer): '''The unix domain socket server which accept xfr query sent from auth server.''' - def __init__(self, sock_file, handle_class, shutdown_event, config_data, cc): + def __init__(self, sock_file, handle_class, shutdown_event, config_data, + cc): self._remove_unused_sock_file(sock_file) self._sock_file = sock_file socketserver_mixin.NoPollMixIn.__init__(self) ThreadingUnixStreamServer.__init__(self, sock_file, handle_class) - self._lock = threading.Lock() - self._transfers_counter = 0 self._shutdown_event = shutdown_event self._write_sock, self._read_sock = socket.socketpair() - #self._log = log - self.update_config_data(config_data) + self._common_init() self._cc = cc + self.update_config_data(config_data) + + def _common_init(self): + '''Initialization shared with the mock server class used for tests''' + self._lock = threading.Lock() + self._transfers_counter = 0 + self._zone_config = {} + self._acl = None # this will be initialized in update_config_data() def _receive_query_message(self, sock): ''' receive request message from sock''' @@ -459,16 +510,41 @@ class UnixSockServer(socketserver_mixin.NoPollMixIn, ThreadingUnixStreamServer): if not request_data: return - t = threading.Thread(target = self.finish_request, + t = threading.Thread(target=self.finish_request, args 
= (sock_fd, request_data)) if self.daemon_threads: t.daemon = True t.start() + def _guess_remote(self, sock_fd): + """ + Guess remote address and port of the socket. The sock_fd must be a + socket + """ + # This uses a trick. If the socket is IPv4 in reality and we pretend + # it to be IPv6, it returns IPv4 address anyway. This doesn't seem + # to care about the SOCK_STREAM parameter at all (which it really is, + # except for testing) + if socket.has_ipv6: + sock = socket.fromfd(sock_fd, socket.AF_INET6, socket.SOCK_STREAM) + else: + # To make it work even on hosts without IPv6 support + # (Any idea how to simulate this in test?) + sock = socket.fromfd(sock_fd, socket.AF_INET, socket.SOCK_STREAM) + return sock.getpeername() def finish_request(self, sock_fd, request_data): - '''Finish one request by instantiating RequestHandlerClass.''' - self.RequestHandlerClass(sock_fd, request_data, self, self.tsig_key_ring) + '''Finish one request by instantiating RequestHandlerClass. + + This method creates a XfroutSession object. + ''' + self._lock.acquire() + acl = self._acl + zone_config = self._zone_config + self._lock.release() + self.RequestHandlerClass(sock_fd, request_data, self, + self.tsig_key_ring, + self._guess_remote(sock_fd), acl, zone_config) def _remove_unused_sock_file(self, sock_file): '''Try to remove the socket file. If the file is being used @@ -510,14 +586,65 @@ class UnixSockServer(socketserver_mixin.NoPollMixIn, ThreadingUnixStreamServer): pass def update_config_data(self, new_config): - '''Apply the new config setting of xfrout module. ''' - logger.info(XFROUT_NEW_CONFIG) + '''Apply the new config setting of xfrout module. + + ''' self._lock.acquire() - self._max_transfers_out = new_config.get('transfers_out') - self.set_tsig_key_ring(new_config.get('tsig_key_ring')) + try: + logger.info(XFROUT_NEW_CONFIG) + new_acl = self._acl + if 'transfer_acl' in new_config: + try: + new_acl = REQUEST_LOADER.load(new_config['transfer_acl']) + except LoaderError as e: + raise XfroutConfigError('Failed to parse transfer_acl: ' + + str(e)) + + new_zone_config = self._zone_config + zconfig_data = new_config.get('zone_config') + if zconfig_data is not None: + new_zone_config = self.__create_zone_config(zconfig_data) + + self._acl = new_acl + self._zone_config = new_zone_config + self._max_transfers_out = new_config.get('transfers_out') + self.set_tsig_key_ring(new_config.get('tsig_key_ring')) + except Exception as e: + self._lock.release() + raise e self._lock.release() logger.info(XFROUT_NEW_CONFIG_DONE) + def __create_zone_config(self, zone_config_list): + new_config = {} + for zconf in zone_config_list: + # convert the class, origin (name) pair. First build pydnspp + # object to reject invalid input. 
+ zclass_str = zconf.get('class') + if zclass_str is None: + #zclass_str = 'IN' # temporary + zclass_str = self._cc.get_default_value('zone_config/class') + zclass = RRClass(zclass_str) + zorigin = Name(zconf['origin'], True) + config_key = (zclass.to_text(), zorigin.to_text()) + + # reject duplicate config + if config_key in new_config: + raise XfroutConfigError('Duplicate zone_config for ' + + str(zorigin) + '/' + str(zclass)) + + # create a new config entry, build any given (and known) config + new_config[config_key] = {} + if 'transfer_acl' in zconf: + try: + new_config[config_key]['transfer_acl'] = \ + REQUEST_LOADER.load(zconf['transfer_acl']) + except LoaderError as e: + raise XfroutConfigError('Failed to parse transfer_acl ' + + 'for ' + zorigin.to_text() + '/' + + zclass_str + ': ' + str(e)) + return new_config + def set_tsig_key_ring(self, key_list): """Set the tsig_key_ring , given a TSIG key string list representation. """ @@ -563,23 +690,21 @@ class UnixSockServer(socketserver_mixin.NoPollMixIn, ThreadingUnixStreamServer): class XfroutServer: def __init__(self): self._unix_socket_server = None - #self._log = None self._listen_sock_file = UNIX_SOCKET_FILE self._shutdown_event = threading.Event() - self._cc = isc.config.ModuleCCSession(SPECFILE_LOCATION, self.config_handler, self.command_handler, None, True) + self._cc = isc.config.ModuleCCSession(SPECFILE_LOCATION, self.config_handler, self.command_handler) self._config_data = self._cc.get_full_config() self._cc.start() self._cc.add_remote_config(AUTH_SPECFILE_LOCATION); - #self._log = isc.log.NSLogger(self._config_data.get('log_name'), self._config_data.get('log_file'), - # self._config_data.get('log_severity'), self._config_data.get('log_versions'), - # self._config_data.get('log_max_bytes'), True) self._start_xfr_query_listener() self._start_notifier() def _start_xfr_query_listener(self): '''Start a new thread to accept xfr query. 
''' - self._unix_socket_server = UnixSockServer(self._listen_sock_file, XfroutSession, - self._shutdown_event, self._config_data, + self._unix_socket_server = UnixSockServer(self._listen_sock_file, + XfroutSession, + self._shutdown_event, + self._config_data, self._cc) listener = threading.Thread(target=self._unix_socket_server.serve_forever) listener.start() @@ -601,11 +726,13 @@ class XfroutServer: continue self._config_data[key] = new_config[key] - #if self._log: - # self._log.update_config(new_config) - if self._unix_socket_server: - self._unix_socket_server.update_config_data(self._config_data) + try: + self._unix_socket_server.update_config_data(self._config_data) + except Exception as e: + answer = create_answer(1, + "Failed to handle new configuration: " + + str(e)) return answer @@ -685,6 +812,10 @@ if '__main__' == __name__: logger.INFO(XFROUT_STOPPED_BY_KEYBOARD) except SessionError as e: logger.error(XFROUT_CC_SESSION_ERROR, str(e)) + except ModuleCCSessionError as e: + logger.error(XFROUT_MODULECC_SESSION_ERROR, str(e)) + except XfroutConfigError as e: + logger.error(XFROUT_CONFIG_ERROR, str(e)) except SessionTimeout as e: logger.error(XFROUT_CC_SESSION_TIMEOUT_ERROR) diff --git a/src/bin/xfrout/xfrout.spec.pre.in b/src/bin/xfrout/xfrout.spec.pre.in index 2efa3d7d29..0891a579c8 100644 --- a/src/bin/xfrout/xfrout.spec.pre.in +++ b/src/bin/xfrout/xfrout.spec.pre.in @@ -16,27 +16,27 @@ }, { "item_name": "log_file", - "item_type": "string", + "item_type": "string", "item_optional": false, "item_default": "@@LOCALSTATEDIR@@/@PACKAGE@/log/Xfrout.log" }, { "item_name": "log_severity", - "item_type": "string", + "item_type": "string", "item_optional": false, - "item_default": "debug" + "item_default": "debug" }, { "item_name": "log_versions", - "item_type": "integer", + "item_type": "integer", "item_optional": false, - "item_default": 5 + "item_default": 5 }, { "item_name": "log_max_bytes", - "item_type": "integer", + "item_type": "integer", "item_optional": false, - "item_default": 1048576 + "item_default": 1048576 }, { "item_name": "tsig_key_ring", @@ -49,6 +49,57 @@ "item_type": "string", "item_optional": true } + }, + { + "item_name": "transfer_acl", + "item_type": "list", + "item_optional": false, + "item_default": [{"action": "ACCEPT"}], + "list_item_spec": + { + "item_name": "acl_element", + "item_type": "any", + "item_optional": true + } + }, + { + "item_name": "zone_config", + "item_type": "list", + "item_optional": true, + "item_default": [], + "list_item_spec": + { + "item_name": "zone_config_element", + "item_type": "map", + "item_optional": true, + "item_default": { "origin": "" }, + "map_item_spec": [ + { + "item_name": "origin", + "item_type": "string", + "item_optional": false, + "item_default": "" + }, + { + "item_name": "class", + "item_type": "string", + "item_optional": false, + "item_default": "IN" + }, + { + "item_name": "transfer_acl", + "item_type": "list", + "item_optional": true, + "item_default": [{"action": "ACCEPT"}], + "list_item_spec": + { + "item_name": "acl_element", + "item_type": "any", + "item_optional": true + } + } + ] + } } ], "commands": [ diff --git a/src/bin/xfrout/xfrout_messages.mes b/src/bin/xfrout/xfrout_messages.mes index 2dada5404d..b2e432ca5c 100644 --- a/src/bin/xfrout/xfrout_messages.mes +++ b/src/bin/xfrout/xfrout_messages.mes @@ -47,8 +47,19 @@ a valid TSIG key. There was a problem reading from the command and control channel. The most likely cause is that the msgq daemon is not running. 
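For reference, the two new spec entries above ("transfer_acl" and "zone_config") reach UnixSockServer.update_config_data() as ordinary Python data. The sketch below only illustrates that shape; the network, zone and nested ACL values are invented rather than taken from this patch, and "from" is the IP-based check name registered by the DNS ACL loader changes later in this patch.

# Hypothetical configuration data in the shape defined by the new spec entries.
new_config = {
    'transfers_out': 10,
    'transfer_acl': [{'action': 'ACCEPT', 'from': '192.0.2.0/24'},
                     {'action': 'REJECT'}],
    'zone_config': [{'origin': 'example.com', 'class': 'IN',
                     'transfer_acl': [{'action': 'REJECT'}]}]
}
# server.update_config_data(new_config)   # 'server' would be a UnixSockServer instance

With data like this, the global transfer_acl becomes the default ACL and the per-zone entry overrides it for example.com/IN, matching the per-zone lookup added at the top of this patch (zone_config keyed by (class, origin) text, falling back to the default ACL).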
+% XFROUT_MODULECC_SESSION_ERROR error encountered by configuration/command module: %1 +There was a problem in the lower level module handling configuration and +control commands. This could happen for various reasons, but the most likely +cause is that the configuration database contains a syntax error and xfrout +failed to start at initialization. A detailed error message from the module +will also be displayed. + +% XFROUT_CONFIG_ERROR error found in configuration data: %1 +The xfrout process encountered an error when installing the configuration at +startup time. Details of the error are included in the log message. + % XFROUT_CC_SESSION_TIMEOUT_ERROR timeout waiting for cc response -There was a problem reading a response from antoher module over the +There was a problem reading a response from another module over the command and control channel. The most likely cause is that the configuration manager b10-cfgmgr is not running. @@ -95,6 +106,17 @@ in the log message, but at this point no specific information other than that could be given. This points to incomplete exception handling in the code. +% XFROUT_QUERY_DROPPED request to transfer %1/%2 to [%3]:%4 dropped +The xfrout process silently dropped a request to transfer the zone to the given host. +This is required by the ACLs. The %1 and %2 represent the zone name and class, +the %3 and %4 the IP address and port of the peer requesting the transfer. + +% XFROUT_QUERY_REJECTED request to transfer %1/%2 to [%3]:%4 rejected +The xfrout process rejected (by REFUSED rcode) a request to transfer the zone to the +given host. This is because of the ACLs. The %1 and %2 represent the zone name and +class, the %3 and %4 the IP address and port of the peer requesting the +transfer. + % XFROUT_RECEIVE_FILE_DESCRIPTOR_ERROR error receiving the file descriptor for an XFR connection There was an error receiving the file descriptor for the transfer request.
Normally, the request is received by b10-auth, and passed on diff --git a/src/bin/zonemgr/Makefile.am b/src/bin/zonemgr/Makefile.am index 8ab5f7a0ab..aa427fdf2e 100644 --- a/src/bin/zonemgr/Makefile.am +++ b/src/bin/zonemgr/Makefile.am @@ -7,10 +7,15 @@ pkglibexec_SCRIPTS = b10-zonemgr b10_zonemgrdir = $(pkgdatadir) b10_zonemgr_DATA = zonemgr.spec -CLEANFILES = b10-zonemgr zonemgr.pyc zonemgr.spec +nodist_pylogmessage_PYTHON = $(PYTHON_LOGMSGPKG_DIR)/work/zonemgr_messages.py +pylogmessagedir = $(pyexecdir)/isc/log_messages/ + +CLEANFILES = b10-zonemgr zonemgr.pyc zonemgr.spec +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/zonemgr_messages.py +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/zonemgr_messages.pyc man_MANS = b10-zonemgr.8 -EXTRA_DIST = $(man_MANS) b10-zonemgr.xml +EXTRA_DIST = $(man_MANS) b10-zonemgr.xml zonemgr_messages.mes if ENABLE_MAN @@ -19,10 +24,15 @@ b10-zonemgr.8: b10-zonemgr.xml endif +# Build logging source file from message files +$(PYTHON_LOGMSGPKG_DIR)/work/zonemgr_messages.py : zonemgr_messages.mes + $(top_builddir)/src/lib/log/compiler/message \ + -d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/zonemgr_messages.mes + zonemgr.spec: zonemgr.spec.pre $(SED) -e "s|@@LOCALSTATEDIR@@|$(localstatedir)|" zonemgr.spec.pre >$@ -b10-zonemgr: zonemgr.py +b10-zonemgr: zonemgr.py $(PYTHON_LOGMSGPKG_DIR)/work/zonemgr_messages.py $(SED) -e "s|@@PYTHONPATH@@|@pyexecdir@|" \ -e "s|@@LOCALSTATEDIR@@|$(localstatedir)|" zonemgr.py >$@ chmod a+x $@ diff --git a/src/bin/zonemgr/tests/Makefile.am b/src/bin/zonemgr/tests/Makefile.am index 97f9b5e6ca..769d332b87 100644 --- a/src/bin/zonemgr/tests/Makefile.am +++ b/src/bin/zonemgr/tests/Makefile.am @@ -7,7 +7,7 @@ CLEANFILES = initdb.file # required by loadable python modules. LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/util/io/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS @@ -20,6 +20,6 @@ endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_builddir)/src/bin/zonemgr:$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/xfr/.libs \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/bin/zonemgr:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/xfr/.libs \ $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ done diff --git a/src/bin/zonemgr/tests/zonemgr_test.py b/src/bin/zonemgr/tests/zonemgr_test.py index 496ce6bdec..80e41b3194 100644 --- a/src/bin/zonemgr/tests/zonemgr_test.py +++ b/src/bin/zonemgr/tests/zonemgr_test.py @@ -152,6 +152,16 @@ class TestZonemgrRefresh(unittest.TestCase): self.assertTrue((time1 + 3600 * (1 - 
self.zone_refresh._refresh_jitter)) <= zone_timeout) self.assertTrue(zone_timeout <= time2 + 3600) + # No soa rdata + self.zone_refresh._zonemgr_refresh_info[ZONE_NAME_CLASS1_IN]["zone_soa_rdata"] = None + time3 = time.time() + self.zone_refresh._set_zone_retry_timer(ZONE_NAME_CLASS1_IN) + zone_timeout = self.zone_refresh._zonemgr_refresh_info[ZONE_NAME_CLASS1_IN]["next_refresh_time"] + time4 = time.time() + self.assertTrue((time3 + self.zone_refresh._lowerbound_retry * (1 - self.zone_refresh._refresh_jitter)) + <= zone_timeout) + self.assertTrue(zone_timeout <= time4 + self.zone_refresh._lowerbound_retry) + def test_zone_not_exist(self): self.assertFalse(self.zone_refresh._zone_not_exist(ZONE_NAME_CLASS1_IN)) self.assertTrue(self.zone_refresh._zone_not_exist(ZONE_NAME_CLASS1_CH)) @@ -304,8 +314,8 @@ class TestZonemgrRefresh(unittest.TestCase): def get_zone_soa2(zone_name, db_file): return None sqlite3_ds.get_zone_soa = get_zone_soa2 - self.assertRaises(ZonemgrException, self.zone_refresh.zonemgr_add_zone, \ - ZONE_NAME_CLASS1_IN) + self.zone_refresh.zonemgr_add_zone(ZONE_NAME_CLASS2_IN) + self.assertTrue(self.zone_refresh._zonemgr_refresh_info[ZONE_NAME_CLASS2_IN]["zone_soa_rdata"] is None) sqlite3_ds.get_zone_soa = old_get_zone_soa def test_zone_handle_notify(self): @@ -362,6 +372,15 @@ class TestZonemgrRefresh(unittest.TestCase): self.assertRaises(ZonemgrException, self.zone_refresh.zone_refresh_fail, ZONE_NAME_CLASS3_CH) self.assertRaises(ZonemgrException, self.zone_refresh.zone_refresh_fail, ZONE_NAME_CLASS3_IN) + old_get_zone_soa = sqlite3_ds.get_zone_soa + def get_zone_soa(zone_name, db_file): + return None + sqlite3_ds.get_zone_soa = get_zone_soa + self.zone_refresh.zone_refresh_fail(ZONE_NAME_CLASS1_IN) + self.assertEqual(self.zone_refresh._zonemgr_refresh_info[ZONE_NAME_CLASS1_IN]["zone_state"], + ZONE_EXPIRED) + sqlite3_ds.get_zone_soa = old_get_zone_soa + def test_find_need_do_refresh_zone(self): time1 = time.time() self.zone_refresh._zonemgr_refresh_info = { @@ -440,6 +459,8 @@ class TestZonemgrRefresh(unittest.TestCase): "class": "IN" } ] } self.zone_refresh.update_config_data(config_data) + self.assertTrue(("example.net.", "IN") in + self.zone_refresh._zonemgr_refresh_info) # update all values config_data = { @@ -479,14 +500,16 @@ class TestZonemgrRefresh(unittest.TestCase): "secondary_zones": [ { "name": "doesnotexist", "class": "IN" } ] } - self.assertRaises(ZonemgrException, - self.zone_refresh.update_config_data, - config_data) - self.assertEqual(60, self.zone_refresh._lowerbound_refresh) - self.assertEqual(30, self.zone_refresh._lowerbound_retry) - self.assertEqual(19800, self.zone_refresh._max_transfer_timeout) - self.assertEqual(0.25, self.zone_refresh._refresh_jitter) - self.assertEqual(0.35, self.zone_refresh._reload_jitter) + self.zone_refresh.update_config_data(config_data) + name_class = ("doesnotexist.", "IN") + self.assertTrue(self.zone_refresh._zonemgr_refresh_info[name_class]["zone_soa_rdata"] + is None) + # The other configs should be updated successfully + self.assertEqual(61, self.zone_refresh._lowerbound_refresh) + self.assertEqual(31, self.zone_refresh._lowerbound_retry) + self.assertEqual(19801, self.zone_refresh._max_transfer_timeout) + self.assertEqual(0.21, self.zone_refresh._refresh_jitter) + self.assertEqual(0.71, self.zone_refresh._reload_jitter) # Make sure we accept 0 as a value config_data = { @@ -526,10 +549,11 @@ class TestZonemgrRefresh(unittest.TestCase): self.zone_refresh._zonemgr_refresh_info) # This one does not exist 
config.set_zone_list_from_name_classes(["example.net", "CH"]) - self.assertRaises(ZonemgrException, - self.zone_refresh.update_config_data, config) - # So it should not affect the old ones - self.assertTrue(("example.net.", "IN") in + self.zone_refresh.update_config_data(config) + self.assertFalse(("example.net.", "CH") in + self.zone_refresh._zonemgr_refresh_info) + # Simply skip loading soa for the zone, the other configs should be updated successful + self.assertFalse(("example.net.", "IN") in self.zone_refresh._zonemgr_refresh_info) # Make sure it works even when we "accidentally" forget the final dot config.set_zone_list_from_name_classes([("example.net", "IN")]) @@ -596,15 +620,18 @@ class TestZonemgr(unittest.TestCase): config_data3 = {"refresh_jitter" : 0.7} self.zonemgr.config_handler(config_data3) self.assertEqual(0.5, self.zonemgr._config_data.get("refresh_jitter")) - # The zone doesn't exist in database, it should be rejected + # The zone doesn't exist in database, simply skip loading soa for it and log an warning self.zonemgr._zone_refresh = ZonemgrRefresh(None, "initdb.file", None, config_data1) config_data1["secondary_zones"] = [{"name": "nonexistent.example", "class": "IN"}] - self.assertNotEqual(self.zonemgr.config_handler(config_data1), - {"result": [0]}) - # As it is rejected, the old value should be kept - self.assertEqual(0.5, self.zonemgr._config_data.get("refresh_jitter")) + self.assertEqual(self.zonemgr.config_handler(config_data1), + {"result": [0]}) + # other configs should be updated successfully + name_class = ("nonexistent.example.", "IN") + self.assertTrue(self.zonemgr._zone_refresh._zonemgr_refresh_info[name_class]["zone_soa_rdata"] + is None) + self.assertEqual(0.1, self.zonemgr._config_data.get("refresh_jitter")) def test_get_db_file(self): self.assertEqual("initdb.file", self.zonemgr.get_db_file()) diff --git a/src/bin/zonemgr/zonemgr.py.in b/src/bin/zonemgr/zonemgr.py.in index c6e316354b..5c8d9b54ba 100755 --- a/src/bin/zonemgr/zonemgr.py.in +++ b/src/bin/zonemgr/zonemgr.py.in @@ -37,6 +37,16 @@ from isc.datasrc import sqlite3_ds from optparse import OptionParser, OptionValueError from isc.config.ccsession import * import isc.util.process +from isc.log_messages.zonemgr_messages import * + +# Initialize logging for called modules. +isc.log.init("b10-zonemgr") +logger = isc.log.Logger("zonemgr") + +# Constants for debug levels, to be removed when we have #1074. +DBG_START_SHUT = 0 +DBG_ZONEMGR_COMMAND = 10 +DBG_ZONEMGR_BASIC = 40 isc.util.process.rename() @@ -77,13 +87,6 @@ REFRESH_OFFSET = 3 RETRY_OFFSET = 4 EXPIRED_OFFSET = 5 -# verbose mode -VERBOSE_MODE = False - -def log_msg(msg): - if VERBOSE_MODE: - sys.stdout.write("[b10-zonemgr] %s\n" % str(msg)) - class ZonemgrException(Exception): pass @@ -93,7 +96,6 @@ class ZonemgrRefresh: do zone refresh. Zone timers can be started by calling run_timer(), and it can be stopped by calling shutdown() in another thread. - """ def __init__(self, cc, db_file, slave_socket, config_data): @@ -140,7 +142,10 @@ class ZonemgrRefresh: """Set zone next refresh time after zone refresh fail. 
now + retry - retry_jitter <= next_refresh_time <= now + retry """ - zone_retry_time = float(self._get_zone_soa_rdata(zone_name_class).split(" ")[RETRY_OFFSET]) + if (self._get_zone_soa_rdata(zone_name_class) is not None): + zone_retry_time = float(self._get_zone_soa_rdata(zone_name_class).split(" ")[RETRY_OFFSET]) + else: + zone_retry_time = 0.0 zone_retry_time = max(self._lowerbound_retry, zone_retry_time) self._set_zone_timer(zone_name_class, zone_retry_time, self._refresh_jitter * zone_retry_time) @@ -157,6 +162,7 @@ class ZonemgrRefresh: def zone_refresh_success(self, zone_name_class): """Update zone info after zone refresh success""" if (self._zone_not_exist(zone_name_class)): + logger.error(ZONEMGR_UNKNOWN_ZONE_SUCCESS, zone_name_class[0], zone_name_class[1]) raise ZonemgrException("[b10-zonemgr] Zone (%s, %s) doesn't " "belong to zonemgr" % zone_name_class) self.zonemgr_reload_zone(zone_name_class) @@ -167,10 +173,12 @@ class ZonemgrRefresh: def zone_refresh_fail(self, zone_name_class): """Update zone info after zone refresh fail""" if (self._zone_not_exist(zone_name_class)): + logger.error(ZONEMGR_UNKNOWN_ZONE_FAIL, zone_name_class[0], zone_name_class[1]) raise ZonemgrException("[b10-zonemgr] Zone (%s, %s) doesn't " "belong to zonemgr" % zone_name_class) # Is zone expired? - if (self._zone_is_expired(zone_name_class)): + if ((self._get_zone_soa_rdata(zone_name_class) is None) or + self._zone_is_expired(zone_name_class)): self._set_zone_state(zone_name_class, ZONE_EXPIRED) else: self._set_zone_state(zone_name_class, ZONE_OK) @@ -179,6 +187,7 @@ class ZonemgrRefresh: def zone_handle_notify(self, zone_name_class, master): """Handle zone notify""" if (self._zone_not_exist(zone_name_class)): + logger.error(ZONEMGR_UNKNOWN_ZONE_NOTIFIED, zone_name_class[0], zone_name_class[1]) raise ZonemgrException("[b10-zonemgr] Notified zone (%s, %s) " "doesn't belong to zonemgr" % zone_name_class) self._set_zone_notifier_master(zone_name_class, master) @@ -191,19 +200,23 @@ class ZonemgrRefresh: def zonemgr_add_zone(self, zone_name_class): """ Add a zone into zone manager.""" - log_msg("Loading zone (%s, %s)" % zone_name_class) + + logger.debug(DBG_ZONEMGR_BASIC, ZONEMGR_LOAD_ZONE, zone_name_class[0], zone_name_class[1]) zone_info = {} zone_soa = sqlite3_ds.get_zone_soa(str(zone_name_class[0]), self._db_file) - if not zone_soa: - raise ZonemgrException("[b10-zonemgr] zone (%s, %s) doesn't have soa." % zone_name_class) - zone_info["zone_soa_rdata"] = zone_soa[7] + if zone_soa is None: + logger.warn(ZONEMGR_NO_SOA, zone_name_class[0], zone_name_class[1]) + zone_info["zone_soa_rdata"] = None + zone_reload_time = 0.0 + else: + zone_info["zone_soa_rdata"] = zone_soa[7] + zone_reload_time = float(zone_soa[7].split(" ")[RETRY_OFFSET]) zone_info["zone_state"] = ZONE_OK zone_info["last_refresh_time"] = self._get_current_time() self._zonemgr_refresh_info[zone_name_class] = zone_info # Imposes some random jitters to avoid many zones need to do refresh at the same time. 
- zone_reload_jitter = float(zone_soa[7].split(" ")[RETRY_OFFSET]) - zone_reload_jitter = max(self._lowerbound_retry, zone_reload_jitter) - self._set_zone_timer(zone_name_class, zone_reload_jitter, self._reload_jitter * zone_reload_jitter) + zone_reload_time = max(self._lowerbound_retry, zone_reload_time) + self._set_zone_timer(zone_name_class, zone_reload_time, self._reload_jitter * zone_reload_time) def _zone_is_expired(self, zone_name_class): """Judge whether a zone is expired or not.""" @@ -265,7 +278,7 @@ class ZonemgrRefresh: except isc.cc.session.SessionTimeout: pass # for now we just ignore the failure except socket.error: - sys.stderr.write("[b10-zonemgr] Failed to send to module %s, the session has been closed." % module_name) + logger.error(ZONEMGR_SEND_FAIL, module_name) def _find_need_do_refresh_zone(self): """Find the first zone need do refresh, if no zone need @@ -274,7 +287,8 @@ class ZonemgrRefresh: zone_need_refresh = None for zone_name_class in self._zonemgr_refresh_info.keys(): zone_state = self._get_zone_state(zone_name_class) - # If hasn't received refresh response but are within refresh timeout, skip the zone + # If hasn't received refresh response but are within refresh + # timeout, skip the zone if (ZONE_REFRESHING == zone_state and (self._get_zone_refresh_timeout(zone_name_class) > self._get_current_time())): continue @@ -294,7 +308,7 @@ class ZonemgrRefresh: def _do_refresh(self, zone_name_class): """Do zone refresh.""" - log_msg("Do refresh for zone (%s, %s)." % zone_name_class) + logger.debug(DBG_ZONEMGR_BASIC, ZONEMGR_REFRESH_ZONE, zone_name_class[0], zone_name_class[1]) self._set_zone_state(zone_name_class, ZONE_REFRESHING) self._set_zone_refresh_timeout(zone_name_class, self._get_current_time() + self._max_transfer_timeout) notify_master = self._get_zone_notifier_master(zone_name_class) @@ -351,7 +365,7 @@ class ZonemgrRefresh: if e.args[0] == errno.EINTR: (rlist, wlist, xlist) = ([], [], []) else: - sys.stderr.write("[b10-zonemgr] Error with select(); %s\n" % e) + logger.error(ZONEMGR_SELECT_ERROR, e); break for fd in rlist: @@ -365,12 +379,14 @@ class ZonemgrRefresh: def run_timer(self, daemon=False): """ - Keep track of zone timers. Spawns and starts a thread. The thread object is returned. + Keep track of zone timers. Spawns and starts a thread. The thread object + is returned. You can stop it by calling shutdown(). """ # Small sanity check if self._running: + logger.error(ZONEMGR_TIMER_THREAD_RUNNING) raise RuntimeError("Trying to run the timers twice at the same time") # Prepare the launch @@ -395,6 +411,7 @@ class ZonemgrRefresh: called from a different thread. """ if not self._running: + logger.error(ZONEMGR_NO_TIMER_THREAD) raise RuntimeError("Trying to shutdown, but not running") # Ask the thread to stop @@ -409,12 +426,6 @@ class ZonemgrRefresh: def update_config_data(self, new_config): """ update ZonemgrRefresh config """ - # TODO: we probably want to store all this info in a nice - # class, so that we don't have to backup and restore every - # single value. 
- # TODO2: We also don't use get_default_value yet - backup = self._zonemgr_refresh_info.copy() - # Get a new value, but only if it is defined (commonly used below) # We don't use "value or default", because if value would be # 0, we would take default @@ -424,26 +435,21 @@ class ZonemgrRefresh: else: return default - # store the values so we can restore them if there is a problem - lowerbound_refresh_backup = self._lowerbound_refresh self._lowerbound_refresh = val_or_default( new_config.get('lowerbound_refresh'), self._lowerbound_refresh) - lowerbound_retry_backup = self._lowerbound_retry self._lowerbound_retry = val_or_default( new_config.get('lowerbound_retry'), self._lowerbound_retry) - max_transfer_timeout_backup = self._max_transfer_timeout self._max_transfer_timeout = val_or_default( new_config.get('max_transfer_timeout'), self._max_transfer_timeout) - refresh_jitter_backup = self._refresh_jitter self._refresh_jitter = val_or_default( new_config.get('refresh_jitter'), self._refresh_jitter) - reload_jitter_backup = self._reload_jitter self._reload_jitter = val_or_default( new_config.get('reload_jitter'), self._reload_jitter) + try: required = {} secondary_zones = new_config.get('secondary_zones') @@ -458,6 +464,7 @@ class ZonemgrRefresh: required[name_class] = True # Add it only if it isn't there already if not name_class in self._zonemgr_refresh_info: + # If we are not able to find it in database, log an warning self.zonemgr_add_zone(name_class) # Drop the zones that are no longer there # Do it in two phases, python doesn't like deleting while iterating @@ -467,14 +474,7 @@ class ZonemgrRefresh: to_drop.append(old_zone) for drop in to_drop: del self._zonemgr_refresh_info[drop] - # If we are not able to find it in database, restore the original except: - self._zonemgr_refresh_info = backup - self._lowerbound_refresh = lowerbound_refresh_backup - self._lowerbound_retry = lowerbound_retry_backup - self._max_transfer_timeout = max_transfer_timeout_backup - self._refresh_jitter = refresh_jitter_backup - self._reload_jitter = reload_jitter_backup raise class Zonemgr: @@ -515,8 +515,8 @@ class Zonemgr: return db_file def shutdown(self): - """Shutdown the zonemgr process. the thread which is keeping track of zone - timers should be terminated. + """Shutdown the zonemgr process. The thread which is keeping track of + zone timers should be terminated. """ self._zone_refresh.shutdown() @@ -556,17 +556,17 @@ class Zonemgr: # jitter should not be bigger than half of the original value if config_data.get('refresh_jitter') > 0.5: config_data['refresh_jitter'] = 0.5 - log_msg("[b10-zonemgr] refresh_jitter is too big, its value will " - "be set to 0.5") - + logger.warn(ZONEMGR_JITTER_TOO_BIG) def _parse_cmd_params(self, args, command): zone_name = args.get("zone_name") if not zone_name: + logger.error(ZONEMGR_NO_ZONE_NAME) raise ZonemgrException("zone name should be provided") zone_class = args.get("zone_class") if not zone_class: + logger.error(ZONEMGR_NO_ZONE_CLASS) raise ZonemgrException("zone class should be provided") if (command != ZONE_NOTIFY_COMMAND): @@ -574,6 +574,7 @@ class Zonemgr: master_str = args.get("master") if not master_str: + logger.error(ZONEMGR_NO_MASTER_ADDRESS) raise ZonemgrException("master address should be provided") return ((zone_name, zone_class), master_str) @@ -581,15 +582,16 @@ class Zonemgr: def command_handler(self, command, args): """Handle command receivd from command channel. 
- ZONE_NOTIFY_COMMAND is issued by Auth process; ZONE_XFRIN_SUCCESS_COMMAND - and ZONE_XFRIN_FAILED_COMMAND are issued by Xfrin process; shutdown is issued - by a user or Boss process. """ + ZONE_NOTIFY_COMMAND is issued by Auth process; + ZONE_XFRIN_SUCCESS_COMMAND and ZONE_XFRIN_FAILED_COMMAND are issued by + Xfrin process; + shutdown is issued by a user or Boss process. """ answer = create_answer(0) if command == ZONE_NOTIFY_COMMAND: """ Handle Auth notify command""" # master is the source sender of the notify message. zone_name_class, master = self._parse_cmd_params(args, command) - log_msg("Received notify command for zone (%s, %s)." % zone_name_class) + logger.debug(DBG_ZONEMGR_COMMAND, ZONEMGR_RECEIVE_NOTIFY, zone_name_class[0], zone_name_class[1]) with self._lock: self._zone_refresh.zone_handle_notify(zone_name_class, master) # Send notification to zonemgr timer thread @@ -598,6 +600,7 @@ class Zonemgr: elif command == ZONE_XFRIN_SUCCESS_COMMAND: """ Handle xfrin success command""" zone_name_class = self._parse_cmd_params(args, command) + logger.debug(DBG_ZONEMGR_COMMAND, ZONEMGR_RECEIVE_XFRIN_SUCCESS, zone_name_class[0], zone_name_class[1]) with self._lock: self._zone_refresh.zone_refresh_success(zone_name_class) self._master_socket.send(b" ")# make self._slave_socket readble @@ -605,14 +608,17 @@ class Zonemgr: elif command == ZONE_XFRIN_FAILED_COMMAND: """ Handle xfrin fail command""" zone_name_class = self._parse_cmd_params(args, command) + logger.debug(DBG_ZONEMGR_COMMAND, ZONEMGR_RECEIVE_XFRIN_FAILED, zone_name_class[0], zone_name_class[1]) with self._lock: self._zone_refresh.zone_refresh_fail(zone_name_class) self._master_socket.send(b" ")# make self._slave_socket readble elif command == "shutdown": + logger.debug(DBG_ZONEMGR_COMMAND, ZONEMGR_RECEIVE_SHUTDOWN) self.shutdown() else: + logger.warn(ZONEMGR_RECEIVE_UNKNOWN, str(command)) answer = create_answer(1, "Unknown command:" + str(command)) return answer @@ -639,25 +645,29 @@ def set_cmd_options(parser): if '__main__' == __name__: try: + logger.debug(DBG_START_SHUT, ZONEMGR_STARTING) parser = OptionParser() set_cmd_options(parser) (options, args) = parser.parse_args() - VERBOSE_MODE = options.verbose + if options.verbose: + logger.set_severity("DEBUG", 99) set_signal_handler() zonemgrd = Zonemgr() zonemgrd.run() except KeyboardInterrupt: - sys.stderr.write("[b10-zonemgr] exit zonemgr process\n") + logger.info(ZONEMGR_KEYBOARD_INTERRUPT) + except isc.cc.session.SessionError as e: - sys.stderr.write("[b10-zonemgr] Error creating zonemgr, " - "is the command channel daemon running?\n") + logger.error(ZONEMGR_SESSION_ERROR) + except isc.cc.session.SessionTimeout as e: - sys.stderr.write("[b10-zonemgr] Error creating zonemgr, " - "is the configuration manager running?\n") + logger.error(ZONEMGR_SESSION_TIMEOUT) + except isc.config.ModuleCCSessionError as e: - sys.stderr.write("[b10-zonemgr] exit zonemgr process: %s\n" % str(e)) + logger.error(ZONEMGR_CCSESSION_ERROR, str(e)) if zonemgrd and zonemgrd.running: zonemgrd.shutdown() + logger.debug(DBG_START_SHUT, ZONEMGR_SHUTDOWN) diff --git a/src/bin/zonemgr/zonemgr_messages.mes b/src/bin/zonemgr/zonemgr_messages.mes new file mode 100644 index 0000000000..8abec5d802 --- /dev/null +++ b/src/bin/zonemgr/zonemgr_messages.mes @@ -0,0 +1,145 @@ +# Copyright (C) 2011 Internet Systems Consortium, Inc. 
("ISC") +# +# Permission to use, copy, modify, and/or distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +# AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +# PERFORMANCE OF THIS SOFTWARE. + +# No namespace declaration - these constants go in the global namespace +# of the zonemgr messages python module. + +% ZONEMGR_CCSESSION_ERROR command channel session error: %1 +An error was encountered on the command channel. The message indicates +the nature of the error. + +% ZONEMGR_JITTER_TOO_BIG refresh_jitter is too big, setting to 0.5 +The value specified in the configuration for the refresh jitter is too large +so its value has been set to the maximum of 0.5. + +% ZONEMGR_KEYBOARD_INTERRUPT exiting zonemgr process as result of keyboard interrupt +An informational message output when the zone manager was being run at a +terminal and it was terminated via a keyboard interrupt signal. + +% ZONEMGR_LOAD_ZONE loading zone %1 (class %2) +This is a debug message indicating that the zone of the specified class +is being loaded. + +% ZONEMGR_NO_MASTER_ADDRESS internal BIND 10 command did not contain address of master +A command received by the zone manager from the Auth module did not +contain the address of the master server from which a NOTIFY message +was received. This may be due to an internal programming error; please +submit a bug report. + +% ZONEMGR_NO_SOA zone %1 (class %2) does not have an SOA record +When loading the named zone of the specified class the zone manager +discovered that the data did not contain an SOA record. The load has +been abandoned. + +% ZONEMGR_NO_TIMER_THREAD trying to stop zone timer thread but it is not running +An attempt was made to stop the timer thread (used to track when zones +should be refreshed) but it was not running. This may indicate an +internal program error. Please submit a bug report. + +% ZONEMGR_NO_ZONE_CLASS internal BIND 10 command did not contain class of zone +A command received by the zone manager from another BIND 10 module did +not contain the class of the zone on which the zone manager should act. +This may be due to an internal programming error; please submit a +bug report. + +% ZONEMGR_NO_ZONE_NAME internal BIND 10 command did not contain name of zone +A command received by the zone manager from another BIND 10 module did +not contain the name of the zone on which the zone manager should act. +This may be due to an internal programming error; please submit a +bug report. + +% ZONEMGR_RECEIVE_NOTIFY received NOTIFY command for zone %1 (class %2) +This is a debug message indicating that the zone manager has received a +NOTIFY command over the command channel. The command is sent by the Auth +process when it is acting as a slave server for the zone and causes the +zone manager to record the master server for the zone and start a timer; +when the timer expires, the master will be polled to see if it contains +new data. 
+ +% ZONEMGR_RECEIVE_SHUTDOWN received SHUTDOWN command +This is a debug message indicating that the zone manager has received +a SHUTDOWN command over the command channel from the Boss process. +It will act on this command and shut down. + +% ZONEMGR_RECEIVE_UNKNOWN received unknown command '%1' +This is a warning message indicating that the zone manager has received +the stated command over the command channel. The command is not known +to the zone manager and although the command is ignored, its receipt +may indicate an internal error. Please submit a bug report. + +% ZONEMGR_RECEIVE_XFRIN_FAILED received XFRIN FAILED command for zone %1 (class %2) +This is a debug message indicating that the zone manager has received +an XFRIN FAILED command over the command channel. The command is sent +by the Xfrin process when a transfer of zone data into the system has +failed, and causes the zone manager to schedule another transfer attempt. + +% ZONEMGR_RECEIVE_XFRIN_SUCCESS received XFRIN SUCCESS command for zone %1 (class %2) +This is a debug message indicating that the zone manager has received +an XFRIN SUCCESS command over the command channel. The command is sent +by the Xfrin process when the transfer of zone data into the system has +succeeded, and causes the data to be loaded and served by BIND 10. + +% ZONEMGR_REFRESH_ZONE refreshing zone %1 (class %2) +The zone manager is refreshing the named zone of the specified class +with updated information. + +% ZONEMGR_SELECT_ERROR error with select(): %1 +An attempt to wait for input from a socket failed. The failing operation +is a call to the operating system's select() function, which failed for +the given reason. + +% ZONEMGR_SEND_FAIL failed to send command to %1, session has been closed +The zone manager attempted to send a command to the named BIND 10 module, +but the send failed. The session between the modules has been closed. + +% ZONEMGR_SESSION_ERROR unable to establish session to command channel daemon +The zonemgr process was not able to be started because it could not +connect to the command channel daemon. The most usual cause of this +problem is that the daemon is not running. + +% ZONEMGR_SESSION_TIMEOUT timeout on session to command channel daemon +The zonemgr process was not able to be started because it timed out when +connecting to the command channel daemon. The most usual cause of this +problem is that the daemon is not running. + +% ZONEMGR_SHUTDOWN zone manager has shut down +A debug message, output when the zone manager has shut down completely. + +% ZONEMGR_STARTING zone manager starting +A debug message output when the zone manager starts up. + +% ZONEMGR_TIMER_THREAD_RUNNING trying to start timer thread but one is already running +This message is issued when an attempt is made to start the timer +thread (which keeps track of when zones need a refresh) but one is +already running. It indicates either an error in the program logic or +a problem with stopping a previous instance of the timer. Please submit +a bug report. + +% ZONEMGR_UNKNOWN_ZONE_FAIL zone %1 (class %2) is not known to the zone manager +An XFRIN operation has failed but the zone that was the subject of the +operation is not being managed by the zone manager. This may indicate +an error in the program (as the operation should not have been initiated +if this were the case). Please submit a bug report. 
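The "another transfer attempt" scheduled after an XFRIN FAILED report follows the retry timer logic changed earlier in this patch: with the new behaviour, a zone whose stored SOA is missing (the ZONEMGR_NO_SOA case above) falls back to a retry value of 0.0, which is then clamped to _lowerbound_retry and jittered. A minimal sketch of that computation, assuming _set_zone_timer() subtracts a random amount of up to the given jitter from the interval:

import random

RETRY_OFFSET = 4   # index of the retry field in the stored SOA rdata string

def next_retry_delay(zone_soa_rdata, lowerbound_retry, refresh_jitter):
    # Mirrors ZonemgrRefresh._set_zone_retry_timer(): missing SOA -> 0.0,
    # clamp to the configured lower bound, then apply the refresh jitter.
    if zone_soa_rdata is not None:
        retry = float(zone_soa_rdata.split(" ")[RETRY_OFFSET])
    else:
        retry = 0.0
    retry = max(lowerbound_retry, retry)
    return retry - random.uniform(0.0, refresh_jitter * retry)

For a zone without an SOA this yields a delay between _lowerbound_retry * (1 - refresh_jitter) and _lowerbound_retry, which matches the bounds checked by the new unit test near the top of zonemgr_test.py.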
+ +% ZONEMGR_UNKNOWN_ZONE_NOTIFIED notified zone %1 (class %2) is not known to the zone manager +A NOTIFY was received but the zone that was the subject of the operation +is not being managed by the zone manager. This may indicate an error +in the program (as the operation should not have been initiated if this +were the case). Please submit a bug report. + +% ZONEMGR_UNKNOWN_ZONE_SUCCESS zone %1 (class %2) is not known to the zone manager +An XFRIN operation has succeeded but the zone received is not being +managed by the zone manager. This may indicate an error in the program +(as the operation should not have been initiated if this were the case). +Please submit a bug report. diff --git a/src/cppcheck-suppress.lst b/src/cppcheck-suppress.lst index a4fea30c4c..8a4c7c18de 100644 --- a/src/cppcheck-suppress.lst +++ b/src/cppcheck-suppress.lst @@ -3,7 +3,7 @@ debug missingInclude // This is a template, and should be excluded from the check -unreadVariable:src/lib/dns/rdata/template.cc:60 +unreadVariable:src/lib/dns/rdata/template.cc:61 // Intentional self assignment tests. Suppress warning about them. selfAssignment:src/lib/dns/tests/name_unittest.cc:293 selfAssignment:src/lib/dns/tests/rdata_unittest.cc:228 diff --git a/src/lib/Makefile.am b/src/lib/Makefile.am index f4bef6b45a..04eee45f8d 100644 --- a/src/lib/Makefile.am +++ b/src/lib/Makefile.am @@ -1,3 +1,3 @@ -SUBDIRS = exceptions util log cryptolink dns cc config python xfr \ - bench asiolink asiodns nsas cache resolve testutils datasrc \ - acl server_common +SUBDIRS = exceptions util log cryptolink dns cc config acl xfr bench \ + asiolink asiodns nsas cache resolve testutils datasrc \ + server_common python diff --git a/src/lib/acl/Makefile.am b/src/lib/acl/Makefile.am index f211025343..92b7869742 100644 --- a/src/lib/acl/Makefile.am +++ b/src/lib/acl/Makefile.am @@ -19,7 +19,7 @@ libacl_la_LIBADD += $(top_builddir)/src/lib/util/libutil.la # DNS specialized one lib_LTLIBRARIES += libdnsacl.la -libdnsacl_la_SOURCES = dns.h dns.cc +libdnsacl_la_SOURCES = dns.h dns.cc dnsname_check.h libdnsacl_la_LIBADD = libacl.la libdnsacl_la_LIBADD += $(top_builddir)/src/lib/dns/libdns++.la diff --git a/src/lib/acl/acl.h b/src/lib/acl/acl.h index 998b2b0634..76039c9338 100644 --- a/src/lib/acl/acl.h +++ b/src/lib/acl/acl.h @@ -88,8 +88,11 @@ public: * the context against conditions and if it matches, returns the * action that belongs to the first matched entry or default action * if nothing matches. + * * \param context The thing that should be checked. It is directly * passed to the checks. + * + * \return The action for the ACL entry that first matches the context. */ const Action& execute(const Context& context) const { const typename Entries::const_iterator end(entries_.end()); diff --git a/src/lib/acl/dns.cc b/src/lib/acl/dns.cc index 16f1bf5dcb..b9cf91f7f8 100644 --- a/src/lib/acl/dns.cc +++ b/src/lib/acl/dns.cc @@ -12,20 +12,126 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. -#include "dns.h" +#include +#include +#include + +#include + +#include + +#include +#include + +#include + +#include +#include +#include +#include +#include + +using namespace std; +using boost::shared_ptr; +using namespace isc::dns; +using namespace isc::data; namespace isc { namespace acl { + +/// The specialization of \c IPCheck for access control with \c RequestContext. 
+/// +/// It returns \c true if the remote (source) IP address of the request +/// matches the expression encapsulated in the \c IPCheck, and returns +/// \c false if not. +template <> +bool +IPCheck::matches( + const dns::RequestContext& request) const +{ + return (compare(request.remote_address.getData(), + request.remote_address.getFamily())); +} + namespace dns { -Loader& -getLoader() { - static Loader* loader(NULL); - if (loader == NULL) { - loader = new Loader(REJECT); - // TODO: This is the place where we register default check creators - // like IP check, etc, once we have them. +/// The specialization of \c NameCheck for access control with +/// \c RequestContext. +/// +/// It returns \c true if the request contains a TSIG record and its key +/// (owner) name is equal to the name stored in the check; otherwise +/// it returns \c false. +template<> +bool +NameCheck::matches(const RequestContext& request) const { + return (request.tsig != NULL && request.tsig->getName() == name_); +} + +vector +internal::RequestCheckCreator::names() const { + // Probably we should eventually build this vector in a more + // sophisticated way. For now, it's simple enough to hardcode + // everything. + vector supported_names; + supported_names.push_back("from"); + supported_names.push_back("key"); + return (supported_names); +} + +shared_ptr +internal::RequestCheckCreator::create(const string& name, + ConstElementPtr definition, + // unused: + const acl::Loader&) +{ + if (!definition) { + isc_throw(LoaderError, + "NULL pointer is passed to RequestCheckCreator"); } + + if (name == "from") { + return (shared_ptr( + new internal::RequestIPCheck(definition->stringValue()))); + } else if (name == "key") { + return (shared_ptr( + new internal::RequestKeyCheck( + Name(definition->stringValue())))); + } else { + // This case shouldn't happen (normally) as it should have been + // rejected at the loader level. But we explicitly catch the case + // and throw an exception for that. + isc_throw(LoaderError, "Invalid check name for RequestCheck: " << + name); + } +} + +RequestLoader& +getRequestLoader() { + static RequestLoader* loader(NULL); + if (loader == NULL) { + // Creator registration may throw, so we first store the new loader + // in an auto pointer in order to provide the strong exception + // guarantee. + auto_ptr loader_ptr = + auto_ptr(new RequestLoader(REJECT)); + + // Register default check creator(s) + loader_ptr->registerCreator(shared_ptr( + new internal::RequestCheckCreator())); + loader_ptr->registerCreator( + shared_ptr >( + new NotCreator("NOT"))); + loader_ptr->registerCreator( + shared_ptr >( + new LogicCreator("ANY"))); + loader_ptr->registerCreator( + shared_ptr >( + new LogicCreator("ALL"))); + + // From this point there shouldn't be any exception thrown + loader = loader_ptr.release(); + } + return (*loader); } diff --git a/src/lib/acl/dns.h b/src/lib/acl/dns.h index 6f36e51893..426c9614c8 100644 --- a/src/lib/acl/dns.h +++ b/src/lib/acl/dns.h @@ -13,14 +13,23 @@ // PERFORMANCE OF THIS SOFTWARE. #ifndef ACL_DNS_H -#define ACL_DNS_H +#define ACL_DNS_H 1 -#include "loader.h" +#include +#include -#include -#include +#include + +#include + +#include +#include +#include namespace isc { +namespace dns { +class TSIGRecord; +} namespace acl { namespace dns { @@ -30,47 +39,74 @@ namespace dns { * This plays the role of Context of the generic template ACLs (in namespace * isc::acl). * - * It is simple structure holding just the bunch of information. 
Therefore - * the names don't end up with a slash, there are no methods so they can't be - * confused with local variables. + * It is a simple structure holding just the bunch of information. Therefore + * the names don't end up with an underscore; there are no methods so they + * can't be confused with local variables. * - * \todo Do we want a constructor to set this in a shorter manner? So we can - * call the ACLs directly? + * This structure is generally expected to be ephemeral and read-only: It + * would be constructed immediately before a particular ACL is checked + * and used only for the ACL match purposes. Due to this nature, and since + * ACL processing is often performance sensitive (typically it's performed + * against all incoming packets), the construction is designed to be + * lightweight: it tries to avoid expensive data copies or dynamic memory + * allocation as much as possible. Specifically, the constructor can + * take a pointer or reference to an object and keeps it as a reference + * (not making a local copy). This also means the caller is responsible for + * keeping the passed parameters valid while this structure is used. + * This should generally be reasonable as this structure is expected to be + * used only for a very short period as stated above. + * + * Based on the minimalist philosophy, the initial implementation only + * maintains the remote (source) IP address of the request and (optionally) + * the TSIG record included in the request. We may add more parameters of + * the request as we see the need for them. Possible additional parameters + * are the local (destination) IP address, the remote and local port numbers, + * various fields of the DNS request (e.g. a particular header flag value). */ struct RequestContext { - /// \brief The DNS message (payload). - isc::dns::ConstMessagePtr message; - /// \brief The remote IP address (eg. the client). - asiolink::IOAddress remote_address; - /// \brief The local IP address (ours, of the interface where we received). - asiolink::IOAddress local_address; - /// \brief The remote port. - uint16_t remote_port; - /// \brief The local port. - uint16_t local_port; - /** - * \brief Name of the TSIG key the message is signed with. - * - * This will be either the name of the TSIG key the message is signed with, - * or empty string, if the message is not signed. It is true we could get - * the information from the message itself, but because at the time when - * the ACL is checked, the signature has been verified already, so passing - * it around is probably cheaper. - * - * It is expected that messages with invalid signatures are handled before - * ACL. - */ - std::string tsig_key_name; + /// The constructor + /// + /// This is a trivial constructor that perform straightforward + /// initialization of the member variables from the given parameters. + /// + /// \exception None + /// + /// \parameter remote_address_param The remote IP address + /// \parameter tsig_param A valid pointer to the TSIG record included in + /// the request or NULL if the request doesn't contain a TSIG. + RequestContext(const IPAddress& remote_address_param, + const isc::dns::TSIGRecord* tsig_param) : + remote_address(remote_address_param), + tsig(tsig_param) + {} + + /// + /// \name Parameter variables + /// + /// These member variables must be immutable so that the integrity of + /// the structure is kept throughout its lifetime. The easiest way is + /// to declare the variable as const. 
If it's not possible for a + /// particular variable, it must be defined as private and accessible + /// only via an accessor method. + //@{ + /// \brief The remote IP address (eg. the client's IP address). + const IPAddress& remote_address; + + /// \brief The TSIG record included in the request message, if any. + /// + /// If the request doesn't include a TSIG, this member will be NULL. + const isc::dns::TSIGRecord* const tsig; + //@} }; /// \brief DNS based check. -typedef acl::Check Check; +typedef acl::Check RequestCheck; /// \brief DNS based compound check. typedef acl::CompoundCheck CompoundCheck; /// \brief DNS based ACL. -typedef acl::ACL ACL; +typedef acl::ACL RequestACL; /// \brief DNS based ACL loader. -typedef acl::Loader Loader; +typedef acl::Loader RequestLoader; /** * \brief Loader singleton access function. @@ -80,10 +116,39 @@ typedef acl::Loader Loader; * one is enough, this one will have registered default checks and it * is known one, so any plugins can registrer additional checks as well. */ -Loader& getLoader(); +RequestLoader& getRequestLoader(); -} -} -} +// The following is essentially private to the implementation and could +// be hidden in the implementation file. But it's visible via this header +// file for testing purposes. They are not supposed to be used by normal +// applications directly, and to signal the intent, they are given inside +// a separate namespace. +namespace internal { + +// Shortcut typedef +typedef isc::acl::IPCheck RequestIPCheck; +typedef isc::acl::dns::NameCheck RequestKeyCheck; + +class RequestCheckCreator : public acl::Loader::CheckCreator { +public: + virtual std::vector names() const; + + virtual boost::shared_ptr + create(const std::string& name, isc::data::ConstElementPtr definition, + const acl::Loader& loader); + + /// Until we are sure how the various rules work for this case, we won't + /// allow unexpected special interpretation for list definitions. + virtual bool allowListAbbreviation() const { return (false); } +}; +} // end of namespace "internal" + +} // end of namespace "dns" +} // end of namespace "acl" +} // end of namespace "isc" #endif + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/acl/dnsname_check.h b/src/lib/acl/dnsname_check.h new file mode 100644 index 0000000000..7498d99f64 --- /dev/null +++ b/src/lib/acl/dnsname_check.h @@ -0,0 +1,83 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __DNSNAME_CHECK_H +#define __DNSNAME_CHECK_H 1 + +#include + +#include + +namespace isc { +namespace acl { +namespace dns { + +/// ACL check for DNS names +/// +/// This class is intended to perform a match between a domain name +/// specified in an ACL and a given name. 
The primary usage of this class +/// is an ACL match for TSIG keys, where an ACL would contain a list of +/// acceptable key names and the \c match() method would compare the owner +/// name of a TSIG record against the specified names. +/// +/// This class could be used for other kinds of names such as the query name +/// of normal DNS queries. +/// +/// The class is templated on the type of a context structure passed to the +/// matches() method, and a template specialisation for that method must be +/// supplied for the class to be used. +template +class NameCheck : public Check { +public: + /// The constructor + /// + /// \exception std::bad_alloc Resource allocation fails in copying the + /// name + /// + /// \param name The domain name to be matched in \c matches(). + NameCheck(const isc::dns::Name& name) : name_(name) {} + + /// Destructor + virtual ~NameCheck() {} + + /// The check method + /// + /// Matches the passed argument to the condition stored here. Different + /// specializations must be provided for different argument types, and the + /// program will fail to compile if a required specialisation is not + /// provided. + /// + /// \param context Information to be matched + virtual bool matches(const Context& context) const; + + /// Returns the name specified on construction. + /// + /// This is mainly for testing purposes. + /// + /// \exception None + const isc::dns::Name& getName() const { return (name_); } + +private: + const isc::dns::Name name_; +}; + +} // namespace dns +} // namespace acl +} // namespace isc + +#endif // __DNSNAME_CHECK_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/acl/loader.h b/src/lib/acl/loader.h index 6b024c9b3b..f60b144d07 100644 --- a/src/lib/acl/loader.h +++ b/src/lib/acl/loader.h @@ -15,7 +15,8 @@ #ifndef ACL_LOADER_H #define ACL_LOADER_H -#include "acl.h" +#include +#include #include #include #include @@ -81,7 +82,7 @@ public: * or if it doesn't contain one of the accepted values. * * \param action The JSON representation of the action. It must be a string - * and contain one of "ACCEPT", "REJECT" or "DENY". + * and contain one of "ACCEPT", "REJECT" or "DROP. * \note We could define different names or add aliases if needed. */ BasicAction defaultActionLoader(data::ConstElementPtr action); @@ -100,21 +101,21 @@ BasicAction defaultActionLoader(data::ConstElementPtr action); * * An ACL definition looks like this: * \verbatim - * [ - * { - * "action": "ACCEPT", - * "match-type": - * }, - * { - * "action": "REJECT", - * "match-type": - * "another-match-type": [, ] -* }, -* { -* "action": "DROP" -* } - * ] - * \endverbatim + [ + { + "action": "ACCEPT", + "match-type": + }, + { + "action": "REJECT", + "match-type": , + "another-match-type": [, ] + }, + { + "action": "DROP" + } + ] + \endverbatim * * This is a list of elements. Each element must have an "action" * entry/keyword. That one specifies which action is returned if this @@ -297,16 +298,28 @@ public: * \brief Load an ACL. * * This parses an ACL list, creates the checks and actions of each element - * and returns it. It may throw LoaderError if it isn't a list or the - * "action" key is missing in some element. Also, no exceptions from - * loadCheck (therefore from whatever creator is used) and from the - * actionLoader passed to constructor are not caught. + * and returns it. + * + * No exceptions from \c loadCheck (therefore from whatever creator is + * used) and from the actionLoader passed to constructor are caught. 
+ * + * \exception InvalidParameter The given element is NULL (most likely a + * caller's bug) + * \exception LoaderError The given element isn't a list or the + * "action" key is missing in some element * * \param description The JSON list of ACL. + * + * \return The newly created ACL object */ boost::shared_ptr > load(const data::ConstElementPtr& description) const { + if (!description) { + isc_throw(isc::InvalidParameter, + "Null description is passed to ACL loader"); + } + // We first check it's a list, so we can use the list reference // (the list may be huge) if (description->getType() != data::Element::list) { @@ -460,3 +473,7 @@ private: #include "logic_check.h" #endif + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/acl/logic_check.h b/src/lib/acl/logic_check.h index 6e1c567a96..92441e8969 100644 --- a/src/lib/acl/logic_check.h +++ b/src/lib/acl/logic_check.h @@ -200,6 +200,86 @@ private: const std::string name_; }; +/** + * \brief The NOT operator for ACLs. + * + * This simply returns the negation of whatever returns the subexpression. + */ +template +class NotOperator : public CompoundCheck { +public: + /** + * \brief Constructor + * + * \param expr The subexpression to be negated by this NOT. + */ + NotOperator(const boost::shared_ptr >& expr) : + expr_(expr) + { } + /** + * \brief The list of subexpressions + * + * \return The vector will contain single value and it is the expression + * passed by constructor. + */ + virtual typename CompoundCheck::Checks getSubexpressions() const { + typename CompoundCheck::Checks result; + result.push_back(expr_.get()); + return (result); + } + /// \brief The matching function + virtual bool matches(const Context& context) const { + return (!expr_->matches(context)); + } +private: + /// \brief The subexpression + const boost::shared_ptr > expr_; +}; + +template +class NotCreator : public Loader::CheckCreator { +public: + /** + * \brief Constructor + * + * \param name The name of the NOT operator to be loaded as. + */ + NotCreator(const std::string& name) : + name_(name) + { } + /** + * \brief List of the names this loads + * + * \return Single-value vector containing the name passed to the + * constructor. + */ + virtual std::vector names() const { + std::vector result; + result.push_back(name_); + return (result); + } + /// \brief Create the check. + virtual boost::shared_ptr > create(const std::string&, + data::ConstElementPtr + definition, + const Loader& loader) + { + return (boost::shared_ptr >(new NotOperator( + loader.loadCheck(definition)))); + } + /** + * \brief Or-abbreviated form. + * + * This returns false. In theory, the NOT operator could be used with + * the abbreviated form, but it would be confusing. Such syntax is + * therefore explicitly forbidden. 
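As an illustrative aside (an assumed example, not text from the patch): once a NotCreator is registered under the name "NOT", an ACL element can negate a nested check. The surrounding element syntax follows the ACL format documented in loader.h above, and the "from" check is assumed to be registered as well, as it is for the DNS request loader:

    [{"action": "REJECT", "NOT": {"from": "192.0.2.0/24"}},
     {"action": "ACCEPT"}]

This rejects anything that does not match the nested subexpression (here, anything outside 192.0.2.0/24) and accepts the rest.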
+ */ + virtual bool allowListAbbreviation() const { return (false); } +public: + const std::string name_; +}; + } } diff --git a/src/lib/acl/tests/Makefile.am b/src/lib/acl/tests/Makefile.am index 03b08bbbc5..636951199b 100644 --- a/src/lib/acl/tests/Makefile.am +++ b/src/lib/acl/tests/Makefile.am @@ -16,10 +16,12 @@ run_unittests_SOURCES += acl_test.cc run_unittests_SOURCES += check_test.cc run_unittests_SOURCES += dns_test.cc run_unittests_SOURCES += ip_check_unittest.cc +run_unittests_SOURCES += dnsname_check_unittest.cc run_unittests_SOURCES += loader_test.cc run_unittests_SOURCES += logcheck.h run_unittests_SOURCES += creators.h run_unittests_SOURCES += logic_check_test.cc +run_unittests_SOURCES += sockaddr.h run_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) run_unittests_LDFLAGS = $(AM_LDFLAGS) $(GTEST_LDFLAGS) @@ -29,6 +31,7 @@ run_unittests_LDADD += $(top_builddir)/src/lib/util/unittests/libutil_unittests. run_unittests_LDADD += $(top_builddir)/src/lib/acl/libacl.la run_unittests_LDADD += $(top_builddir)/src/lib/util/libutil.la run_unittests_LDADD += $(top_builddir)/src/lib/cc/libcc.la +run_unittests_LDADD += $(top_builddir)/src/lib/dns/libdns++.la run_unittests_LDADD += $(top_builddir)/src/lib/log/liblog.la run_unittests_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la run_unittests_LDADD += $(top_builddir)/src/lib/acl/libdnsacl.la diff --git a/src/lib/acl/tests/dns_test.cc b/src/lib/acl/tests/dns_test.cc index e5e0f3a18a..b3ddbf43c0 100644 --- a/src/lib/acl/tests/dns_test.cc +++ b/src/lib/acl/tests/dns_test.cc @@ -12,24 +12,260 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. +#include + +#include +#include +#include + +#include +#include + +#include + +#include +#include +#include +#include + +#include #include +#include +#include +#include + +#include "sockaddr.h" + #include +using namespace std; +using boost::scoped_ptr; +using namespace isc::dns; +using namespace isc::dns::rdata; +using namespace isc::data; +using namespace isc::acl; using namespace isc::acl::dns; +using isc::acl::LoaderError; namespace { -// Tests that the getLoader actually returns something, returns the same every -// time and the returned value can be used to anything. It is not much of a -// test, but the getLoader is not much of a function. -TEST(DNSACL, getLoader) { - Loader* l(&getLoader()); +TEST(DNSACL, getRequestLoader) { + dns::RequestLoader* l(&getRequestLoader()); ASSERT_TRUE(l != NULL); - EXPECT_EQ(l, &getLoader()); - EXPECT_NO_THROW(l->load(isc::data::Element::fromJSON( - "[{\"action\": \"DROP\"}]"))); - // TODO Test that the things we should register by default, like IP based - // check, are loaded. + EXPECT_EQ(l, &getRequestLoader()); + EXPECT_NO_THROW(l->load(Element::fromJSON("[{\"action\": \"DROP\"}]"))); + + // Confirm it can load the ACl syntax acceptable to a default creator. + // Tests to see whether the loaded rules work correctly will be in + // other dedicated tests below. 
+ EXPECT_NO_THROW(l->load(Element::fromJSON("[{\"action\": \"DROP\"," + " \"from\": \"192.0.2.1\"}]"))); +} + +class RequestCheckCreatorTest : public ::testing::Test { +protected: + dns::internal::RequestCheckCreator creator_; + + typedef boost::shared_ptr ConstRequestCheckPtr; + ConstRequestCheckPtr check_; +}; + +TEST_F(RequestCheckCreatorTest, names) { + const vector names = creator_.names(); + EXPECT_EQ(2, names.size()); + EXPECT_TRUE(find(names.begin(), names.end(), "from") != names.end()); + EXPECT_TRUE(find(names.begin(), names.end(), "key") != names.end()); +} + +TEST_F(RequestCheckCreatorTest, allowListAbbreviation) { + EXPECT_FALSE(creator_.allowListAbbreviation()); +} + +// The following two tests check the creator for the form of +// 'from: "IP prefix"'. We don't test many variants of prefixes, which +// are done in the tests for IPCheck. +TEST_F(RequestCheckCreatorTest, createIPv4Check) { + check_ = creator_.create("from", Element::fromJSON("\"192.0.2.1\""), + getRequestLoader()); + const dns::internal::RequestIPCheck& ipcheck_ = + dynamic_cast(*check_); + EXPECT_EQ(AF_INET, ipcheck_.getFamily()); + EXPECT_EQ(32, ipcheck_.getPrefixlen()); + const vector check_address(ipcheck_.getAddress()); + ASSERT_EQ(4, check_address.size()); + const uint8_t expected_address[] = { 192, 0, 2, 1 }; + EXPECT_TRUE(equal(check_address.begin(), check_address.end(), + expected_address)); +} + +TEST_F(RequestCheckCreatorTest, createIPv6Check) { + check_ = creator_.create("from", + Element::fromJSON("\"2001:db8::5300/120\""), + getRequestLoader()); + const dns::internal::RequestIPCheck& ipcheck = + dynamic_cast(*check_); + EXPECT_EQ(AF_INET6, ipcheck.getFamily()); + EXPECT_EQ(120, ipcheck.getPrefixlen()); + const vector check_address(ipcheck.getAddress()); + ASSERT_EQ(16, check_address.size()); + const uint8_t expected_address[] = { 0x20, 0x01, 0x0d, 0xb8, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x53, 0x00 }; + EXPECT_TRUE(equal(check_address.begin(), check_address.end(), + expected_address)); +} + +TEST_F(RequestCheckCreatorTest, createTSIGKeyCheck) { + check_ = creator_.create("key", Element::fromJSON("\"key.example.com\""), + getRequestLoader()); + const dns::internal::RequestKeyCheck& keycheck = + dynamic_cast(*check_); + EXPECT_EQ(Name("key.example.com"), keycheck.getName()); +} + +TEST_F(RequestCheckCreatorTest, badCreate) { + // Invalid name + EXPECT_THROW(creator_.create("bad", Element::fromJSON("\"192.0.2.1\""), + getRequestLoader()), LoaderError); + + // Invalid type of parameter + EXPECT_THROW(creator_.create("from", Element::fromJSON("4"), + getRequestLoader()), + isc::data::TypeError); + EXPECT_THROW(creator_.create("from", Element::fromJSON("[]"), + getRequestLoader()), + isc::data::TypeError); + EXPECT_THROW(creator_.create("key", Element::fromJSON("1"), + getRequestLoader()), + isc::data::TypeError); + EXPECT_THROW(creator_.create("key", Element::fromJSON("{}"), + getRequestLoader()), + isc::data::TypeError); + + // Syntax error for IPCheck + EXPECT_THROW(creator_.create("from", Element::fromJSON("\"bad\""), + getRequestLoader()), + isc::InvalidParameter); + + // Syntax error for Name (key) Check + EXPECT_THROW(creator_.create("key", Element::fromJSON("\"bad..name\""), + getRequestLoader()), + EmptyLabel); + + // NULL pointer + EXPECT_THROW(creator_.create("from", ConstElementPtr(), getRequestLoader()), + LoaderError); +} + +class RequestCheckTest : public ::testing::Test { +protected: + typedef boost::shared_ptr ConstRequestCheckPtr; + + // A helper shortcut 
to create a single IP check for the given prefix. + ConstRequestCheckPtr createIPCheck(const string& prefix) { + return (creator_.create("from", Element::fromJSON( + string("\"") + prefix + string("\"")), + getRequestLoader())); + } + + // A helper shortcut to create a single Name (key) check for the given + // name. + ConstRequestCheckPtr createKeyCheck(const string& key_name) { + return (creator_.create("key", Element::fromJSON( + string("\"") + key_name + string("\"")), + getRequestLoader())); + } + + // create a one time request context for a specific test. Note that + // getSockaddr() uses a static storage, so it cannot be called more than + // once in a single test. + const dns::RequestContext& getRequest4(const TSIGRecord* tsig = NULL) { + ipaddr.reset(new IPAddress(tests::getSockAddr("192.0.2.1"))); + request.reset(new dns::RequestContext(*ipaddr, tsig)); + return (*request); + } + const dns::RequestContext& getRequest6(const TSIGRecord* tsig = NULL) { + ipaddr.reset(new IPAddress(tests::getSockAddr("2001:db8::1"))); + request.reset(new dns::RequestContext(*ipaddr, tsig)); + return (*request); + } + + // create a one time TSIG Record for a specific test. The only parameter + // of the record that matters is the key name; others are hardcoded with + // arbitrarily chosen values. + const TSIGRecord* getTSIGRecord(const string& key_name) { + tsig_rdata.reset(new any::TSIG(TSIGKey::HMACMD5_NAME(), 0, 0, 0, NULL, + 0, 0, 0, NULL)); + tsig.reset(new TSIGRecord(Name(key_name), *tsig_rdata)); + return (tsig.get()); + } + +private: + scoped_ptr ipaddr; + scoped_ptr request; + scoped_ptr tsig_rdata; + scoped_ptr tsig; + dns::internal::RequestCheckCreator creator_; +}; + +TEST_F(RequestCheckTest, checkIPv4) { + // Exact match + EXPECT_TRUE(createIPCheck("192.0.2.1")->matches(getRequest4())); + // Exact match (negative) + EXPECT_FALSE(createIPCheck("192.0.2.53")->matches(getRequest4())); + // Prefix match + EXPECT_TRUE(createIPCheck("192.0.2.0/24")->matches(getRequest4())); + // Prefix match (negative) + EXPECT_FALSE(createIPCheck("192.0.1.0/24")->matches(getRequest4())); + // Address family mismatch (the first 4 bytes of the IPv6 address has the + // same binary representation as the client's IPv4 address, which + // shouldn't confuse the match logic) + EXPECT_FALSE(createIPCheck("c000:0201::")->matches(getRequest4())); +} + +TEST_F(RequestCheckTest, checkIPv6) { + // The following are a set of tests of the same concept as checkIPv4 + EXPECT_TRUE(createIPCheck("2001:db8::1")->matches(getRequest6())); + EXPECT_FALSE(createIPCheck("2001:db8::53")->matches(getRequest6())); + EXPECT_TRUE(createIPCheck("2001:db8::/64")->matches(getRequest6())); + EXPECT_FALSE(createIPCheck("2001:db8:1::/64")->matches(getRequest6())); + EXPECT_FALSE(createIPCheck("32.1.13.184")->matches(getRequest6())); +} + +TEST_F(RequestCheckTest, checkTSIGKey) { + EXPECT_TRUE(createKeyCheck("key.example.com")->matches( + getRequest4(getTSIGRecord("key.example.com")))); + EXPECT_FALSE(createKeyCheck("key.example.com")->matches( + getRequest4(getTSIGRecord("badkey.example.com")))); + + // Same for IPv6 (which shouldn't matter) + EXPECT_TRUE(createKeyCheck("key.example.com")->matches( + getRequest6(getTSIGRecord("key.example.com")))); + EXPECT_FALSE(createKeyCheck("key.example.com")->matches( + getRequest6(getTSIGRecord("badkey.example.com")))); + + // by default the test request doesn't have a TSIG key, which shouldn't + // match any key checks. 
+ EXPECT_FALSE(createKeyCheck("key.example.com")->matches(getRequest4())); + EXPECT_FALSE(createKeyCheck("key.example.com")->matches(getRequest6())); +} + +// The following tests test only the creators are registered, they are tested +// elsewhere + +TEST(DNSACL, notLoad) { + EXPECT_NO_THROW(getRequestLoader().loadCheck(isc::data::Element::fromJSON( + "{\"NOT\": {\"from\": \"192.0.2.1\"}}"))); +} + +TEST(DNSACL, allLoad) { + EXPECT_NO_THROW(getRequestLoader().loadCheck(isc::data::Element::fromJSON( + "{\"ALL\": [{\"from\": \"192.0.2.1\"}]}"))); +} + +TEST(DNSACL, anyLoad) { + EXPECT_NO_THROW(getRequestLoader().loadCheck(isc::data::Element::fromJSON( + "{\"ANY\": [{\"from\": \"192.0.2.1\"}]}"))); } } diff --git a/src/lib/acl/tests/dnsname_check_unittest.cc b/src/lib/acl/tests/dnsname_check_unittest.cc new file mode 100644 index 0000000000..95b531460f --- /dev/null +++ b/src/lib/acl/tests/dnsname_check_unittest.cc @@ -0,0 +1,59 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include + +#include + +using namespace isc::dns; +using namespace isc::acl::dns; + +// Provide a specialization of the DNSNameCheck::matches() method. +namespace isc { +namespace acl { +namespace dns { +template <> +bool NameCheck::matches(const Name& name) const { + return (name_ == name); +} +} // namespace dns +} // namespace acl +} // namespace isc + +namespace { +TEST(DNSNameCheck, construct) { + EXPECT_EQ(Name("example.com"), + NameCheck(Name("example.com")).getName()); + + // Construct the same check with an explicit trailing dot. Should result + // in the same result. + EXPECT_EQ(Name("example.com"), + NameCheck(Name("example.com.")).getName()); +} + +TEST(DNSNameCheck, match) { + NameCheck check(Name("example.com")); + EXPECT_TRUE(check.matches(Name("example.com"))); + EXPECT_FALSE(check.matches(Name("example.org"))); + + // comparison is case insensitive + EXPECT_TRUE(check.matches(Name("EXAMPLE.COM"))); + + // this is exact match. 
so super/sub domains don't match + EXPECT_FALSE(check.matches(Name("com"))); + EXPECT_FALSE(check.matches(Name("www.example.com"))); +} +} // Unnamed namespace diff --git a/src/lib/acl/tests/ip_check_unittest.cc b/src/lib/acl/tests/ip_check_unittest.cc index fb249788f5..8b8c49808c 100644 --- a/src/lib/acl/tests/ip_check_unittest.cc +++ b/src/lib/acl/tests/ip_check_unittest.cc @@ -14,12 +14,13 @@ #include #include -#include #include #include #include +#include "sockaddr.h" + using namespace isc::acl; using namespace isc::acl::internal; using namespace std; @@ -159,32 +160,8 @@ TEST(IPFunctionCheck, SplitIPAddress) { EXPECT_THROW(splitIPAddress(" 1/ "), isc::InvalidParameter); } -const struct sockaddr& -getSockAddr(const char* const addr) { - struct addrinfo hints, *res; - memset(&hints, 0, sizeof(hints)); - hints.ai_family = AF_UNSPEC; - hints.ai_socktype = SOCK_STREAM; - hints.ai_flags = AI_NUMERICHOST; - - if (getaddrinfo(addr, NULL, &hints, &res) == 0) { - static struct sockaddr_storage ss; - void* ss_ptr = &ss; - memcpy(ss_ptr, res->ai_addr, res->ai_addrlen); - freeaddrinfo(res); - return (*static_cast(ss_ptr)); - } - - // We don't expect getaddrinfo to fail for our tests. But if that - // ever happens we return a dummy value that would make subsequent test - // fail. - static struct sockaddr sa_dummy; - sa_dummy.sa_family = AF_UNSPEC; - return (sa_dummy); -} - TEST(IPAddress, constructIPv4) { - IPAddress ipaddr(getSockAddr("192.0.2.1")); + IPAddress ipaddr(tests::getSockAddr("192.0.2.1")); const char expected_data[4] = { 192, 0, 2, 1 }; EXPECT_EQ(AF_INET, ipaddr.getFamily()); EXPECT_EQ(4, ipaddr.getLength()); @@ -192,7 +169,7 @@ TEST(IPAddress, constructIPv4) { } TEST(IPAddress, constructIPv6) { - IPAddress ipaddr(getSockAddr("2001:db8:1234:abcd::53")); + IPAddress ipaddr(tests::getSockAddr("2001:db8:1234:abcd::53")); const char expected_data[16] = { 0x20, 0x01, 0x0d, 0xb8, 0x12, 0x34, 0xab, 0xcd, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x53 }; diff --git a/src/lib/acl/tests/loader_test.cc b/src/lib/acl/tests/loader_test.cc index 4415081f7f..1705c0a6f8 100644 --- a/src/lib/acl/tests/loader_test.cc +++ b/src/lib/acl/tests/loader_test.cc @@ -13,6 +13,7 @@ // PERFORMANCE OF THIS SOFTWARE. 
#include "creators.h" +#include #include #include #include @@ -373,7 +374,10 @@ TEST_F(LoaderTest, ACLPropagate) { Element::fromJSON( "[{\"action\": \"ACCEPT\", \"throw\": 1}]")), TestCreatorError); +} +TEST_F(LoaderTest, nullDescription) { + EXPECT_THROW(loader_.load(ConstElementPtr()), isc::InvalidParameter); } } diff --git a/src/lib/acl/tests/logic_check_test.cc b/src/lib/acl/tests/logic_check_test.cc index eec6d51b8a..1c80277a2b 100644 --- a/src/lib/acl/tests/logic_check_test.cc +++ b/src/lib/acl/tests/logic_check_test.cc @@ -93,6 +93,7 @@ public: LogicCreator("ALL"))); loader_.registerCreator(CreatorPtr(new ThrowCreator)); loader_.registerCreator(CreatorPtr(new LogCreator)); + loader_.registerCreator(CreatorPtr(new NotCreator("NOT"))); } // To mark which parts of the check did run Log log_; @@ -242,4 +243,49 @@ TEST_F(LogicCreatorTest, nested) { log_.checkFirst(2); } +void notTest(bool value) { + NotOperator notOp(shared_ptr >(new ConstCheck(value, 0))); + Log log; + // It returns negated value + EXPECT_EQ(!value, notOp.matches(log)); + // And runs the only one thing there + log.checkFirst(1); + // Check the getSubexpressions does sane things + ASSERT_EQ(1, notOp.getSubexpressions().size()); + EXPECT_EQ(value, notOp.getSubexpressions()[0]->matches(log)); +} + +TEST(Not, trueValue) { + notTest(true); +} + +TEST(Not, falseValue) { + notTest(false); +} + +TEST_F(LogicCreatorTest, notInvalid) { + EXPECT_THROW(loader_.loadCheck(Element::fromJSON("{\"NOT\": null}")), + LoaderError); + EXPECT_THROW(loader_.loadCheck(Element::fromJSON("{\"NOT\": \"hello\"}")), + LoaderError); + EXPECT_THROW(loader_.loadCheck(Element::fromJSON("{\"NOT\": true}")), + LoaderError); + EXPECT_THROW(loader_.loadCheck(Element::fromJSON("{\"NOT\": 42}")), + LoaderError); + EXPECT_THROW(loader_.loadCheck(Element::fromJSON("{\"NOT\": []}")), + LoaderError); + EXPECT_THROW(loader_.loadCheck(Element::fromJSON("{\"NOT\": [{" + "\"logcheck\": [0, true]" + "}]}")), + LoaderError); +} + +TEST_F(LogicCreatorTest, notValid) { + shared_ptr > notOp(load >("{\"NOT\":" + " {\"logcheck\":" + " [0, true]}}")); + EXPECT_FALSE(notOp->matches(log_)); + log_.checkFirst(1); +} + } diff --git a/src/lib/acl/tests/sockaddr.h b/src/lib/acl/tests/sockaddr.h new file mode 100644 index 0000000000..bd304516ad --- /dev/null +++ b/src/lib/acl/tests/sockaddr.h @@ -0,0 +1,69 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __ACL_TEST_SOCKADDR_H +#define __ACL_TEST_SOCKADDR_H 1 + +#include +#include +#include +#include + +#include + +namespace isc { +namespace acl { +namespace tests { + +// This is a helper function that returns a sockaddr for the given textual +// IP address. Note that "inline" is crucial because this function is defined +// in a header file included in multiple .cc files. 
Without inline it would +// produce an external linkage and cause troubles at link time. +// +// Note that this function uses a static storage for the return value. +// So if it's called more than once in a singe context (e.g., in the same +// EXPECT_xx()), it's unlikely to work as expected. +inline const struct sockaddr& +getSockAddr(const char* const addr) { + struct addrinfo hints, *res; + memset(&hints, 0, sizeof(hints)); + hints.ai_family = AF_UNSPEC; + hints.ai_socktype = SOCK_STREAM; + hints.ai_flags = AI_NUMERICHOST; + + if (getaddrinfo(addr, NULL, &hints, &res) == 0) { + static struct sockaddr_storage ss; + void* ss_ptr = &ss; + memcpy(ss_ptr, res->ai_addr, res->ai_addrlen); + freeaddrinfo(res); + return (*static_cast(ss_ptr)); + } + + // We don't expect getaddrinfo to fail for our tests. But if that + // ever happens we throw an exception to make sure the corresponding test + // fail (either due to a failure of *_NO_THROW or the uncaught exception). + isc_throw(Unexpected, + "failed to convert textual IP address to sockaddr for " << + addr); +} + +} // end of namespace "tests" +} // end of namespace "acl" +} // end of namespace "isc" + +#endif // __ACL_TEST_SOCKADDR_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/asiodns/asiodns_messages.mes b/src/lib/asiodns/asiodns_messages.mes index 3e11ede159..feb75d44fc 100644 --- a/src/lib/asiodns/asiodns_messages.mes +++ b/src/lib/asiodns/asiodns_messages.mes @@ -26,13 +26,13 @@ enabled. % ASIODNS_OPEN_SOCKET error %1 opening %2 socket to %3(%4) The asynchronous I/O code encountered an error when trying to open a socket of the specified protocol in order to send a message to the target address. -The number of the system error that cause the problem is given in the +The number of the system error that caused the problem is given in the message. % ASIODNS_READ_DATA error %1 reading %2 data from %3(%4) The asynchronous I/O code encountered an error when trying to read data from the specified address on the given protocol. The number of the system -error that cause the problem is given in the message. +error that caused the problem is given in the message. % ASIODNS_READ_TIMEOUT receive timeout while waiting for data from %1(%2) An upstream fetch from the specified address timed out. This may happen for @@ -41,9 +41,9 @@ or a problem on the network. The message will only appear if debug is enabled. % ASIODNS_SEND_DATA error %1 sending data using %2 to %3(%4) -The asynchronous I/O code encountered an error when trying send data to -the specified address on the given protocol. The the number of the system -error that cause the problem is given in the message. +The asynchronous I/O code encountered an error when trying to send data to +the specified address on the given protocol. The number of the system +error that caused the problem is given in the message. 
% ASIODNS_UNKNOWN_ORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3) An internal consistency check on the origin of a message from the diff --git a/src/lib/asiodns/tests/run_unittests.cc b/src/lib/asiodns/tests/run_unittests.cc index df77368f43..5cacdaf1c1 100644 --- a/src/lib/asiodns/tests/run_unittests.cc +++ b/src/lib/asiodns/tests/run_unittests.cc @@ -15,14 +15,14 @@ #include #include -#include +#include #include int main(int argc, char* argv[]) { ::testing::InitGoogleTest(&argc, argv); // Initialize Google test - isc::log::LoggerManager::init("unittest"); // Set a root logger name + isc::log::initLogger(); // Initialize logging isc::UnitTestUtil::addDataPath(TEST_DATA_DIR); // Add location of test data return (isc::util::unittests::run_all()); diff --git a/src/lib/asiolink/README b/src/lib/asiolink/README index 66091b1c2b..b9e38f98b4 100644 --- a/src/lib/asiolink/README +++ b/src/lib/asiolink/README @@ -20,3 +20,10 @@ Some of the classes defined here--for example, IOSocket, IOEndpoint, and IOAddress--are to be used by BIND 10 modules as wrappers around ASIO-specific classes. + +Logging +------- + +At this point, nothing is logged by this low-level library. We may +revisit that in the future, if we find suitable messages to log, but +right now there are also no loggers initialized or called. diff --git a/src/lib/asiolink/tests/interval_timer_unittest.cc b/src/lib/asiolink/tests/interval_timer_unittest.cc index 8e8ef81101..420cb90bb0 100644 --- a/src/lib/asiolink/tests/interval_timer_unittest.cc +++ b/src/lib/asiolink/tests/interval_timer_unittest.cc @@ -28,7 +28,7 @@ const boost::posix_time::time_duration TIMER_MARGIN_MSEC = using namespace isc::asiolink; -// This fixture is for testing IntervalTimer. Some callback functors are +// This fixture is for testing IntervalTimer. Some callback functors are // registered as callback function of the timer to test if they are called // or not. class IntervalTimerTest : public ::testing::Test { @@ -50,7 +50,9 @@ protected: }; class TimerCallBackCounter : public std::unary_function { public: - TimerCallBackCounter(IntervalTimerTest* test_obj) : test_obj_(test_obj) { + TimerCallBackCounter(IntervalTimerTest* test_obj) : + test_obj_(test_obj) + { counter_ = 0; } void operator()() { @@ -164,24 +166,20 @@ TEST_F(IntervalTimerTest, startIntervalTimer) { itimer.setup(TimerCallBack(this), 100); EXPECT_EQ(100, itimer.getInterval()); io_service_.run(); - // reaches here after timer expired + // Control reaches here after io_service_ was stopped by TimerCallBack. + // delta: difference between elapsed time and 100 milliseconds. boost::posix_time::time_duration test_runtime = boost::posix_time::microsec_clock::universal_time() - start; - EXPECT_FALSE(test_runtime.is_negative()) << - "test duration " << test_runtime << + EXPECT_FALSE(test_runtime.is_negative()) << + "test duration " << test_runtime << " negative - clock skew?"; - boost::posix_time::time_duration delta = - test_runtime - boost::posix_time::milliseconds(100); - if (delta.is_negative()) { - delta.invert_sign(); - } - // expect TimerCallBack is called; timer_called_ is true + // Expect TimerCallBack is called; timer_called_ is true EXPECT_TRUE(timer_called_); - // expect interval is 100 milliseconds +/- TIMER_MARGIN_MSEC. - EXPECT_TRUE(delta < TIMER_MARGIN_MSEC) << - "delta " << delta.total_milliseconds() << "msec " << - ">= " << TIMER_MARGIN_MSEC.total_milliseconds(); + // Expect test_runtime is 100 milliseconds or longer. 
+ EXPECT_TRUE(test_runtime > boost::posix_time::milliseconds(100)) << + "test runtime " << test_runtime.total_milliseconds() << + "msec " << ">= 100"; } TEST_F(IntervalTimerTest, destructIntervalTimer) { @@ -244,7 +242,7 @@ TEST_F(IntervalTimerTest, cancel) { } TEST_F(IntervalTimerTest, overwriteIntervalTimer) { - // Calling setup() multiple times updates call back function and interval. + // Call setup() multiple times to update call back function and interval. // // There are two timers: // itimer (A) @@ -266,7 +264,7 @@ TEST_F(IntervalTimerTest, overwriteIntervalTimer) { // 0 100 200 300 400 500 600 700 800 (ms) // (A) i-------------+----C----s // ^ ^stop io_service - // |change call back function + // |change call back function and interval // (B) i------------------+-------------------S // ^(stop io_service on fail) // @@ -279,30 +277,11 @@ TEST_F(IntervalTimerTest, overwriteIntervalTimer) { itimer.setup(TimerCallBackCounter(this), 300); itimer_overwriter.setup(TimerCallBackOverwriter(this, itimer), 400); io_service_.run(); - // reaches here after timer expired - // if interval is updated, it takes - // 400 milliseconds for TimerCallBackOverwriter - // + 100 milliseconds for TimerCallBack (stop) - // = 500 milliseconds. - // otherwise (test fails), it takes - // 400 milliseconds for TimerCallBackOverwriter - // + 400 milliseconds for TimerCallBackOverwriter (stop) - // = 800 milliseconds. - // delta: difference between elapsed time and 400 + 100 milliseconds - boost::posix_time::time_duration test_runtime = - boost::posix_time::microsec_clock::universal_time() - start; - EXPECT_FALSE(test_runtime.is_negative()) << - "test duration " << test_runtime << - " negative - clock skew?"; - boost::posix_time::time_duration delta = - test_runtime - boost::posix_time::milliseconds(400 + 100); - if (delta.is_negative()) { - delta.invert_sign(); - } - // expect callback function is updated: TimerCallBack is called + // Control reaches here after io_service_ was stopped by + // TimerCallBackCounter or TimerCallBackOverwriter. 
+ + // Expect callback function is updated: TimerCallBack is called EXPECT_TRUE(timer_called_); - // expect interval is updated - EXPECT_TRUE(delta < TIMER_MARGIN_MSEC) << - "delta " << delta.total_milliseconds() << " msec " << - ">= " << TIMER_MARGIN_MSEC.total_milliseconds(); + // Expect interval is updated: return value of getInterval() is updated + EXPECT_EQ(itimer.getInterval(), 100); } diff --git a/src/lib/asiolink/tests/io_endpoint_unittest.cc b/src/lib/asiolink/tests/io_endpoint_unittest.cc index f0279d18af..c7283ec80d 100644 --- a/src/lib/asiolink/tests/io_endpoint_unittest.cc +++ b/src/lib/asiolink/tests/io_endpoint_unittest.cc @@ -219,7 +219,7 @@ sockAddrMatch(const struct sockaddr& actual_sa, res->ai_addr->sa_len = actual_sa.sa_len; #endif EXPECT_EQ(0, memcmp(res->ai_addr, &actual_sa, res->ai_addrlen)); - free(res); + freeaddrinfo(res); } TEST(IOEndpointTest, getSockAddr) { diff --git a/src/lib/bench/tests/Makefile.am b/src/lib/bench/tests/Makefile.am index 3ebdf29a3e..3f8a67863b 100644 --- a/src/lib/bench/tests/Makefile.am +++ b/src/lib/bench/tests/Makefile.am @@ -16,6 +16,7 @@ run_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) run_unittests_LDFLAGS = $(AM_LDFLAGS) $(GTEST_LDFLAGS) run_unittests_LDADD = $(top_builddir)/src/lib/bench/libbench.la run_unittests_LDADD += $(top_builddir)/src/lib/dns/libdns++.la +run_unittests_LDADD += $(top_builddir)/src/lib/util/libutil.la run_unittests_LDADD += $(top_builddir)/src/lib/util/unittests/libutil_unittests.la run_unittests_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la run_unittests_LDADD += $(GTEST_LDADD) diff --git a/src/lib/cache/Makefile.am b/src/lib/cache/Makefile.am index bfbe24a0b2..9871a5ed40 100644 --- a/src/lib/cache/Makefile.am +++ b/src/lib/cache/Makefile.am @@ -31,5 +31,14 @@ libcache_la_SOURCES += cache_entry_key.h cache_entry_key.cc libcache_la_SOURCES += rrset_copy.h rrset_copy.cc libcache_la_SOURCES += local_zone_data.h local_zone_data.cc libcache_la_SOURCES += message_utility.h message_utility.cc +libcache_la_SOURCES += logger.h logger.cc +nodist_libcache_la_SOURCES = cache_messages.cc cache_messages.h -CLEANFILES = *.gcno *.gcda +BUILT_SOURCES = cache_messages.cc cache_messages.h + +cache_messages.cc cache_messages.h: cache_messages.mes + $(top_builddir)/src/lib/log/compiler/message $(top_srcdir)/src/lib/cache/cache_messages.mes + +CLEANFILES = *.gcno *.gcda cache_messages.cc cache_messages.h + +EXTRA_DIST = cache_messages.mes diff --git a/src/lib/cache/cache_messages.mes b/src/lib/cache/cache_messages.mes new file mode 100644 index 0000000000..19102aec4a --- /dev/null +++ b/src/lib/cache/cache_messages.mes @@ -0,0 +1,148 @@ +# Copyright (C) 2010 Internet Systems Consortium, Inc. ("ISC") +# +# Permission to use, copy, modify, and/or distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +# AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +# PERFORMANCE OF THIS SOFTWARE. 
+ +$NAMESPACE isc::cache + +% CACHE_ENTRY_MISSING_RRSET missing RRset to generate message for %1 +The cache tried to generate the complete answer message. It knows the structure +of the message, but some of the RRsets to be put there are not in cache (they +probably expired already). Therefore it pretends the message was not found. + +% CACHE_LOCALZONE_FOUND found entry with key %1 in local zone data +Debug message, noting that the requested data was successfully found in the +local zone data of the cache. + +% CACHE_LOCALZONE_UNKNOWN entry with key %1 not found in local zone data +Debug message. The requested data was not found in the local zone data. + +% CACHE_LOCALZONE_UPDATE updating local zone element at key %1 +Debug message issued when there's update to the local zone section of cache. + +% CACHE_MESSAGES_DEINIT deinitialized message cache +Debug message. It is issued when the server deinitializes the message cache. + +% CACHE_MESSAGES_EXPIRED found an expired message entry for %1 in the message cache +Debug message. The requested data was found in the message cache, but it +already expired. Therefore the cache removes the entry and pretends it found +nothing. + +% CACHE_MESSAGES_FOUND found a message entry for %1 in the message cache +Debug message. We found the whole message in the cache, so it can be returned +to user without any other lookups. + +% CACHE_MESSAGES_INIT initialized message cache for %1 messages of class %2 +Debug message issued when a new message cache is issued. It lists the class +of messages it can hold and the maximum size of the cache. + +% CACHE_MESSAGES_REMOVE removing old instance of %1/%2/%3 first +Debug message. This may follow CACHE_MESSAGES_UPDATE and indicates that, while +updating, the old instance is being removed prior of inserting a new one. + +% CACHE_MESSAGES_UNCACHEABLE not inserting uncacheable message %1/%2/%3 +Debug message, noting that the given message can not be cached. This is because +there's no SOA record in the message. See RFC 2308 section 5 for more +information. + +% CACHE_MESSAGES_UNKNOWN no entry for %1 found in the message cache +Debug message. The message cache didn't find any entry for the given key. + +% CACHE_MESSAGES_UPDATE updating message entry %1/%2/%3 +Debug message issued when the message cache is being updated with a new +message. Either the old instance is removed or, if none is found, new one +is created. + +% CACHE_RESOLVER_DEEPEST looking up deepest NS for %1/%2 +Debug message. The resolver cache is looking up the deepest known nameserver, +so the resolution doesn't have to start from the root. + +% CACHE_RESOLVER_INIT_INFO initializing resolver cache for class %1 +Debug message, the resolver cache is being created for this given class. The +difference from CACHE_RESOLVER_INIT is only in different format of passed +information, otherwise it does the same. + +% CACHE_RESOLVER_INIT initializing resolver cache for class %1 +Debug message. The resolver cache is being created for this given class. + +% CACHE_RESOLVER_LOCAL_MSG message for %1/%2 found in local zone data +Debug message. The resolver cache found a complete message for the user query +in the zone data. + +% CACHE_RESOLVER_LOCAL_RRSET RRset for %1/%2 found in local zone data +Debug message. The resolver cache found a requested RRset in the local zone +data. + +% CACHE_RESOLVER_LOOKUP_MSG looking up message in resolver cache for %1/%2 +Debug message. The resolver cache is trying to find a message to answer the +user query. 
+ +% CACHE_RESOLVER_LOOKUP_RRSET looking up RRset in resolver cache for %1/%2 +Debug message. The resolver cache is trying to find an RRset (which usually +originates as internally from resolver). + +% CACHE_RESOLVER_NO_QUESTION answer message for %1/%2 has empty question section +The cache tried to fill in found data into the response message. But it +discovered the message contains no question section, which is invalid. +This is likely a programmer error, please submit a bug report. + +% CACHE_RESOLVER_UNKNOWN_CLASS_MSG no cache for class %1 +Debug message. While trying to lookup a message in the resolver cache, it was +discovered there's no cache for this class at all. Therefore no message is +found. + +% CACHE_RESOLVER_UNKNOWN_CLASS_RRSET no cache for class %1 +Debug message. While trying to lookup an RRset in the resolver cache, it was +discovered there's no cache for this class at all. Therefore no data is found. + +% CACHE_RESOLVER_UPDATE_MSG updating message for %1/%2/%3 +Debug message. The resolver is updating a message in the cache. + +% CACHE_RESOLVER_UPDATE_RRSET updating RRset for %1/%2/%3 +Debug message. The resolver is updating an RRset in the cache. + +% CACHE_RESOLVER_UPDATE_UNKNOWN_CLASS_MSG no cache for class %1 +Debug message. While trying to insert a message into the cache, it was +discovered that there's no cache for the class of message. Therefore +the message will not be cached. + +% CACHE_RESOLVER_UPDATE_UNKNOWN_CLASS_RRSET no cache for class %1 +Debug message. While trying to insert an RRset into the cache, it was +discovered that there's no cache for the class of the RRset. Therefore +the message will not be cached. + +% CACHE_RRSET_EXPIRED found expired RRset %1/%2/%3 +Debug message. The requested data was found in the RRset cache. However, it is +expired, so the cache removed it and is going to pretend nothing was found. + +% CACHE_RRSET_INIT initializing RRset cache for %1 RRsets of class %2 +Debug message. The RRset cache to hold at most this many RRsets for the given +class is being created. + +% CACHE_RRSET_LOOKUP looking up %1/%2/%3 in RRset cache +Debug message. The resolver is trying to look up data in the RRset cache. + +% CACHE_RRSET_NOT_FOUND no RRset found for %1/%2/%3 in cache +Debug message which can follow CACHE_RRSET_LOOKUP. This means the data is not +in the cache. + +% CACHE_RRSET_REMOVE_OLD removing old RRset for %1/%2/%3 to make space for new one +Debug message which can follow CACHE_RRSET_UPDATE. During the update, the cache +removed an old instance of the RRset to replace it with the new one. + +% CACHE_RRSET_UNTRUSTED not replacing old RRset for %1/%2/%3, it has higher trust level +Debug message which can follow CACHE_RRSET_UPDATE. The cache already holds the +same RRset, but from more trusted source, so the old one is kept and new one +ignored. + +% CACHE_RRSET_UPDATE updating RRset %1/%2/%3 in the cache +Debug message. The RRset is updating its data with this given RRset. 
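As an illustrative aside (assumed syntax, not part of the patch): most of the messages above are debug messages, so they only appear when the loggers run at a sufficiently high debug level. Assuming the Logging module's "loggers" list with "name", "severity" and "debuglevel" entries, as read by the logging configuration code in ccsession.cc later in this change, a configuration along the following lines would enable them up to the data-trace level (40, per the cache logger header below) for all processes:

    > config add Logging/loggers
    > config set Logging/loggers[0]/name "*"
    > config set Logging/loggers[0]/severity "DEBUG"
    > config set Logging/loggers[0]/debuglevel 40
    > config commit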
diff --git a/src/lib/cache/local_zone_data.cc b/src/lib/cache/local_zone_data.cc index 61ce35a1b8..13d1d75d88 100644 --- a/src/lib/cache/local_zone_data.cc +++ b/src/lib/cache/local_zone_data.cc @@ -16,6 +16,7 @@ #include "local_zone_data.h" #include "cache_entry_key.h" #include "rrset_copy.h" +#include "logger.h" using namespace std; using namespace isc::dns; @@ -33,8 +34,10 @@ LocalZoneData::lookup(const isc::dns::Name& name, string key = genCacheEntryName(name, type); RRsetMapIterator iter = rrsets_map_.find(key); if (iter == rrsets_map_.end()) { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_LOCALZONE_UNKNOWN).arg(key); return (RRsetPtr()); } else { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_LOCALZONE_FOUND).arg(key); return (iter->second); } } @@ -43,6 +46,7 @@ void LocalZoneData::update(const isc::dns::RRset& rrset) { //TODO Do we really need to recreate the rrset again? string key = genCacheEntryName(rrset.getName(), rrset.getType()); + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_LOCALZONE_UPDATE).arg(key); RRset* rrset_copy = new RRset(rrset.getName(), rrset.getClass(), rrset.getType(), rrset.getTTL()); diff --git a/src/lib/cache/logger.cc b/src/lib/cache/logger.cc new file mode 100644 index 0000000000..f4b0f25494 --- /dev/null +++ b/src/lib/cache/logger.cc @@ -0,0 +1,23 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +namespace isc { +namespace cache { + +isc::log::Logger logger("cache"); + +} +} diff --git a/src/lib/cache/logger.h b/src/lib/cache/logger.h new file mode 100644 index 0000000000..8159ed4fa3 --- /dev/null +++ b/src/lib/cache/logger.h @@ -0,0 +1,44 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __DATASRC_LOGGER_H +#define __DATASRC_LOGGER_H + +#include +#include + +/// \file logger.h +/// \brief Cache library global logger +/// +/// This holds the logger for the cache library. It is a private header +/// and should not be included in any publicly used header, only in local +/// cc files. 
+ +namespace isc { +namespace cache { + +/// \brief The logger for this library +extern isc::log::Logger logger; + +enum { + /// \brief Trace basic operations + DBG_TRACE_BASIC = 10, + /// \brief Trace data operations + DBG_TRACE_DATA = 40, +}; + +} +} + +#endif diff --git a/src/lib/cache/message_cache.cc b/src/lib/cache/message_cache.cc index 816ffe330b..e141bb52f5 100644 --- a/src/lib/cache/message_cache.cc +++ b/src/lib/cache/message_cache.cc @@ -1,6 +1,7 @@ // Copyright (C) 2010 Internet Systems Consortium, Inc. ("ISC") // // Permission to use, copy, modify, and/or distribute this software for any +// // purpose with or without fee is hereby granted, provided that the above // copyright notice and this permission notice appear in all copies. // @@ -20,6 +21,7 @@ #include "message_cache.h" #include "message_utility.h" #include "cache_entry_key.h" +#include "logger.h" namespace isc { namespace cache { @@ -39,11 +41,14 @@ MessageCache::MessageCache(const RRsetCachePtr& rrset_cache, message_lru_((3 * cache_size), new HashDeleter(message_table_)) { + LOG_DEBUG(logger, DBG_TRACE_BASIC, CACHE_MESSAGES_INIT).arg(cache_size). + arg(RRClass(message_class)); } MessageCache::~MessageCache() { // Destroy all the message entries in the cache. message_lru_.clear(); + LOG_DEBUG(logger, DBG_TRACE_BASIC, CACHE_MESSAGES_DEINIT); } bool @@ -57,26 +62,38 @@ MessageCache::lookup(const isc::dns::Name& qname, if(msg_entry) { // Check whether the message entry has expired. if (msg_entry->getExpireTime() > time(NULL)) { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_MESSAGES_FOUND). + arg(entry_name); message_lru_.touch(msg_entry); return (msg_entry->genMessage(time(NULL), response)); } else { // message entry expires, remove it from hash table and lru list. + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_MESSAGES_EXPIRED). + arg(entry_name); message_table_.remove(entry_key); message_lru_.remove(msg_entry); return (false); } } + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_MESSAGES_UNKNOWN).arg(entry_name); return (false); } bool MessageCache::update(const Message& msg) { if (!canMessageBeCached(msg)){ + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_MESSAGES_UNCACHEABLE). + arg((*msg.beginQuestion())->getName()). + arg((*msg.beginQuestion())->getType()). + arg((*msg.beginQuestion())->getClass()); return (false); } QuestionIterator iter = msg.beginQuestion(); + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_MESSAGES_UPDATE). + arg((*iter)->getName()).arg((*iter)->getType()). + arg((*iter)->getClass()); std::string entry_name = genCacheEntryName((*iter)->getName(), (*iter)->getType()); HashKey entry_key = HashKey(entry_name, RRClass(message_class_)); @@ -88,6 +105,9 @@ MessageCache::update(const Message& msg) { // add the message entry, maybe there is one way to touch it once. MessageEntryPtr old_msg_entry = message_table_.get(entry_key); if (old_msg_entry) { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_MESSAGES_REMOVE). + arg((*iter)->getName()).arg((*iter)->getType()). + arg((*iter)->getClass()); message_lru_.remove(old_msg_entry); } diff --git a/src/lib/cache/message_cache.h b/src/lib/cache/message_cache.h index 979b81455e..44d7fd1cec 100644 --- a/src/lib/cache/message_cache.h +++ b/src/lib/cache/message_cache.h @@ -39,7 +39,7 @@ private: MessageCache& operator=(const MessageCache& source); public: /// \param rrset_cache The cache that stores the RRsets that the - /// message entry will points to + /// message entry will point to /// \param cache_size The size of message cache. 
/// \param message_class The class of the message cache /// \param negative_soa_cache The cache that stores the SOA record diff --git a/src/lib/cache/message_entry.cc b/src/lib/cache/message_entry.cc index de4ea8916d..d9560a6b3f 100644 --- a/src/lib/cache/message_entry.cc +++ b/src/lib/cache/message_entry.cc @@ -20,6 +20,7 @@ #include "message_entry.h" #include "message_utility.h" #include "rrset_cache.h" +#include "logger.h" using namespace isc::dns; using namespace std; @@ -64,7 +65,7 @@ static uint32_t MAX_UINT32 = numeric_limits::max(); // tunable. Values of one to three hours have been found to work well // and would make sensible a default. Values exceeding one day have // been found to be problematic. (sec 5, RFC2308) -// The default value is 3 hourse (10800 seconds) +// The default value is 3 hours (10800 seconds) // TODO:Give an option to let user configure static uint32_t MAX_NEGATIVE_CACHE_TTL = 10800; @@ -142,6 +143,8 @@ MessageEntry::genMessage(const time_t& time_now, // has expired, if it is, return false. vector rrset_entry_vec; if (false == getRRsetEntries(rrset_entry_vec, time_now)) { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_ENTRY_MISSING_RRSET). + arg(entry_name_); return (false); } diff --git a/src/lib/cache/resolver_cache.cc b/src/lib/cache/resolver_cache.cc index 6602f79b95..57935c06ea 100644 --- a/src/lib/cache/resolver_cache.cc +++ b/src/lib/cache/resolver_cache.cc @@ -17,6 +17,7 @@ #include "resolver_cache.h" #include "dns/message.h" #include "rrset_cache.h" +#include "logger.h" #include #include @@ -29,6 +30,7 @@ namespace cache { ResolverClassCache::ResolverClassCache(const RRClass& cache_class) : cache_class_(cache_class) { + LOG_DEBUG(logger, DBG_TRACE_BASIC, CACHE_RESOLVER_INIT).arg(cache_class); local_zone_data_ = LocalZoneDataPtr(new LocalZoneData(cache_class_.getCode())); rrsets_cache_ = RRsetCachePtr(new RRsetCache(RRSET_CACHE_DEFAULT_SIZE, cache_class_.getCode())); @@ -45,6 +47,8 @@ ResolverClassCache::ResolverClassCache(const RRClass& cache_class) : ResolverClassCache::ResolverClassCache(const CacheSizeInfo& cache_info) : cache_class_(cache_info.cclass) { + LOG_DEBUG(logger, DBG_TRACE_BASIC, CACHE_RESOLVER_INIT_INFO). + arg(cache_class_); uint16_t klass = cache_class_.getCode(); // TODO We should find one way to load local zone data. local_zone_data_ = LocalZoneDataPtr(new LocalZoneData(klass)); @@ -69,8 +73,11 @@ ResolverClassCache::lookup(const isc::dns::Name& qname, const isc::dns::RRType& qtype, isc::dns::Message& response) const { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RESOLVER_LOOKUP_MSG). + arg(qname).arg(qtype); // message response should has question section already. if (response.beginQuestion() == response.endQuestion()) { + LOG_ERROR(logger, CACHE_RESOLVER_NO_QUESTION).arg(qname).arg(qtype); isc_throw(MessageNoQuestionSection, "Message has no question section"); } @@ -79,6 +86,8 @@ ResolverClassCache::lookup(const isc::dns::Name& qname, // answer section. RRsetPtr rrset_ptr = local_zone_data_->lookup(qname, qtype); if (rrset_ptr) { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RESOLVER_LOCAL_MSG). + arg(qname).arg(qtype); response.addRRset(Message::SECTION_ANSWER, rrset_ptr); return (true); } @@ -91,11 +100,15 @@ isc::dns::RRsetPtr ResolverClassCache::lookup(const isc::dns::Name& qname, const isc::dns::RRType& qtype) const { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RESOLVER_LOOKUP_RRSET). + arg(qname).arg(qtype); // Algorithm: // 1. Search in local zone data first, // 2. Then do search in rrsets_cache_. 
RRsetPtr rrset_ptr = local_zone_data_->lookup(qname, qtype); if (rrset_ptr) { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RESOLVER_LOCAL_RRSET). + arg(qname).arg(qtype); return (rrset_ptr); } else { RRsetEntryPtr rrset_entry = rrsets_cache_->lookup(qname, qtype); @@ -109,6 +122,10 @@ ResolverClassCache::lookup(const isc::dns::Name& qname, bool ResolverClassCache::update(const isc::dns::Message& msg) { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RESOLVER_UPDATE_MSG). + arg((*msg.beginQuestion())->getName()). + arg((*msg.beginQuestion())->getType()). + arg((*msg.beginQuestion())->getClass()); return (messages_cache_->update(msg)); } @@ -130,6 +147,9 @@ ResolverClassCache::updateRRsetCache(const isc::dns::ConstRRsetPtr& rrset_ptr, bool ResolverClassCache::update(const isc::dns::ConstRRsetPtr& rrset_ptr) { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RESOLVER_UPDATE_RRSET). + arg(rrset_ptr->getName()).arg(rrset_ptr->getType()). + arg(rrset_ptr->getClass()); // First update local zone, then update rrset cache. local_zone_data_->update((*rrset_ptr.get())); updateRRsetCache(rrset_ptr, rrsets_cache_); @@ -166,6 +186,8 @@ ResolverCache::lookup(const isc::dns::Name& qname, if (cc) { return (cc->lookup(qname, qtype, response)); } else { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RESOLVER_UNKNOWN_CLASS_MSG). + arg(qclass); return (false); } } @@ -179,6 +201,8 @@ ResolverCache::lookup(const isc::dns::Name& qname, if (cc) { return (cc->lookup(qname, qtype)); } else { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RESOLVER_UNKNOWN_CLASS_RRSET). + arg(qclass); return (RRsetPtr()); } } @@ -187,6 +211,8 @@ isc::dns::RRsetPtr ResolverCache::lookupDeepestNS(const isc::dns::Name& qname, const isc::dns::RRClass& qclass) const { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RESOLVER_DEEPEST).arg(qname). + arg(qclass); isc::dns::RRType qtype = RRType::NS(); ResolverClassCache* cc = getClassCache(qclass); if (cc) { @@ -213,6 +239,9 @@ ResolverCache::update(const isc::dns::Message& msg) { if (cc) { return (cc->update(msg)); } else { + LOG_DEBUG(logger, DBG_TRACE_DATA, + CACHE_RESOLVER_UPDATE_UNKNOWN_CLASS_MSG). + arg((*msg.beginQuestion())->getClass()); return (false); } } @@ -223,6 +252,9 @@ ResolverCache::update(const isc::dns::ConstRRsetPtr& rrset_ptr) { if (cc) { return (cc->update(rrset_ptr)); } else { + LOG_DEBUG(logger, DBG_TRACE_DATA, + CACHE_RESOLVER_UPDATE_UNKNOWN_CLASS_RRSET). + arg(rrset_ptr->getClass()); return (false); } } diff --git a/src/lib/cache/rrset_cache.cc b/src/lib/cache/rrset_cache.cc index da19b6d2a0..1a5fd48dc5 100644 --- a/src/lib/cache/rrset_cache.cc +++ b/src/lib/cache/rrset_cache.cc @@ -14,8 +14,9 @@ #include -#include #include "rrset_cache.h" +#include "logger.h" +#include #include #include #include @@ -34,20 +35,28 @@ RRsetCache::RRsetCache(uint32_t cache_size, rrset_lru_((3 * cache_size), new HashDeleter(rrset_table_)) { + LOG_DEBUG(logger, DBG_TRACE_BASIC, CACHE_RRSET_INIT).arg(cache_size). + arg(RRClass(rrset_class)); } RRsetEntryPtr RRsetCache::lookup(const isc::dns::Name& qname, const isc::dns::RRType& qtype) { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RRSET_LOOKUP).arg(qname). 
+ arg(qtype).arg(RRClass(class_)); const string entry_name = genCacheEntryName(qname, qtype); - RRsetEntryPtr entry_ptr = rrset_table_.get(HashKey(entry_name, RRClass(class_))); + + RRsetEntryPtr entry_ptr = rrset_table_.get(HashKey(entry_name, + RRClass(class_))); if (entry_ptr) { if (entry_ptr->getExpireTime() > time(NULL)) { // Only touch the non-expired rrset entries rrset_lru_.touch(entry_ptr); return (entry_ptr); } else { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RRSET_EXPIRED).arg(qname). + arg(qtype).arg(RRClass(class_)); // the rrset entry has expired, so just remove it from // hash table and lru list. rrset_table_.remove(entry_ptr->hashKey()); @@ -55,19 +64,31 @@ RRsetCache::lookup(const isc::dns::Name& qname, } } + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RRSET_NOT_FOUND).arg(qname). + arg(qtype).arg(RRClass(class_)); return (RRsetEntryPtr()); } RRsetEntryPtr -RRsetCache::update(const isc::dns::RRset& rrset, const RRsetTrustLevel& level) { +RRsetCache::update(const isc::dns::RRset& rrset, + const RRsetTrustLevel& level) +{ + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RRSET_UPDATE).arg(rrset.getName()). + arg(rrset.getType()).arg(rrset.getClass()); // TODO: If the RRset is an NS, we should update the NSAS as well // lookup first RRsetEntryPtr entry_ptr = lookup(rrset.getName(), rrset.getType()); if (entry_ptr) { if (entry_ptr->getTrustLevel() > level) { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RRSET_UNTRUSTED). + arg(rrset.getName()).arg(rrset.getType()). + arg(rrset.getClass()); // existed rrset entry is more authoritative, just return it return (entry_ptr); } else { + LOG_DEBUG(logger, DBG_TRACE_DATA, CACHE_RRSET_REMOVE_OLD). + arg(rrset.getName()).arg(rrset.getType()). + arg(rrset.getClass()); // Remove the old rrset entry from the lru list. rrset_lru_.remove(entry_ptr); } diff --git a/src/lib/cache/tests/Makefile.am b/src/lib/cache/tests/Makefile.am index 39215d9031..a215c568ae 100644 --- a/src/lib/cache/tests/Makefile.am +++ b/src/lib/cache/tests/Makefile.am @@ -53,8 +53,10 @@ run_unittests_LDADD += -lboost_thread endif run_unittests_LDADD += $(top_builddir)/src/lib/cache/libcache.la +run_unittests_LDADD += $(top_builddir)/src/lib/log/liblog.la run_unittests_LDADD += $(top_builddir)/src/lib/nsas/libnsas.la run_unittests_LDADD += $(top_builddir)/src/lib/dns/libdns++.la +run_unittests_LDADD += $(top_builddir)/src/lib/util/libutil.la run_unittests_LDADD += $(top_builddir)/src/lib/asiolink/libasiolink.la run_unittests_LDADD += $(top_builddir)/src/lib/util/unittests/libutil_unittests.la run_unittests_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la diff --git a/src/lib/cache/tests/run_unittests.cc b/src/lib/cache/tests/run_unittests.cc index b75fc06606..370bc698a8 100644 --- a/src/lib/cache/tests/run_unittests.cc +++ b/src/lib/cache/tests/run_unittests.cc @@ -19,11 +19,15 @@ #include +#include + int main(int argc, char* argv[]) { ::testing::InitGoogleTest(&argc, argv); isc::UnitTestUtil::addDataPath(TEST_DATA_SRCDIR); isc::UnitTestUtil::addDataPath(TEST_DATA_BUILDDIR); + isc::log::initLogger(); + return (isc::util::unittests::run_all()); } diff --git a/src/lib/cc/cc_messages.mes b/src/lib/cc/cc_messages.mes index 8c62ea101b..8370cdd03c 100644 --- a/src/lib/cc/cc_messages.mes +++ b/src/lib/cc/cc_messages.mes @@ -53,11 +53,11 @@ Debug message, we're about to send a message over the command channel. This happens when garbage comes over the command channel or some kind of confusion happens in the program. 
The data received from the socket make no sense if we interpret it as lengths of message. The first one is total length -of message, the second length of the header. The header and it's length -(2 bytes) is counted in the total length. +of the message; the second is the length of the header. The header +and its length (2 bytes) is counted in the total length. % CC_LENGTH_NOT_READY length not ready -There should be data representing length of message on the socket, but it +There should be data representing the length of message on the socket, but it is not there. % CC_NO_MESSAGE no message ready to be received yet diff --git a/src/lib/cc/data.cc b/src/lib/cc/data.cc index 932bef4590..ffa5346a84 100644 --- a/src/lib/cc/data.cc +++ b/src/lib/cc/data.cc @@ -447,7 +447,9 @@ from_stringstream_map(std::istream &in, const std::string& file, int& line, ElementPtr map = Element::createMap(); skip_chars(in, " \t\n", line, pos); char c = in.peek(); - if (c == '}') { + if (c == EOF) { + throwJSONError(std::string("Unterminated map, or } expected"), file, line, pos); + } else if (c == '}') { // empty map, skip closing curly c = in.get(); } else { @@ -509,6 +511,8 @@ Element::nameToType(const std::string& type_name) { return (Element::list); } else if (type_name == "map") { return (Element::map); + } else if (type_name == "named_set") { + return (Element::map); } else if (type_name == "null") { return (Element::null); } else if (type_name == "any") { diff --git a/src/lib/cc/session.cc b/src/lib/cc/session.cc index 97d5cf14d0..e0e24cf922 100644 --- a/src/lib/cc/session.cc +++ b/src/lib/cc/session.cc @@ -119,7 +119,7 @@ private: void SessionImpl::establish(const char& socket_file) { try { - LOG_DEBUG(logger, DBG_TRACE_BASIC, CC_ESTABLISH).arg(socket_file); + LOG_DEBUG(logger, DBG_TRACE_BASIC, CC_ESTABLISH).arg(&socket_file); socket_.connect(asio::local::stream_protocol::endpoint(&socket_file), error_); LOG_DEBUG(logger, DBG_TRACE_BASIC, CC_ESTABLISHED); diff --git a/src/lib/cc/tests/data_unittests.cc b/src/lib/cc/tests/data_unittests.cc index 2536682288..53d5ab8902 100644 --- a/src/lib/cc/tests/data_unittests.cc +++ b/src/lib/cc/tests/data_unittests.cc @@ -396,9 +396,24 @@ TEST(Element, to_and_from_wire) { EXPECT_EQ("1", Element::fromWire(ss, 1)->str()); // Some malformed JSON input + EXPECT_THROW(Element::fromJSON("{ "), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("{ \"a\" "), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("{ \"a\": "), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("{ \"a\": \"b\""), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("{ \"a\": {"), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("{ \"a\": {}"), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("{ \"a\": []"), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("{ \"a\": [ }"), isc::data::JSONError); EXPECT_THROW(Element::fromJSON("{\":"), isc::data::JSONError); EXPECT_THROW(Element::fromJSON("]"), isc::data::JSONError); EXPECT_THROW(Element::fromJSON("[ 1, 2, }"), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("[ 1, 2, {}"), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("[ 1, 2, { ]"), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("[ "), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("{{}}"), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("{[]}"), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("{ \"a\", \"b\" }"), isc::data::JSONError); + EXPECT_THROW(Element::fromJSON("[ \"a\": \"b\" ]"), isc::data::JSONError); } 
ConstElementPtr diff --git a/src/lib/config/ccsession.cc b/src/lib/config/ccsession.cc index 6b094ec8c6..ac8507700f 100644 --- a/src/lib/config/ccsession.cc +++ b/src/lib/config/ccsession.cc @@ -18,12 +18,15 @@ #include #include #include +#include -#include -#include -#include +#include #include +#include +#include #include +#include +#include #include #include @@ -175,6 +178,36 @@ ConstElementPtr getValueOrDefault(ConstElementPtr config_part, } } +// Prefix name with "b10-". +// +// In BIND 10, modules have names taken from the .spec file, which are typically +// names starting with a capital letter (e.g. "Resolver", "Auth" etc.). The +// names of the associated binaries are derived from the module names, being +// prefixed "b10-" and having the first letter of the module name lower-cased +// (e.g. "b10-resolver", "b10-auth"). (It is a required convention that there +// be this relationship between the names.) +// +// Within the binaries the root loggers are named after the binaries themselves. +// (The reason for this is that the name of the logger is included in the +// message logged, so making it clear which message comes from which BIND 10 +// process.) As logging is configured using module names, the configuration code +// has to match these with the corresponding logger names. This function +// converts a module name to a root logger name by lowercasing the first letter +// of the module name and prepending "b10-". +// +// \param instring String to convert. (This may be empty, in which case +// "b10-" will be returned.) +// +// \return Converted string. +std::string +b10Prefix(const std::string& instring) { + std::string result = instring; + if (!result.empty()) { + result[0] = tolower(result[0]); + } + return (std::string("b10-") + result); +} + // Reads a output_option subelement of a logger configuration, // and sets the values thereing to the given OutputOption struct, // or defaults values if they are not provided (from config_data). @@ -215,6 +248,7 @@ readLoggersConf(std::vector& specs, ConstElementPtr logger, const ConfigData& config_data) { + // Read name, adding prefix as required. std::string lname = logger->get("name")->stringValue(); ConstElementPtr severity_el = getValueOrDefault(logger, @@ -247,6 +281,27 @@ readLoggersConf(std::vector& specs, specs.push_back(logger_spec); } +// Copies the map for a logger, changing the name of the logger in the process. +// This is used because the map being copied is "const", so in order to +// change the name we need to create a new one. +// +// \param cur_logger Logger being copied. +// \param new_name New value of the "name" element at the top level. +// +// \return Pointer to the map with the updated element. +ConstElementPtr +copyLogger(ConstElementPtr& cur_logger, const std::string& new_name) { + + // Since we'll only be updating one first-level element and subsequent + // use won't change the contents of the map, a shallow map copy is enough. + ElementPtr new_logger(Element::createMap()); + new_logger->setValue(cur_logger->mapValue()); + new_logger->set("name", Element::create(new_name)); + + return (new_logger); +} + + } // end anonymous namespace @@ -259,38 +314,60 @@ getRelatedLoggers(ConstElementPtr loggers) { ElementPtr result = isc::data::Element::createList(); BOOST_FOREACH(ConstElementPtr cur_logger, loggers->listValue()) { + // Need to add the b10- prefix to names ready from the spec file. 
const std::string cur_name = cur_logger->get("name")->stringValue(); - if (cur_name == root_name || cur_name.find(root_name + ".") == 0) { - our_names.insert(cur_name); - result->add(cur_logger); + const std::string mod_name = b10Prefix(cur_name); + if (mod_name == root_name || mod_name.find(root_name + ".") == 0) { + + // Note this name so that we don't add a wildcard that matches it. + our_names.insert(mod_name); + + // We want to store the logger with the modified name (i.e. with + // the b10- prefix). As we are dealing with const loggers, we + // store a modified copy of the data. + result->add(copyLogger(cur_logger, mod_name)); + LOG_DEBUG(config_logger, DBG_CONFIG_PROCESS, CONFIG_LOG_EXPLICIT) + .arg(cur_name); + + } else if (!cur_name.empty() && (cur_name[0] != '*')) { + // Not a wildcard logger and we are ignoring it. + LOG_DEBUG(config_logger, DBG_CONFIG_PROCESS, + CONFIG_LOG_IGNORE_EXPLICIT).arg(cur_name); } } - // now find the * names + // Now find the wildcard names (the one that start with "*"). BOOST_FOREACH(ConstElementPtr cur_logger, loggers->listValue()) { std::string cur_name = cur_logger->get("name")->stringValue(); - // if name is '*', or starts with '*.', replace * with root - // logger name + // If name is '*', or starts with '*.', replace * with root + // logger name. if (cur_name == "*" || cur_name.length() > 1 && cur_name[0] == '*' && cur_name[1] == '.') { - cur_name = root_name + cur_name.substr(1); - // now add it to the result list, but only if a logger with - // that name was not configured explicitely - if (our_names.find(cur_name) == our_names.end()) { - // we substitute the name here already, but as - // we are dealing with consts, we copy the data - ElementPtr new_logger(Element::createMap()); - // since we'll only be updating one first-level element, - // and we return as const again, a shallow map copy is - // enough - new_logger->setValue(cur_logger->mapValue()); - new_logger->set("name", Element::create(cur_name)); - result->add(new_logger); + // Substitute the "*" with the root name + std::string mod_name = cur_name; + mod_name.replace(0, 1, root_name); + + // Now add it to the result list, but only if a logger with + // that name was not configured explicitly. + if (our_names.find(mod_name) == our_names.end()) { + + // We substitute the name here, but as we are dealing with + // consts, we need to copy the data. + result->add(copyLogger(cur_logger, mod_name)); + LOG_DEBUG(config_logger, DBG_CONFIG_PROCESS, + CONFIG_LOG_WILD_MATCH).arg(cur_name); + + } else if (!cur_name.empty() && (cur_name[0] == '*')) { + // Is a wildcard and we are ignoring it (because the wildcard + // expands to a specification that we already encountered when + // processing explicit names). 
+ LOG_DEBUG(config_logger, DBG_CONFIG_PROCESS, + CONFIG_LOG_IGNORE_WILD).arg(cur_name); } } } - return result; + return (result); } void @@ -318,7 +395,7 @@ ModuleSpec ModuleCCSession::readModuleSpecification(const std::string& filename) { std::ifstream file; ModuleSpec module_spec; - + // this file should be declared in a @something@ directive file.open(filename.c_str()); if (!file) { @@ -385,7 +462,7 @@ ModuleCCSession::ModuleCCSession( LOG_ERROR(config_logger, CONFIG_MOD_SPEC_REJECT).arg(answer->str()); isc_throw(CCSessionInitError, answer->str()); } - + setLocalConfig(Element::fromJSON("{}")); // get any stored configuration from the manager if (config_handler_) { @@ -511,7 +588,7 @@ int ModuleCCSession::checkCommand() { ConstElementPtr cmd, routing, data; if (session_.group_recvmsg(routing, data, true)) { - + /* ignore result messages (in case we're out of sync, to prevent * pingpongs */ if (data->getType() != Element::map || data->contains("result")) { diff --git a/src/lib/config/ccsession.h b/src/lib/config/ccsession.h index 7dc34ba441..50bb65c83c 100644 --- a/src/lib/config/ccsession.h +++ b/src/lib/config/ccsession.h @@ -179,7 +179,7 @@ public: * We'll need to develop a cleaner solution, and then remove this knob) * @param handle_logging If true, the ModuleCCSession will automatically * take care of logging configuration through the virtual Logging config - * module. + * module. Defaults to true. */ ModuleCCSession(const std::string& spec_file_name, isc::cc::AbstractSession& session, @@ -189,7 +189,7 @@ public: const std::string& command, isc::data::ConstElementPtr args) = NULL, bool start_immediately = true, - bool handle_logging = false + bool handle_logging = true ); /// Start receiving new commands and configuration changes asynchronously. @@ -377,10 +377,10 @@ default_logconfig_handler(const std::string& module_name, /// \brief Returns the loggers related to this module /// /// This function does two things; -/// - it drops the configuration parts for loggers for other modules +/// - it drops the configuration parts for loggers for other modules. /// - it replaces the '*' in the name of the loggers by the name of /// this module, but *only* if the expanded name is not configured -/// explicitely +/// explicitly. /// /// Examples: if this is the module b10-resolver, /// For the config names ['*', 'b10-auth'] diff --git a/src/lib/config/config_log.h b/src/lib/config/config_log.h index 006385586b..74e6a8463c 100644 --- a/src/lib/config/config_log.h +++ b/src/lib/config/config_log.h @@ -32,6 +32,14 @@ namespace config { /// space. extern isc::log::Logger config_logger; // isc::config::config_logger is the CONFIG logger +/// \brief Debug Levels +/// +/// Debug levels used in the configuration library +enum { + DBG_CONFIG_PROCESS = 40 // Enumerate configuration elements as they + // ... are processed. +}; + } // namespace config } // namespace isc diff --git a/src/lib/config/config_messages.mes b/src/lib/config/config_messages.mes index 660ab9a126..c439eddbca 100644 --- a/src/lib/config/config_messages.mes +++ b/src/lib/config/config_messages.mes @@ -37,6 +37,31 @@ manager is appended to the log error. The most likely cause is that the module is of a different (command specification) version than the running configuration manager. +% CONFIG_LOG_EXPLICIT will use logging configuration for explicitly-named logger %1 +This is a debug message. 
When processing the "loggers" part of the +configuration file, the configuration library found an entry for the named +logger that matches the logger specification for the program. The logging +configuration for the program will be updated with the information. + +% CONFIG_LOG_IGNORE_EXPLICIT ignoring logging configuration for explicitly-named logger %1 +This is a debug message. When processing the "loggers" part of the +configuration file, the configuration library found an entry for the +named logger. As this does not match the logger specification for the +program, it has been ignored. + +% CONFIG_LOG_IGNORE_WILD ignoring logging configuration for wildcard logger %1 +This is a debug message. When processing the "loggers" part of the +configuration file, the configuration library found the named wildcard +entry (one containing the "*" character) that matched a logger already +matched by an explicitly named entry. The configuration is ignored. + +% CONFIG_LOG_WILD_MATCH will use logging configuration for wildcard logger %1 +This is a debug message. When processing the "loggers" part of +the configuration file, the configuration library found the named +wildcard entry (one containing the "*" character) that matches a logger +specification in the program. The logging configuration for the program +will be updated with the information. + % CONFIG_JSON_PARSE JSON parse error in %1: %2 There was an error parsing the JSON file. The given file does not appear to be in valid JSON format. Please verify that the filename is correct diff --git a/src/lib/config/module_spec.cc b/src/lib/config/module_spec.cc index 1621fe313f..bebe695023 100644 --- a/src/lib/config/module_spec.cc +++ b/src/lib/config/module_spec.cc @@ -1,4 +1,4 @@ -// Copyright (C) 2010 Internet Systems Consortium. +// Copyright (C) 2010, 2011 Internet Systems Consortium. 
// // Permission to use, copy, modify, and distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -67,10 +67,13 @@ check_config_item(ConstElementPtr spec) { check_leaf_item(spec, "list_item_spec", Element::map, true); check_config_item(spec->get("list_item_spec")); } - // todo: add stuff for type map - if (Element::nameToType(spec->get("item_type")->stringValue()) == Element::map) { + + if (spec->get("item_type")->stringValue() == "map") { check_leaf_item(spec, "map_item_spec", Element::list, true); check_config_item_list(spec->get("map_item_spec")); + } else if (spec->get("item_type")->stringValue() == "named_set") { + check_leaf_item(spec, "named_set_item_spec", Element::map, true); + check_config_item(spec->get("named_set_item_spec")); } } @@ -84,6 +87,61 @@ check_config_item_list(ConstElementPtr spec) { } } +// checks whether the given element is a valid statistics specification +// returns false if the specification is bad +bool +check_format(ConstElementPtr value, ConstElementPtr format_name) { + typedef std::map format_types; + format_types time_formats; + // TODO: should be added other format types if necessary + time_formats.insert( + format_types::value_type("date-time", "%Y-%m-%dT%H:%M:%SZ") ); + time_formats.insert( + format_types::value_type("date", "%Y-%m-%d") ); + time_formats.insert( + format_types::value_type("time", "%H:%M:%S") ); + BOOST_FOREACH (const format_types::value_type& f, time_formats) { + if (format_name->stringValue() == f.first) { + struct tm tm; + std::vector buf(32); + memset(&tm, 0, sizeof(tm)); + // reverse check + return (strptime(value->stringValue().c_str(), + f.second.c_str(), &tm) != NULL + && strftime(&buf[0], buf.size(), + f.second.c_str(), &tm) != 0 + && strncmp(value->stringValue().c_str(), + &buf[0], buf.size()) == 0); + } + } + return (false); +} + +void check_statistics_item_list(ConstElementPtr spec); + +void +check_statistics_item_list(ConstElementPtr spec) { + if (spec->getType() != Element::list) { + throw ModuleSpecError("statistics is not a list of elements"); + } + BOOST_FOREACH(ConstElementPtr item, spec->listValue()) { + check_config_item(item); + // additional checks for statistics + check_leaf_item(item, "item_title", Element::string, true); + check_leaf_item(item, "item_description", Element::string, true); + check_leaf_item(item, "item_format", Element::string, false); + // checks name of item_format and validation of item_default + if (item->contains("item_format") + && item->contains("item_default")) { + if(!check_format(item->get("item_default"), + item->get("item_format"))) { + throw ModuleSpecError( + "item_default not valid type of item_format"); + } + } + } +} + void check_command(ConstElementPtr spec) { check_leaf_item(spec, "command_name", Element::string, true); @@ -113,6 +171,9 @@ check_data_specification(ConstElementPtr spec) { if (spec->contains("commands")) { check_command_list(spec->get("commands")); } + if (spec->contains("statistics")) { + check_statistics_item_list(spec->get("statistics")); + } } // checks whether the given element is a valid module specification @@ -162,6 +223,15 @@ ModuleSpec::getConfigSpec() const { } } +ConstElementPtr +ModuleSpec::getStatisticsSpec() const { + if (module_specification->contains("statistics")) { + return (module_specification->get("statistics")); + } else { + return (ElementPtr()); + } +} + const std::string ModuleSpec::getModuleName() const { return (module_specification->get("module_name")->stringValue()); @@ -182,6 
+252,12 @@ ModuleSpec::validateConfig(ConstElementPtr data, const bool full) const { return (validateSpecList(spec, data, full, ElementPtr())); } +bool +ModuleSpec::validateStatistics(ConstElementPtr data, const bool full) const { + ConstElementPtr spec = module_specification->find("statistics"); + return (validateSpecList(spec, data, full, ElementPtr())); +} + bool ModuleSpec::validateCommand(const std::string& command, ConstElementPtr args, @@ -220,6 +296,14 @@ ModuleSpec::validateConfig(ConstElementPtr data, const bool full, return (validateSpecList(spec, data, full, errors)); } +bool +ModuleSpec::validateStatistics(ConstElementPtr data, const bool full, + ElementPtr errors) const +{ + ConstElementPtr spec = module_specification->find("statistics"); + return (validateSpecList(spec, data, full, errors)); +} + ModuleSpec moduleSpecFromFile(const std::string& file_name, const bool check) throw(JSONError, ModuleSpecError) @@ -286,7 +370,8 @@ check_type(ConstElementPtr spec, ConstElementPtr element) { return (cur_item_type == "list"); break; case Element::map: - return (cur_item_type == "map"); + return (cur_item_type == "map" || + cur_item_type == "named_set"); break; } return (false); @@ -323,7 +408,27 @@ ModuleSpec::validateItem(ConstElementPtr spec, ConstElementPtr data, } } if (data->getType() == Element::map) { - if (!validateSpecList(spec->get("map_item_spec"), data, full, errors)) { + // either a normal 'map' or a 'named set' (determined by which + // subspecification it has) + if (spec->contains("map_item_spec")) { + if (!validateSpecList(spec->get("map_item_spec"), data, full, errors)) { + return (false); + } + } else { + typedef std::pair maptype; + + BOOST_FOREACH(maptype m, data->mapValue()) { + if (!validateItem(spec->get("named_set_item_spec"), m.second, full, errors)) { + return (false); + } + } + } + } + if (spec->contains("item_format")) { + if (!check_format(data, spec->get("item_format"))) { + if (errors) { + errors->add(Element::create("Format mismatch")); + } return (false); } } diff --git a/src/lib/config/module_spec.h b/src/lib/config/module_spec.h index ab6e273edd..ce3762f203 100644 --- a/src/lib/config/module_spec.h +++ b/src/lib/config/module_spec.h @@ -1,4 +1,4 @@ -// Copyright (C) 2010 Internet Systems Consortium. +// Copyright (C) 2010, 2011 Internet Systems Consortium. // // Permission to use, copy, modify, and distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -71,6 +71,12 @@ namespace isc { namespace config { /// part of the specification isc::data::ConstElementPtr getConfigSpec() const; + /// Returns the statistics part of the specification as an + /// ElementPtr + /// \return ElementPtr Shared pointer to the statistics + /// part of the specification + isc::data::ConstElementPtr getStatisticsSpec() const; + /// Returns the full module specification as an ElementPtr /// \return ElementPtr Shared pointer to the specification isc::data::ConstElementPtr getFullSpec() const { @@ -95,6 +101,17 @@ namespace isc { namespace config { bool validateConfig(isc::data::ConstElementPtr data, const bool full = false) const; + // returns true if the given element conforms to this data + // statistics specification + /// Validates the given statistics data for this specification. + /// \param data The base \c Element of the data to check + /// \param full If true, all non-optional statistics parameters + /// must be specified. + /// \return true if the data conforms to the specification, + /// false otherwise. 
+ bool validateStatistics(isc::data::ConstElementPtr data, + const bool full = false) const; + /// Validates the arguments for the given command /// /// This checks the command and argument against the @@ -142,6 +159,10 @@ namespace isc { namespace config { bool validateConfig(isc::data::ConstElementPtr data, const bool full, isc::data::ElementPtr errors) const; + /// errors must be of type ListElement + bool validateStatistics(isc::data::ConstElementPtr data, const bool full, + isc::data::ElementPtr errors) const; + private: bool validateItem(isc::data::ConstElementPtr spec, isc::data::ConstElementPtr data, diff --git a/src/lib/config/tests/ccsession_unittests.cc b/src/lib/config/tests/ccsession_unittests.cc index e1a4f9da92..793fa30457 100644 --- a/src/lib/config/tests/ccsession_unittests.cc +++ b/src/lib/config/tests/ccsession_unittests.cc @@ -44,7 +44,9 @@ el(const std::string& str) { class CCSessionTest : public ::testing::Test { protected: - CCSessionTest() : session(el("[]"), el("[]"), el("[]")) { + CCSessionTest() : session(el("[]"), el("[]"), el("[]")), + root_name(isc::log::getRootLoggerName()) + { // upon creation of a ModuleCCSession, the class // sends its specification to the config manager. // it expects an ok answer back, so everytime we @@ -52,8 +54,11 @@ protected: // ok answer. session.getMessages()->add(createAnswer()); } - ~CCSessionTest() {} + ~CCSessionTest() { + isc::log::setRootLoggerName(root_name); + } FakeSession session; + const std::string root_name; }; TEST_F(CCSessionTest, createAnswer) { @@ -151,7 +156,8 @@ TEST_F(CCSessionTest, parseCommand) { TEST_F(CCSessionTest, session1) { EXPECT_FALSE(session.haveSubscription("Spec1", "*")); - ModuleCCSession mccs(ccspecfile("spec1.spec"), session, NULL, NULL); + ModuleCCSession mccs(ccspecfile("spec1.spec"), session, NULL, NULL, + true, false); EXPECT_TRUE(session.haveSubscription("Spec1", "*")); EXPECT_EQ(1, session.getMsgQueue()->size()); @@ -163,21 +169,22 @@ TEST_F(CCSessionTest, session1) { EXPECT_EQ("*", to); EXPECT_EQ(0, session.getMsgQueue()->size()); - // without explicit argument, the session should not automatically + // with this argument, the session should not automatically // subscribe to logging config EXPECT_FALSE(session.haveSubscription("Logging", "*")); } TEST_F(CCSessionTest, session2) { EXPECT_FALSE(session.haveSubscription("Spec2", "*")); - ModuleCCSession mccs(ccspecfile("spec2.spec"), session, NULL, NULL); + ModuleCCSession mccs(ccspecfile("spec2.spec"), session, NULL, NULL, + true, false); EXPECT_TRUE(session.haveSubscription("Spec2", "*")); EXPECT_EQ(1, session.getMsgQueue()->size()); ConstElementPtr msg; std::string group, to; msg = session.getFirstMessage(group, to); - EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": \"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, 
\"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\" } ] }", msg->str()); + EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": \"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\", \"statistics\": [ { \"item_default\": \"1970-01-01T00:00:00Z\", \"item_description\": \"A dummy date time\", \"item_format\": \"date-time\", \"item_name\": \"dummy_time\", \"item_optional\": false, \"item_title\": \"Dummy Time\", \"item_type\": \"string\" } ] } ] }", msg->str()); EXPECT_EQ("ConfigManager", group); EXPECT_EQ("*", to); EXPECT_EQ(0, session.getMsgQueue()->size()); @@ -217,14 +224,14 @@ TEST_F(CCSessionTest, session3) { EXPECT_FALSE(session.haveSubscription("Spec2", "*")); ModuleCCSession mccs(ccspecfile("spec2.spec"), session, my_config_handler, - my_command_handler); + my_command_handler, true, false); EXPECT_TRUE(session.haveSubscription("Spec2", "*")); EXPECT_EQ(2, session.getMsgQueue()->size()); ConstElementPtr msg; std::string group, to; msg = session.getFirstMessage(group, to); - EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": 
\"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\" } ] }", msg->str()); + EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": \"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\", \"statistics\": [ { \"item_default\": \"1970-01-01T00:00:00Z\", \"item_description\": \"A dummy date time\", \"item_format\": \"date-time\", \"item_name\": \"dummy_time\", \"item_optional\": false, \"item_title\": \"Dummy Time\", \"item_type\": \"string\" } ] } ] }", msg->str()); EXPECT_EQ("ConfigManager", group); EXPECT_EQ("*", to); EXPECT_EQ(1, session.getMsgQueue()->size()); @@ -241,7 +248,7 @@ TEST_F(CCSessionTest, checkCommand) { EXPECT_FALSE(session.haveSubscription("Spec29", "*")); ModuleCCSession mccs(ccspecfile("spec29.spec"), session, my_config_handler, - my_command_handler); + my_command_handler, true, false); EXPECT_TRUE(session.haveSubscription("Spec29", "*")); EXPECT_EQ(2, session.getMsgQueue()->size()); @@ -318,7 +325,7 @@ TEST_F(CCSessionTest, checkCommand2) { session.getMessages()->add(createAnswer(0, el("{}"))); EXPECT_FALSE(session.haveSubscription("Spec29", "*")); ModuleCCSession mccs(ccspecfile("spec29.spec"), session, my_config_handler, - my_command_handler); + my_command_handler, true, false); EXPECT_TRUE(session.haveSubscription("Spec29", "*")); ConstElementPtr msg; std::string group, to; @@ -370,7 +377,8 @@ TEST_F(CCSessionTest, remoteConfig) { std::string module_name; int item1; - 
ModuleCCSession mccs(ccspecfile("spec1.spec"), session, NULL, NULL, false); + ModuleCCSession mccs(ccspecfile("spec1.spec"), session, NULL, NULL, + false, false); EXPECT_TRUE(session.haveSubscription("Spec1", "*")); // first simply connect, with no config values, and see we get @@ -526,7 +534,7 @@ TEST_F(CCSessionTest, ignoreRemoteConfigCommands) { EXPECT_FALSE(session.haveSubscription("Spec29", "*")); ModuleCCSession mccs(ccspecfile("spec29.spec"), session, my_config_handler, - my_command_handler, false); + my_command_handler, false, false); EXPECT_TRUE(session.haveSubscription("Spec29", "*")); EXPECT_EQ(2, session.getMsgQueue()->size()); @@ -578,14 +586,15 @@ TEST_F(CCSessionTest, initializationFail) { // Test it throws when we try to start it twice (once from the constructor) TEST_F(CCSessionTest, doubleStartImplicit) { - ModuleCCSession mccs(ccspecfile("spec29.spec"), session, NULL, NULL); + ModuleCCSession mccs(ccspecfile("spec29.spec"), session, NULL, NULL, + true, false); EXPECT_THROW(mccs.start(), CCSessionError); } // The same, but both starts are explicit TEST_F(CCSessionTest, doubleStartExplicit) { ModuleCCSession mccs(ccspecfile("spec29.spec"), session, NULL, NULL, - false); + false, false); mccs.start(); EXPECT_THROW(mccs.start(), CCSessionError); } @@ -593,7 +602,8 @@ TEST_F(CCSessionTest, doubleStartExplicit) { // Test we can request synchronous receive before we start the session, // and check there's the mechanism if we do it after TEST_F(CCSessionTest, delayedStart) { - ModuleCCSession mccs(ccspecfile("spec2.spec"), session, NULL, NULL, false); + ModuleCCSession mccs(ccspecfile("spec2.spec"), session, NULL, NULL, + false, false); session.getMessages()->add(createAnswer()); ConstElementPtr env, answer; EXPECT_NO_THROW(session.group_recvmsg(env, answer, false, 3)); @@ -620,7 +630,7 @@ TEST_F(CCSessionTest, loggingStartBadSpec) { // just give an empty config session.getMessages()->add(createAnswer(0, el("{}"))); EXPECT_THROW(new ModuleCCSession(ccspecfile("spec2.spec"), session, - NULL, NULL, true, true), ModuleSpecError); + NULL, NULL), ModuleSpecError); EXPECT_FALSE(session.haveSubscription("Logging", "*")); } @@ -629,7 +639,8 @@ TEST_F(CCSessionTest, loggingStartBadSpec) { // if we need to call addRemoteConfig(). // The correct cases are covered in remoteConfig test. TEST_F(CCSessionTest, doubleStartWithAddRemoteConfig) { - ModuleCCSession mccs(ccspecfile("spec29.spec"), session, NULL, NULL); + ModuleCCSession mccs(ccspecfile("spec29.spec"), session, NULL, NULL, + true, false); session.getMessages()->add(createAnswer(0, el("{}"))); EXPECT_THROW(mccs.addRemoteConfig(ccspecfile("spec2.spec")), FakeSession::DoubleRead); @@ -646,41 +657,44 @@ void doRelatedLoggersTest(const char* input, const char* expected) { TEST(LogConfigTest, relatedLoggersTest) { // make sure logger configs for 'other' programs are ignored, // and that * is substituted correctly - // The default root logger name is "bind10" + // We'll use a root logger name of "b10-test". 
+ isc::log::setRootLoggerName("b10-test"); + doRelatedLoggersTest("[{ \"name\": \"other_module\" }]", "[]"); doRelatedLoggersTest("[{ \"name\": \"other_module.somelib\" }]", "[]"); - doRelatedLoggersTest("[{ \"name\": \"bind10_other\" }]", + doRelatedLoggersTest("[{ \"name\": \"test_other\" }]", "[]"); - doRelatedLoggersTest("[{ \"name\": \"bind10_other.somelib\" }]", + doRelatedLoggersTest("[{ \"name\": \"test_other.somelib\" }]", "[]"); doRelatedLoggersTest("[ { \"name\": \"other_module\" }," - " { \"name\": \"bind10\" }]", - "[ { \"name\": \"bind10\" } ]"); - doRelatedLoggersTest("[ { \"name\": \"bind10\" }]", - "[ { \"name\": \"bind10\" } ]"); - doRelatedLoggersTest("[ { \"name\": \"bind10.somelib\" }]", - "[ { \"name\": \"bind10.somelib\" } ]"); + " { \"name\": \"test\" }]", + "[ { \"name\": \"b10-test\" } ]"); + doRelatedLoggersTest("[ { \"name\": \"test\" }]", + "[ { \"name\": \"b10-test\" } ]"); + doRelatedLoggersTest("[ { \"name\": \"test.somelib\" }]", + "[ { \"name\": \"b10-test.somelib\" } ]"); doRelatedLoggersTest("[ { \"name\": \"other_module.somelib\" }," - " { \"name\": \"bind10.somelib\" }]", - "[ { \"name\": \"bind10.somelib\" } ]"); + " { \"name\": \"test.somelib\" }]", + "[ { \"name\": \"b10-test.somelib\" } ]"); doRelatedLoggersTest("[ { \"name\": \"other_module.somelib\" }," - " { \"name\": \"bind10\" }," - " { \"name\": \"bind10.somelib\" }]", - "[ { \"name\": \"bind10\" }," - " { \"name\": \"bind10.somelib\" } ]"); + " { \"name\": \"test\" }," + " { \"name\": \"test.somelib\" }]", + "[ { \"name\": \"b10-test\" }," + " { \"name\": \"b10-test.somelib\" } ]"); doRelatedLoggersTest("[ { \"name\": \"*\" }]", - "[ { \"name\": \"bind10\" } ]"); + "[ { \"name\": \"b10-test\" } ]"); doRelatedLoggersTest("[ { \"name\": \"*.somelib\" }]", - "[ { \"name\": \"bind10.somelib\" } ]"); + "[ { \"name\": \"b10-test.somelib\" } ]"); doRelatedLoggersTest("[ { \"name\": \"*\", \"severity\": \"DEBUG\" }," - " { \"name\": \"bind10\", \"severity\": \"WARN\"}]", - "[ { \"name\": \"bind10\", \"severity\": \"WARN\"} ]"); + " { \"name\": \"test\", \"severity\": \"WARN\"}]", + "[ { \"name\": \"b10-test\", \"severity\": \"WARN\"} ]"); doRelatedLoggersTest("[ { \"name\": \"*\", \"severity\": \"DEBUG\" }," " { \"name\": \"some_module\", \"severity\": \"WARN\"}]", - "[ { \"name\": \"bind10\", \"severity\": \"DEBUG\"} ]"); - + "[ { \"name\": \"b10-test\", \"severity\": \"DEBUG\"} ]"); + doRelatedLoggersTest("[ { \"name\": \"b10-test\" }]", + "[]"); // make sure 'bad' things like '*foo.x' or '*lib' are ignored // (cfgmgr should have already caught it in the logconfig plugin // check, and is responsible for reporting the error) @@ -690,8 +704,8 @@ TEST(LogConfigTest, relatedLoggersTest) { "[ ]"); doRelatedLoggersTest("[ { \"name\": \"*foo\" }," " { \"name\": \"*foo.lib\" }," - " { \"name\": \"bind10\" } ]", - "[ { \"name\": \"bind10\" } ]"); + " { \"name\": \"test\" } ]", + "[ { \"name\": \"b10-test\" } ]"); } } diff --git a/src/lib/config/tests/module_spec_unittests.cc b/src/lib/config/tests/module_spec_unittests.cc index 1b43350f6a..b2ca7b45f4 100644 --- a/src/lib/config/tests/module_spec_unittests.cc +++ b/src/lib/config/tests/module_spec_unittests.cc @@ -1,4 +1,4 @@ -// Copyright (C) 2009 Internet Systems Consortium, Inc. ("ISC") +// Copyright (C) 2009, 2011 Internet Systems Consortium, Inc. 
("ISC") // // Permission to use, copy, modify, and/or distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -18,6 +18,8 @@ #include +#include + #include using namespace isc::data; @@ -57,6 +59,7 @@ TEST(ModuleSpec, ReadingSpecfiles) { dd = moduleSpecFromFile(specfile("spec2.spec")); EXPECT_EQ("[ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ]", dd.getCommandsSpec()->str()); + EXPECT_EQ("[ { \"item_default\": \"1970-01-01T00:00:00Z\", \"item_description\": \"A dummy date time\", \"item_format\": \"date-time\", \"item_name\": \"dummy_time\", \"item_optional\": false, \"item_title\": \"Dummy Time\", \"item_type\": \"string\" } ]", dd.getStatisticsSpec()->str()); EXPECT_EQ("Spec2", dd.getModuleName()); EXPECT_EQ("", dd.getModuleDescription()); @@ -64,6 +67,11 @@ TEST(ModuleSpec, ReadingSpecfiles) { EXPECT_EQ("Spec25", dd.getModuleName()); EXPECT_EQ("Just an empty module", dd.getModuleDescription()); EXPECT_THROW(moduleSpecFromFile(specfile("spec26.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec34.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec35.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec36.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec37.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec38.spec")), ModuleSpecError); std::ifstream file; file.open(specfile("spec1.spec").c_str()); @@ -71,6 +79,7 @@ TEST(ModuleSpec, ReadingSpecfiles) { EXPECT_EQ(dd.getFullSpec()->get("module_name") ->stringValue(), "Spec1"); EXPECT_TRUE(isNull(dd.getCommandsSpec())); + EXPECT_TRUE(isNull(dd.getStatisticsSpec())); std::ifstream file2; file2.open(specfile("spec8.spec").c_str()); @@ -114,6 +123,12 @@ TEST(ModuleSpec, SpecfileConfigData) { "commands is not a list of elements"); } +TEST(ModuleSpec, SpecfileStatistics) { + moduleSpecError("spec36.spec", "item_default not valid type of item_format"); + moduleSpecError("spec37.spec", "statistics is not a list of elements"); + moduleSpecError("spec38.spec", "item_default not valid type of item_format"); +} + TEST(ModuleSpec, SpecfileCommands) { moduleSpecError("spec17.spec", "command_name missing in { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\" }"); @@ -136,6 +151,17 @@ dataTest(const ModuleSpec& dd, const std::string& data_file_name) { return (dd.validateConfig(data)); } +bool +statisticsTest(const ModuleSpec& dd, const std::string& data_file_name) { + std::ifstream data_file; + + data_file.open(specfile(data_file_name).c_str()); + ConstElementPtr data = Element::fromJSON(data_file, data_file_name); + data_file.close(); + + return (dd.validateStatistics(data)); +} + bool dataTestWithErrors(const ModuleSpec& dd, const std::string& data_file_name, ElementPtr errors) @@ -149,6 +175,19 @@ dataTestWithErrors(const ModuleSpec& dd, const std::string& data_file_name, return (dd.validateConfig(data, true, errors)); } +bool +statisticsTestWithErrors(const ModuleSpec& dd, const std::string& data_file_name, + ElementPtr errors) +{ + std::ifstream 
data_file; + + data_file.open(specfile(data_file_name).c_str()); + ConstElementPtr data = Element::fromJSON(data_file, data_file_name); + data_file.close(); + + return (dd.validateStatistics(data, true, errors)); +} + TEST(ModuleSpec, DataValidation) { ModuleSpec dd = moduleSpecFromFile(specfile("spec22.spec")); @@ -175,6 +214,17 @@ TEST(ModuleSpec, DataValidation) { EXPECT_EQ("[ \"Unknown item value_does_not_exist\" ]", errors->str()); } +TEST(ModuleSpec, StatisticsValidation) { + ModuleSpec dd = moduleSpecFromFile(specfile("spec33.spec")); + + EXPECT_TRUE(statisticsTest(dd, "data33_1.data")); + EXPECT_FALSE(statisticsTest(dd, "data33_2.data")); + + ElementPtr errors = Element::createList(); + EXPECT_FALSE(statisticsTestWithErrors(dd, "data33_2.data", errors)); + EXPECT_EQ("[ \"Format mismatch\", \"Format mismatch\", \"Format mismatch\" ]", errors->str()); +} + TEST(ModuleSpec, CommandValidation) { ModuleSpec dd = moduleSpecFromFile(specfile("spec2.spec")); ConstElementPtr arg = Element::fromJSON("{}"); @@ -211,3 +261,118 @@ TEST(ModuleSpec, CommandValidation) { EXPECT_EQ(errors->get(0)->stringValue(), "Type mismatch"); } + +TEST(ModuleSpec, NamedSetValidation) { + ModuleSpec dd = moduleSpecFromFile(specfile("spec32.spec")); + + ElementPtr errors = Element::createList(); + EXPECT_TRUE(dataTestWithErrors(dd, "data32_1.data", errors)); + EXPECT_FALSE(dataTest(dd, "data32_2.data")); + EXPECT_FALSE(dataTest(dd, "data32_3.data")); +} + +TEST(ModuleSpec, CheckFormat) { + + const std::string json_begin = "{ \"module_spec\": { \"module_name\": \"Foo\", \"statistics\": [ { \"item_name\": \"dummy_time\", \"item_type\": \"string\", \"item_optional\": true, \"item_title\": \"Dummy Time\", \"item_description\": \"A dummy date time\""; + const std::string json_end = " } ] } }"; + std::string item_default; + std::string item_format; + std::vector specs; + ConstElementPtr el; + + specs.clear(); + item_default = "\"item_default\": \"2011-05-27T19:42:57Z\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"2011-05-27\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"19:42:57\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_format); + item_default = ""; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_format); + item_default = ""; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_format); + + item_default = "\"item_default\": \"a\""; + specs.push_back("," + item_default); + item_default = "\"item_default\": \"b\""; + specs.push_back("," + item_default); + item_default = "\"item_default\": \"c\""; + specs.push_back("," + item_default); + + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_format); + + specs.push_back(""); + + BOOST_FOREACH(std::string s, specs) { + el = Element::fromJSON(json_begin + s + json_end)->get("module_spec"); + EXPECT_NO_THROW(ModuleSpec(el, true)); + } + + specs.clear(); + item_default = "\"item_default\": \"2011-05-27T19:42:57Z\","; + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"2011-05-27\","; + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_default + item_format); + item_default = 
"\"item_default\": \"19:42:57Z\","; + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_default + item_format); + + item_default = "\"item_default\": \"2011-13-99T99:99:99Z\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"2011-13-99\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"99:99:99Z\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + item_default = "\"item_default\": \"1\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"1\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"1\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + item_default = "\"item_default\": \"\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + // wrong date-time-type format not ending with "Z" + item_default = "\"item_default\": \"2011-05-27T19:42:57\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + // wrong date-type format ending with "T" + item_default = "\"item_default\": \"2011-05-27T\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + // wrong time-type format ending with "Z" + item_default = "\"item_default\": \"19:42:57Z\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + BOOST_FOREACH(std::string s, specs) { + el = Element::fromJSON(json_begin + s + json_end)->get("module_spec"); + EXPECT_THROW(ModuleSpec(el, true), ModuleSpecError); + } +} diff --git a/src/lib/config/tests/testdata/Makefile.am b/src/lib/config/tests/testdata/Makefile.am index 57d1ed30ec..0d8b92ecb5 100644 --- a/src/lib/config/tests/testdata/Makefile.am +++ b/src/lib/config/tests/testdata/Makefile.am @@ -22,6 +22,11 @@ EXTRA_DIST += data22_7.data EXTRA_DIST += data22_8.data EXTRA_DIST += data22_9.data EXTRA_DIST += data22_10.data +EXTRA_DIST += data32_1.data +EXTRA_DIST += data32_2.data +EXTRA_DIST += data32_3.data +EXTRA_DIST += data33_1.data +EXTRA_DIST += data33_2.data EXTRA_DIST += spec1.spec EXTRA_DIST += spec2.spec EXTRA_DIST += spec3.spec @@ -53,3 +58,10 @@ EXTRA_DIST += spec28.spec EXTRA_DIST += spec29.spec EXTRA_DIST += spec30.spec EXTRA_DIST += spec31.spec +EXTRA_DIST += spec32.spec +EXTRA_DIST += spec33.spec +EXTRA_DIST += spec34.spec +EXTRA_DIST += spec35.spec +EXTRA_DIST += spec36.spec +EXTRA_DIST += spec37.spec +EXTRA_DIST += spec38.spec diff --git a/src/lib/config/tests/testdata/data32_1.data b/src/lib/config/tests/testdata/data32_1.data new file mode 100644 index 0000000000..5695b523a9 --- /dev/null +++ b/src/lib/config/tests/testdata/data32_1.data @@ -0,0 +1,3 @@ +{ + "named_set_item": { "foo": 1, "bar": 2 } +} diff --git a/src/lib/config/tests/testdata/data32_2.data b/src/lib/config/tests/testdata/data32_2.data new file 
mode 100644 index 0000000000..d5b9765ffb --- /dev/null +++ b/src/lib/config/tests/testdata/data32_2.data @@ -0,0 +1,3 @@ +{ + "named_set_item": { "foo": "wrongtype", "bar": 2 } +} diff --git a/src/lib/config/tests/testdata/data32_3.data b/src/lib/config/tests/testdata/data32_3.data new file mode 100644 index 0000000000..85f32feed6 --- /dev/null +++ b/src/lib/config/tests/testdata/data32_3.data @@ -0,0 +1,3 @@ +{ + "named_set_item": [] +} diff --git a/src/lib/config/tests/testdata/data33_1.data b/src/lib/config/tests/testdata/data33_1.data new file mode 100644 index 0000000000..429852c974 --- /dev/null +++ b/src/lib/config/tests/testdata/data33_1.data @@ -0,0 +1,7 @@ +{ + "dummy_str": "Dummy String", + "dummy_int": 118, + "dummy_datetime": "2011-05-27T19:42:57Z", + "dummy_date": "2011-05-27", + "dummy_time": "19:42:57" +} diff --git a/src/lib/config/tests/testdata/data33_2.data b/src/lib/config/tests/testdata/data33_2.data new file mode 100644 index 0000000000..eb0615c1c9 --- /dev/null +++ b/src/lib/config/tests/testdata/data33_2.data @@ -0,0 +1,7 @@ +{ + "dummy_str": "Dummy String", + "dummy_int": 118, + "dummy_datetime": "xxxx", + "dummy_date": "xxxx", + "dummy_time": "xxxx" +} diff --git a/src/lib/config/tests/testdata/spec2.spec b/src/lib/config/tests/testdata/spec2.spec index 59b8ebcbbb..43524224a2 100644 --- a/src/lib/config/tests/testdata/spec2.spec +++ b/src/lib/config/tests/testdata/spec2.spec @@ -66,6 +66,17 @@ "command_description": "Shut down BIND 10", "command_args": [] } + ], + "statistics": [ + { + "item_name": "dummy_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Dummy Time", + "item_description": "A dummy date time", + "item_format": "date-time" + } ] } } diff --git a/src/lib/config/tests/testdata/spec32.spec b/src/lib/config/tests/testdata/spec32.spec new file mode 100644 index 0000000000..68e774e00a --- /dev/null +++ b/src/lib/config/tests/testdata/spec32.spec @@ -0,0 +1,19 @@ +{ + "module_spec": { + "module_name": "Spec32", + "config_data": [ + { "item_name": "named_set_item", + "item_type": "named_set", + "item_optional": false, + "item_default": { "a": 1, "b": 2 }, + "named_set_item_spec": { + "item_name": "named_set_element", + "item_type": "integer", + "item_optional": false, + "item_default": 3 + } + } + ] + } +} + diff --git a/src/lib/config/tests/testdata/spec33.spec b/src/lib/config/tests/testdata/spec33.spec new file mode 100644 index 0000000000..3002488b72 --- /dev/null +++ b/src/lib/config/tests/testdata/spec33.spec @@ -0,0 +1,50 @@ +{ + "module_spec": { + "module_name": "Spec33", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_title": "Dummy String", + "item_description": "A dummy string" + }, + { + "item_name": "dummy_int", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "Dummy Integer", + "item_description": "A dummy integer" + }, + { + "item_name": "dummy_datetime", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Dummy DateTime", + "item_description": "A dummy datetime", + "item_format": "date-time" + }, + { + "item_name": "dummy_date", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01", + "item_title": "Dummy Date", + "item_description": "A dummy date", + "item_format": "date" + }, + { + "item_name": "dummy_time", + "item_type": "string", + "item_optional": false, + 
"item_default": "00:00:00", + "item_title": "Dummy Time", + "item_description": "A dummy time", + "item_format": "time" + } + ] + } +} diff --git a/src/lib/config/tests/testdata/spec34.spec b/src/lib/config/tests/testdata/spec34.spec new file mode 100644 index 0000000000..dd1f3ca952 --- /dev/null +++ b/src/lib/config/tests/testdata/spec34.spec @@ -0,0 +1,14 @@ +{ + "module_spec": { + "module_name": "Spec34", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_description": "A dummy string" + } + ] + } +} diff --git a/src/lib/config/tests/testdata/spec35.spec b/src/lib/config/tests/testdata/spec35.spec new file mode 100644 index 0000000000..86aaf145a0 --- /dev/null +++ b/src/lib/config/tests/testdata/spec35.spec @@ -0,0 +1,15 @@ +{ + "module_spec": { + "module_name": "Spec35", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_title": "Dummy String" + } + ] + } +} + diff --git a/src/lib/config/tests/testdata/spec36.spec b/src/lib/config/tests/testdata/spec36.spec new file mode 100644 index 0000000000..fb9ce26084 --- /dev/null +++ b/src/lib/config/tests/testdata/spec36.spec @@ -0,0 +1,17 @@ +{ + "module_spec": { + "module_name": "Spec36", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_title": "Dummy String", + "item_description": "A dummy string", + "item_format": "dummy" + } + ] + } +} + diff --git a/src/lib/config/tests/testdata/spec37.spec b/src/lib/config/tests/testdata/spec37.spec new file mode 100644 index 0000000000..bc444d107c --- /dev/null +++ b/src/lib/config/tests/testdata/spec37.spec @@ -0,0 +1,7 @@ +{ + "module_spec": { + "module_name": "Spec37", + "statistics": 8 + } +} + diff --git a/src/lib/config/tests/testdata/spec38.spec b/src/lib/config/tests/testdata/spec38.spec new file mode 100644 index 0000000000..1892e887fb --- /dev/null +++ b/src/lib/config/tests/testdata/spec38.spec @@ -0,0 +1,17 @@ +{ + "module_spec": { + "module_name": "Spec38", + "statistics": [ + { + "item_name": "dummy_datetime", + "item_type": "string", + "item_optional": false, + "item_default": "11", + "item_title": "Dummy DateTime", + "item_description": "A dummy datetime", + "item_format": "date-time" + } + ] + } +} + diff --git a/src/lib/datasrc/Makefile.am b/src/lib/datasrc/Makefile.am index 457d5b069b..5e193d2afc 100644 --- a/src/lib/datasrc/Makefile.am +++ b/src/lib/datasrc/Makefile.am @@ -9,7 +9,7 @@ AM_CXXFLAGS = $(B10_CXXFLAGS) CLEANFILES = *.gcno *.gcda datasrc_messages.h datasrc_messages.cc -lib_LTLIBRARIES = libdatasrc.la +lib_LTLIBRARIES = libdatasrc.la sqlite3_ds.la memory_ds.la libdatasrc_la_SOURCES = data_source.h data_source.cc libdatasrc_la_SOURCES += static_datasrc.h static_datasrc.cc libdatasrc_la_SOURCES += sqlite3_datasrc.h sqlite3_datasrc.cc @@ -17,16 +17,26 @@ libdatasrc_la_SOURCES += query.h query.cc libdatasrc_la_SOURCES += cache.h cache.cc libdatasrc_la_SOURCES += rbtree.h libdatasrc_la_SOURCES += zonetable.h zonetable.cc -libdatasrc_la_SOURCES += memory_datasrc.h memory_datasrc.cc libdatasrc_la_SOURCES += zone.h libdatasrc_la_SOURCES += result.h libdatasrc_la_SOURCES += logger.h logger.cc +libdatasrc_la_SOURCES += client.h iterator.h +libdatasrc_la_SOURCES += database.h database.cc +#libdatasrc_la_SOURCES += sqlite3_accessor.h sqlite3_accessor.cc +libdatasrc_la_SOURCES += factory.h factory.cc nodist_libdatasrc_la_SOURCES = 
datasrc_messages.h datasrc_messages.cc +sqlite3_ds_la_SOURCES = sqlite3_accessor.h sqlite3_accessor.cc +sqlite3_ds_la_LDFLAGS = -module + +memory_ds_la_SOURCES = memory_datasrc.h memory_datasrc.cc +memory_ds_la_LDFLAGS = -module + libdatasrc_la_LIBADD = $(top_builddir)/src/lib/exceptions/libexceptions.la libdatasrc_la_LIBADD += $(top_builddir)/src/lib/dns/libdns++.la libdatasrc_la_LIBADD += $(top_builddir)/src/lib/log/liblog.la libdatasrc_la_LIBADD += $(top_builddir)/src/lib/cc/libcc.la +libdatasrc_la_LIBADD += $(SQLITE_LIBS) BUILT_SOURCES = datasrc_messages.h datasrc_messages.cc datasrc_messages.h datasrc_messages.cc: Makefile datasrc_messages.mes diff --git a/src/lib/datasrc/cache.cc b/src/lib/datasrc/cache.cc index 9082a6b4ce..d88e649266 100644 --- a/src/lib/datasrc/cache.cc +++ b/src/lib/datasrc/cache.cc @@ -232,7 +232,8 @@ HotCacheImpl::insert(const CacheNodePtr node) { if (iter != map_.end()) { CacheNodePtr old = iter->second; if (old && old->isValid()) { - LOG_DEBUG(logger, DBG_TRACE_DATA, DATASRC_CACHE_OLD_FOUND); + LOG_DEBUG(logger, DBG_TRACE_DATA, DATASRC_CACHE_OLD_FOUND) + .arg(node->getNodeName()); remove(old); } } diff --git a/src/lib/datasrc/client.h b/src/lib/datasrc/client.h new file mode 100644 index 0000000000..40b7a3f307 --- /dev/null +++ b/src/lib/datasrc/client.h @@ -0,0 +1,292 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __DATA_SOURCE_CLIENT_H +#define __DATA_SOURCE_CLIENT_H 1 + +#include +#include + +#include + +#include + +/// \file +/// Datasource clients +/// +/// The data source client API is specified in client.h, and provides the +/// functionality to query and modify data in the data sources. There are +/// multiple datasource implementations, and by subclassing DataSourceClient or +/// DatabaseClient, more can be added. +/// +/// All datasources are implemented as loadable modules, with a name of the +/// form "_ds.so". This has been chosen intentionally, to minimize +/// confusion and potential mistakes. +/// +/// In order to use a datasource client backend, the class +/// DataSourceClientContainer is provided in factory.h; this will load the +/// library, set up the instance, and clean everything up once it is destroyed. +/// +/// Access to the actual instance is provided with the getInstance() method +/// in DataSourceClientContainer +/// +/// \note Depending on actual usage, we might consider making the container +/// a transparent abstraction layer, so it can be used as a DataSourceClient +/// directly. This has some other implications though so for now the only access +/// provided is through getInstance()). +/// +/// For datasource backends, we use a dynamically loaded library system (with +/// dlopen()). 
This library must contain the following things; +/// - A subclass of DataSourceClient or DatabaseClient (which itself is a +/// subclass of DataSourceClient) +/// - A creator function for an instance of that subclass, of the form: +/// \code +/// extern "C" DataSourceClient* createInstance(isc::data::ConstElementPtr cfg); +/// \endcode +/// - A destructor for said instance, of the form: +/// \code +/// extern "C" void destroyInstance(isc::data::DataSourceClient* instance); +/// \endcode +/// +/// See the documentation for the \link DataSourceClient \endlink class for +/// more information on implementing subclasses of it. +/// + +namespace isc { +namespace datasrc { + +// The iterator.h is not included on purpose, most application won't need it +class ZoneIterator; +typedef boost::shared_ptr ZoneIteratorPtr; + +/// \brief The base class of data source clients. +/// +/// This is an abstract base class that defines the common interface for +/// various types of data source clients. A data source client is a top level +/// access point to a data source, allowing various operations on the data +/// source such as lookups, traversing or updates. The client class itself +/// has limited focus and delegates the responsibility for these specific +/// operations to other classes; in general methods of this class act as +/// factories of these other classes. +/// +/// See \link datasrc/client.h datasrc/client.h \endlink for more information +/// on adding datasource implementations. +/// +/// The following derived classes are currently (expected to be) provided: +/// - \c InMemoryClient: A client of a conceptual data source that stores +/// all necessary data in memory for faster lookups +/// - \c DatabaseClient: A client that uses a real database backend (such as +/// an SQL database). It would internally hold a connection to the underlying +/// database system. +/// +/// \note It is intentional that while the term these derived classes don't +/// contain "DataSource" unlike their base class. It's also noteworthy +/// that the naming of the base class is somewhat redundant because the +/// namespace \c datasrc would indicate that it's related to a data source. +/// The redundant naming comes from the observation that namespaces are +/// often omitted with \c using directives, in which case "Client" +/// would be too generic. On the other hand, concrete derived classes are +/// generally not expected to be referenced directly from other modules and +/// applications, so we'll give them more concise names such as InMemoryClient. +/// +/// A single \c DataSourceClient object is expected to handle only a single +/// RR class even if the underlying data source contains records for multiple +/// RR classes. Likewise, (when we support views) a \c DataSourceClient +/// object is expected to handle only a single view. +/// +/// If the application uses multiple threads, each thread will need to +/// create and use a separate DataSourceClient. This is because some +/// database backend doesn't allow multiple threads to share the same +/// connection to the database. +/// +/// \note For a client using an in memory backend, this may result in +/// having a multiple copies of the same data in memory, increasing the +/// memory footprint substantially. Depending on how to support multiple +/// CPU cores for concurrent lookups on the same single data source (which +/// is not fully fixed yet, and for which multiple threads may be used), +/// this design may have to be revisited. 
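The file comment above describes how a backend is packaged as a loadable module (the Makefile.am change earlier builds sqlite3_ds.la and memory_ds.la) and accessed through DataSourceClientContainer::getInstance(), while findZone() is described further below. A rough usage sketch, assuming the container takes a backend type name plus a configuration element (the "sqlite3" type string and the configuration keys are illustrative assumptions, not taken from this header):

#include <datasrc/factory.h>   // assumed location of DataSourceClientContainer
#include <datasrc/client.h>
#include <dns/name.h>
#include <cc/data.h>
#include <iostream>

int main() {
    using namespace isc::datasrc;

    // Hypothetical backend configuration.
    isc::data::ConstElementPtr config = isc::data::Element::fromJSON(
        "{ \"database_file\": \"/tmp/example.sqlite3\" }");

    // Loads the sqlite3 backend module, creates the client instance and
    // destroys it again when the container goes out of scope.
    DataSourceClientContainer container("sqlite3", config);
    DataSourceClient& client = container.getInstance();

    // Ask for the best matching zone for a name (see findZone() below).
    DataSourceClient::FindResult found =
        client.findZone(isc::dns::Name("www.example.org"));
    if (found.code == result::NOTFOUND) {
        std::cout << "no matching zone" << std::endl;
    } else {
        std::cout << "got a ZoneFinder for the enclosing zone" << std::endl;
    }
    return (0);
}
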
+/// +/// This class (and therefore its derived classes) are not copyable. +/// This is because the derived classes would generally contain attributes +/// that are not easy to copy (such as a large size of in memory data or a +/// network connection to a database server). In order to avoid a surprising +/// disruption with a naive copy it's prohibited explicitly. For the expected +/// usage of the client classes the restriction should be acceptable. +/// +/// \todo This class is still not complete. It will need more factory methods, +/// e.g. for (re)loading a zone. +class DataSourceClient : boost::noncopyable { +public: + /// \brief A helper structure to represent the search result of + /// \c find(). + /// + /// This is a straightforward pair of the result code and a share pointer + /// to the found zone to represent the result of \c find(). + /// We use this in order to avoid overloading the return value for both + /// the result code ("success" or "not found") and the found object, + /// i.e., avoid using \c NULL to mean "not found", etc. + /// + /// This is a simple value class with no internal state, so for + /// convenience we allow the applications to refer to the members + /// directly. + /// + /// See the description of \c find() for the semantics of the member + /// variables. + struct FindResult { + FindResult(result::Result param_code, + const ZoneFinderPtr param_zone_finder) : + code(param_code), zone_finder(param_zone_finder) + {} + const result::Result code; + const ZoneFinderPtr zone_finder; + }; + + /// + /// \name Constructors and Destructor. + /// +protected: + /// Default constructor. + /// + /// This is intentionally defined as protected as this base class + /// should never be instantiated directly. + /// + /// The constructor of a concrete derived class may throw an exception. + /// This interface does not specify which exceptions can happen (at least + /// at this moment), and the caller should expect any type of exception + /// and react accordingly. + DataSourceClient() {} + +public: + /// The destructor. + virtual ~DataSourceClient() {} + //@} + + /// Returns a \c ZoneFinder for a zone that best matches the given name. + /// + /// A concrete derived version of this method gets access to its backend + /// data source to search for a zone whose origin gives the longest match + /// against \c name. It returns the search result in the form of a + /// \c FindResult object as follows: + /// - \c code: The result code of the operation. + /// - \c result::SUCCESS: A zone that gives an exact match is found + /// - \c result::PARTIALMATCH: A zone whose origin is a + /// super domain of \c name is found (but there is no exact match) + /// - \c result::NOTFOUND: For all other cases. + /// - \c zone_finder: Pointer to a \c ZoneFinder object for the found zone + /// if one is found; otherwise \c NULL. + /// + /// A specific derived version of this method may throw an exception. + /// This interface does not specify which exceptions can happen (at least + /// at this moment), and the caller should expect any type of exception + /// and react accordingly. + /// + /// \param name A domain name for which the search is performed. + /// \return A \c FindResult object enclosing the search result (see above). + virtual FindResult findZone(const isc::dns::Name& name) const = 0; + + /// \brief Returns an iterator to the given zone + /// + /// This allows for traversing the whole zone. The returned object can + /// provide the RRsets one by one. 
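+    ///
+    /// For illustration only, given a \c DataSourceClient instance
+    /// \c client (a hypothetical variable), an application might locate a
+    /// zone with \c findZone() and then walk it with this iterator roughly
+    /// as follows (error handling omitted):
+    /// \code
+    ///     DataSourceClient::FindResult found_zone =
+    ///         client.findZone(isc::dns::Name("www.example.org"));
+    ///     if (found_zone.code == result::SUCCESS ||
+    ///         found_zone.code == result::PARTIALMATCH) {
+    ///         ZoneIteratorPtr it =
+    ///             client.getIterator(found_zone.zone_finder->getOrigin());
+    ///         for (isc::dns::ConstRRsetPtr rrset = it->getNextRRset();
+    ///              rrset; rrset = it->getNextRRset()) {
+    ///             // process one RRset of the zone
+    ///         }
+    ///     }
+    /// \endcode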
+ /// + /// This throws DataSourceError when the zone does not exist in the + /// datasource. + /// + /// The default implementation throws isc::NotImplemented. This allows + /// for easy and fast deployment of minimal custom data sources, where + /// the user/implementator doesn't have to care about anything else but + /// the actual queries. Also, in some cases, it isn't possible to traverse + /// the zone from logic point of view (eg. dynamically generated zone + /// data). + /// + /// It is not fixed if a concrete implementation of this method can throw + /// anything else. + /// + /// \param name The name of zone apex to be traversed. It doesn't do + /// nearest match as findZone. + /// \return Pointer to the iterator. + virtual ZoneIteratorPtr getIterator(const isc::dns::Name& name) const { + // This is here to both document the parameter in doxygen (therefore it + // needs a name) and avoid unused parameter warning. + static_cast(name); + + isc_throw(isc::NotImplemented, + "Data source doesn't support iteration"); + } + + /// Return an updater to make updates to a specific zone. + /// + /// The RR class of the zone is the one that the client is expected to + /// handle (see the detailed description of this class). + /// + /// If the specified zone is not found via the client, a NULL pointer + /// will be returned; in other words a completely new zone cannot be + /// created using an updater. It must be created beforehand (even if + /// it's an empty placeholder) in a way specific to the underlying data + /// source. + /// + /// Conceptually, the updater will trigger a separate transaction for + /// subsequent updates to the zone within the context of the updater + /// (the actual implementation of the "transaction" may vary for the + /// specific underlying data source). Until \c commit() is performed + /// on the updater, the intermediate updates won't affect the results + /// of other methods (and the result of the object's methods created + /// by other factory methods). Likewise, if the updater is destructed + /// without performing \c commit(), the intermediate updates will be + /// effectively canceled and will never affect other methods. + /// + /// If the underlying data source allows concurrent updates, this method + /// can be called multiple times while the previously returned updater(s) + /// are still active. In this case each updater triggers a different + /// "transaction". Normally it would be for different zones for such a + /// case as handling multiple incoming AXFR streams concurrently, but + /// this interface does not even prohibit an attempt of getting more than + /// one updater for the same zone, as long as the underlying data source + /// allows such an operation (and any conflict resolution is left to the + /// specific derived class implementation). + /// + /// If \c replace is true, any existing RRs of the zone will be + /// deleted on successful completion of updates (after \c commit() on + /// the updater); if it's false, the existing RRs will be + /// intact unless explicitly deleted by \c deleteRRset() on the updater. + /// + /// A data source can be "read only" or can prohibit partial updates. + /// In such cases this method will result in an \c isc::NotImplemented + /// exception unconditionally or when \c replace is false). + /// + /// \note To avoid throwing the exception accidentally with a lazy + /// implementation, we still keep this method pure virtual without + /// an implementation. 
All derived classes must explicitly define this + /// method, even if it simply throws the NotImplemented exception. + /// + /// \exception NotImplemented The underlying data source does not support + /// updates. + /// \exception DataSourceError Internal error in the underlying data + /// source. + /// \exception std::bad_alloc Resource allocation failure. + /// + /// \param name The zone name to be updated + /// \param replace Whether to delete existing RRs before making updates + /// + /// \return A pointer to the updater; it will be NULL if the specified + /// zone isn't found. + virtual ZoneUpdaterPtr getUpdater(const isc::dns::Name& name, + bool replace) const = 0; +}; +} +} +#endif // DATA_SOURCE_CLIENT_H +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/datasrc/data_source.cc b/src/lib/datasrc/data_source.cc index 4e1fcde202..94dec89352 100644 --- a/src/lib/datasrc/data_source.cc +++ b/src/lib/datasrc/data_source.cc @@ -903,7 +903,7 @@ tryWildcard(Query& q, QueryTaskPtr task, ZoneInfo& zoneinfo, bool& found) { result = proveNX(q, task, zoneinfo, true); if (result != DataSrc::SUCCESS) { m.setRcode(Rcode::SERVFAIL()); - logger.error(DATASRC_QUERY_WILDCARD_PROVENX_FAIL). + logger.error(DATASRC_QUERY_WILDCARD_PROVE_NX_FAIL). arg(task->qname).arg(result); return (DataSrc::ERROR); } @@ -945,7 +945,7 @@ tryWildcard(Query& q, QueryTaskPtr task, ZoneInfo& zoneinfo, bool& found) { void DataSrc::doQuery(Query& q) { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_QUERY_PROCESS).arg(q.qname()). - arg(q.qclass()); + arg(q.qtype()).arg(q.qclass()); Message& m = q.message(); vector additional; @@ -1162,7 +1162,7 @@ DataSrc::doQuery(Query& q) { result = proveNX(q, task, zoneinfo, false); if (result != DataSrc::SUCCESS) { m.setRcode(Rcode::SERVFAIL()); - logger.error(DATASRC_QUERY_PROVENX_FAIL).arg(task->qname); + logger.error(DATASRC_QUERY_PROVE_NX_FAIL).arg(task->qname); return; } } diff --git a/src/lib/datasrc/data_source.h b/src/lib/datasrc/data_source.h index ff695da6e8..a7a15a9242 100644 --- a/src/lib/datasrc/data_source.h +++ b/src/lib/datasrc/data_source.h @@ -184,9 +184,9 @@ public: void setClass(isc::dns::RRClass& c) { rrclass = c; } void setClass(const isc::dns::RRClass& c) { rrclass = c; } - Result init() { return (NOT_IMPLEMENTED); } - Result init(isc::data::ConstElementPtr config); - Result close() { return (NOT_IMPLEMENTED); } + virtual Result init() { return (NOT_IMPLEMENTED); } + virtual Result init(isc::data::ConstElementPtr config); + virtual Result close() { return (NOT_IMPLEMENTED); } virtual Result findRRset(const isc::dns::Name& qname, const isc::dns::RRClass& qclass, @@ -351,7 +351,7 @@ public: /// \brief Returns the best enclosing zone name found for the given // name and RR class so far. - /// + /// /// \return A pointer to the zone apex \c Name, NULL if none found yet. /// /// This method never throws an exception. @@ -413,6 +413,6 @@ private: #endif -// Local Variables: +// Local Variables: // mode: c++ -// End: +// End: diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc new file mode 100644 index 0000000000..e476297885 --- /dev/null +++ b/src/lib/datasrc/database.cc @@ -0,0 +1,960 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. 
+// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include + +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include + +using namespace isc::dns; +using namespace std; +using boost::shared_ptr; +using namespace isc::dns::rdata; + +namespace isc { +namespace datasrc { + +DatabaseClient::DatabaseClient(RRClass rrclass, + boost::shared_ptr + accessor) : + rrclass_(rrclass), accessor_(accessor) +{ + if (!accessor_) { + isc_throw(isc::InvalidParameter, + "No database provided to DatabaseClient"); + } +} + +DataSourceClient::FindResult +DatabaseClient::findZone(const Name& name) const { + std::pair zone(accessor_->getZone(name.toText())); + // Try exact first + if (zone.first) { + return (FindResult(result::SUCCESS, + ZoneFinderPtr(new Finder(accessor_, + zone.second, name)))); + } + // Then super domains + // Start from 1, as 0 is covered above + for (size_t i(1); i < name.getLabelCount(); ++i) { + isc::dns::Name superdomain(name.split(i)); + zone = accessor_->getZone(superdomain.toText()); + if (zone.first) { + return (FindResult(result::PARTIALMATCH, + ZoneFinderPtr(new Finder(accessor_, + zone.second, + superdomain)))); + } + } + // No, really nothing + return (FindResult(result::NOTFOUND, ZoneFinderPtr())); +} + +DatabaseClient::Finder::Finder(boost::shared_ptr accessor, + int zone_id, const isc::dns::Name& origin) : + accessor_(accessor), + zone_id_(zone_id), + origin_(origin) +{ } + +namespace { +// Adds the given Rdata to the given RRset +// If the rrset is an empty pointer, a new one is +// created with the given name, class, type and ttl +// The type is checked if the rrset exists, but the +// name is not. +// +// Then adds the given rdata to the set +// +// Raises a DataSourceError if the type does not +// match, or if the given rdata string does not +// parse correctly for the given type and class +// +// The DatabaseAccessor is passed to print the +// database name in the log message if the TTL is +// modified +void addOrCreate(isc::dns::RRsetPtr& rrset, + const isc::dns::Name& name, + const isc::dns::RRClass& cls, + const isc::dns::RRType& type, + const isc::dns::RRTTL& ttl, + const std::string& rdata_str, + const DatabaseAccessor& db + ) +{ + if (!rrset) { + rrset.reset(new isc::dns::RRset(name, cls, type, ttl)); + } else { + // This is a check to make sure find() is not messing things up + assert(type == rrset->getType()); + if (ttl != rrset->getTTL()) { + if (ttl < rrset->getTTL()) { + rrset->setTTL(ttl); + } + logger.warn(DATASRC_DATABASE_FIND_TTL_MISMATCH) + .arg(db.getDBName()).arg(name).arg(cls) + .arg(type).arg(rrset->getTTL()); + } + } + try { + rrset->addRdata(isc::dns::rdata::createRdata(type, cls, rdata_str)); + } catch (const isc::dns::rdata::InvalidRdataText& ivrt) { + // at this point, rrset may have been initialised for no reason, + // and won't be used. But the caller would drop the shared_ptr + // on such an error anyway, so we don't care. 
+ isc_throw(DataSourceError, + "bad rdata in database for " << name << " " + << type << ": " << ivrt.what()); + } +} + +// This class keeps a short-lived store of RRSIG records encountered +// during a call to find(). If the backend happens to return signatures +// before the actual data, we might not know which signatures we will need +// So if they may be relevant, we store the in this class. +// +// (If this class seems useful in other places, we might want to move +// it to util. That would also provide an opportunity to add unit tests) +class RRsigStore { +public: + // Adds the given signature Rdata to the store + // The signature rdata MUST be of the RRSIG rdata type + // (the caller must make sure of this). + // NOTE: if we move this class to a public namespace, + // we should add a type_covered argument, so as not + // to have to do this cast here. + void addSig(isc::dns::rdata::RdataPtr sig_rdata) { + const isc::dns::RRType& type_covered = + static_cast( + sig_rdata.get())->typeCovered(); + sigs[type_covered].push_back(sig_rdata); + } + + // If the store contains signatures for the type of the given + // rrset, they are appended to it. + void appendSignatures(isc::dns::RRsetPtr& rrset) const { + std::map >::const_iterator + found = sigs.find(rrset->getType()); + if (found != sigs.end()) { + BOOST_FOREACH(isc::dns::rdata::RdataPtr sig, found->second) { + rrset->addRRsig(sig); + } + } + } + +private: + std::map > sigs; +}; +} + +DatabaseClient::Finder::FoundRRsets +DatabaseClient::Finder::getRRsets(const string& name, const WantedTypes& types, + bool check_ns, const string* construct_name) +{ + RRsigStore sig_store; + bool records_found = false; + std::map result; + + // Request the context + DatabaseAccessor::IteratorContextPtr + context(accessor_->getRecords(name, zone_id_)); + // It must not return NULL, that's a bug of the implementation + if (!context) { + isc_throw(isc::Unexpected, "Iterator context null at " + name); + } + + std::string columns[DatabaseAccessor::COLUMN_COUNT]; + if (construct_name == NULL) { + construct_name = &name; + } + + const Name construct_name_object(*construct_name); + + bool seen_cname(false); + bool seen_ds(false); + bool seen_other(false); + bool seen_ns(false); + + while (context->getNext(columns)) { + // The domain is not empty + records_found = true; + + try { + const RRType cur_type(columns[DatabaseAccessor::TYPE_COLUMN]); + + if (cur_type == RRType::RRSIG()) { + // If we get signatures before we get the actual data, we + // can't know which ones to keep and which to drop... + // So we keep a separate store of any signature that may be + // relevant and add them to the final RRset when we are + // done. + // A possible optimization here is to not store them for + // types we are certain we don't need + sig_store.addSig(rdata::createRdata(cur_type, getClass(), + columns[DatabaseAccessor::RDATA_COLUMN])); + } + + if (types.find(cur_type) != types.end()) { + // This type is requested, so put it into result + const RRTTL cur_ttl(columns[DatabaseAccessor::TTL_COLUMN]); + // Ths sigtype column was an optimization for finding the + // relevant RRSIG RRs for a lookup. Currently this column is + // not used in this revised datasource implementation. We + // should either start using it again, or remove it from use + // completely (i.e. also remove it from the schema and the + // backend implementation). + // Note that because we don't use it now, we also won't notice + // it if the value is wrong (i.e. 
if the sigtype column + // contains an rrtype that is different from the actual value + // of the 'type covered' field in the RRSIG Rdata). + //cur_sigtype(columns[SIGTYPE_COLUMN]); + addOrCreate(result[cur_type], construct_name_object, + getClass(), cur_type, cur_ttl, + columns[DatabaseAccessor::RDATA_COLUMN], + *accessor_); + } + + if (cur_type == RRType::CNAME()) { + seen_cname = true; + } else if (cur_type == RRType::NS()) { + seen_ns = true; + } else if (cur_type == RRType::DS()) { + seen_ds = true; + } else if (cur_type != RRType::RRSIG() && + cur_type != RRType::NSEC3() && + cur_type != RRType::NSEC()) { + // NSEC and RRSIG can coexist with anything, otherwise + // we've seen something that can't live together with potential + // CNAME or NS + // + // NSEC3 lives in separate namespace from everything, therefore + // we just ignore it here for these checks as well. + seen_other = true; + } + } catch (const InvalidRRType&) { + isc_throw(DataSourceError, "Invalid RRType in database for " << + name << ": " << columns[DatabaseAccessor:: + TYPE_COLUMN]); + } catch (const InvalidRRTTL&) { + isc_throw(DataSourceError, "Invalid TTL in database for " << + name << ": " << columns[DatabaseAccessor:: + TTL_COLUMN]); + } catch (const rdata::InvalidRdataText&) { + isc_throw(DataSourceError, "Invalid rdata in database for " << + name << ": " << columns[DatabaseAccessor:: + RDATA_COLUMN]); + } + } + if (seen_cname && (seen_other || seen_ns || seen_ds)) { + isc_throw(DataSourceError, "CNAME shares domain " << name << + " with something else"); + } + if (check_ns && seen_ns && seen_other) { + isc_throw(DataSourceError, "NS shares domain " << name << + " with something else"); + } + // Add signatures to all found RRsets + for (std::map::iterator i(result.begin()); + i != result.end(); ++ i) { + sig_store.appendSignatures(i->second); + } + + return (FoundRRsets(records_found, result)); +} + +bool +DatabaseClient::Finder::hasSubdomains(const std::string& name) { + // Request the context + DatabaseAccessor::IteratorContextPtr + context(accessor_->getRecords(name, zone_id_, true)); + // It must not return NULL, that's a bug of the implementation + if (!context) { + isc_throw(isc::Unexpected, "Iterator context null at " + name); + } + + std::string columns[DatabaseAccessor::COLUMN_COUNT]; + return (context->getNext(columns)); +} + +// Some manipulation with RRType sets +namespace { + +// Bunch of functions to construct specific sets of RRTypes we will +// ask from it. +typedef std::set WantedTypes; + +const WantedTypes& +NSEC_TYPES() { + static bool initialized(false); + static WantedTypes result; + + if (!initialized) { + result.insert(RRType::NSEC()); + initialized = true; + } + return (result); +} + +const WantedTypes& +DELEGATION_TYPES() { + static bool initialized(false); + static WantedTypes result; + + if (!initialized) { + result.insert(RRType::DNAME()); + result.insert(RRType::NS()); + initialized = true; + } + return (result); +} + +const WantedTypes& +FINAL_TYPES() { + static bool initialized(false); + static WantedTypes result; + + if (!initialized) { + result.insert(RRType::CNAME()); + result.insert(RRType::NS()); + result.insert(RRType::NSEC()); + initialized = true; + } + return (result); +} + +} + +RRsetPtr +DatabaseClient::Finder::findNSECCover(const Name& name) { + try { + // Which one should contain the NSEC record? 
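+        // In DNSSEC canonical ordering, the NSEC proving the non-existence
+        // of 'name' is owned by the name that immediately precedes it, so
+        // ask the backend for that previous name first.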
+ const Name coverName(findPreviousName(name)); + // Get the record and copy it out + const FoundRRsets found = getRRsets(coverName.toText(), NSEC_TYPES(), + coverName != getOrigin()); + const FoundIterator + nci(found.second.find(RRType::NSEC())); + if (nci != found.second.end()) { + return (nci->second); + } else { + // The previous doesn't contain NSEC. + // Badly signed zone or a bug? + + // FIXME: Currently, if the zone is not signed, we could get + // here. In that case we can't really throw, but for now, we can't + // recognize it. So we don't throw at all, enable it once + // we have a is_signed flag or something. +#if 0 + isc_throw(DataSourceError, "No NSEC in " + + coverName.toText() + ", but it was " + "returned as previous - " + "accessor error? Badly signed zone?"); +#endif + } + } + catch (const isc::NotImplemented&) { + // Well, they want DNSSEC, but there is no available. + // So we don't provide anything. + LOG_INFO(logger, DATASRC_DATABASE_COVER_NSEC_UNSUPPORTED). + arg(accessor_->getDBName()).arg(name); + } + // We didn't find it, return nothing + return (RRsetPtr()); +} + +ZoneFinder::FindResult +DatabaseClient::Finder::find(const isc::dns::Name& name, + const isc::dns::RRType& type, + isc::dns::RRsetList*, + const FindOptions options) +{ + // This variable is used to determine the difference between + // NXDOMAIN and NXRRSET + bool records_found = false; + bool glue_ok((options & FIND_GLUE_OK) != 0); + const bool dnssec_data((options & FIND_DNSSEC) != 0); + bool get_cover(false); + isc::dns::RRsetPtr result_rrset; + ZoneFinder::Result result_status = SUCCESS; + FoundRRsets found; + logger.debug(DBG_TRACE_DETAILED, DATASRC_DATABASE_FIND_RECORDS) + .arg(accessor_->getDBName()).arg(name).arg(type); + // In case we are in GLUE_OK mode and start matching wildcards, + // we can't do it under NS, so we store it here to check + isc::dns::RRsetPtr first_ns; + + // First, do we have any kind of delegation (NS/DNAME) here? + const Name origin(getOrigin()); + const size_t origin_label_count(origin.getLabelCount()); + // Number of labels in the last known non-empty domain + size_t last_known(origin_label_count); + const size_t current_label_count(name.getLabelCount()); + // This is how many labels we remove to get origin + size_t remove_labels(current_label_count - origin_label_count); + + // Now go trough all superdomains from origin down + for (int i(remove_labels); i > 0; --i) { + Name superdomain(name.split(i)); + // Look if there's NS or DNAME (but ignore the NS in origin) + found = getRRsets(superdomain.toText(), DELEGATION_TYPES(), + i != remove_labels); + if (found.first) { + // It contains some RRs, so it exists. + last_known = superdomain.getLabelCount(); + + const FoundIterator nsi(found.second.find(RRType::NS())); + const FoundIterator dni(found.second.find(RRType::DNAME())); + // In case we are in GLUE_OK mode, we want to store the + // highest encountered NS (but not apex) + if (glue_ok && !first_ns && i != remove_labels && + nsi != found.second.end()) { + first_ns = nsi->second; + } else if (!glue_ok && i != remove_labels && + nsi != found.second.end()) { + // Do a NS delegation, but ignore NS in glue_ok mode. Ignore + // delegation in apex + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_DELEGATION). 
+ arg(accessor_->getDBName()).arg(superdomain); + result_rrset = nsi->second; + result_status = DELEGATION; + // No need to go lower, found + break; + } else if (dni != found.second.end()) { + // Very similar with DNAME + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_DNAME). + arg(accessor_->getDBName()).arg(superdomain); + result_rrset = dni->second; + result_status = DNAME; + if (result_rrset->getRdataCount() != 1) { + isc_throw(DataSourceError, "DNAME at " << superdomain << + " has " << result_rrset->getRdataCount() << + " rdata, 1 expected"); + } + break; + } + } + } + + if (!result_rrset) { // Only if we didn't find a redirect already + // Try getting the final result and extract it + // It is special if there's a CNAME or NS, DNAME is ignored here + // And we don't consider the NS in origin + + WantedTypes final_types(FINAL_TYPES()); + final_types.insert(type); + found = getRRsets(name.toText(), final_types, name != origin); + records_found = found.first; + + // NS records, CNAME record and Wanted Type records + const FoundIterator nsi(found.second.find(RRType::NS())); + const FoundIterator cni(found.second.find(RRType::CNAME())); + const FoundIterator wti(found.second.find(type)); + if (name != origin && !glue_ok && nsi != found.second.end()) { + // There's a delegation at the exact node. + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_DELEGATION_EXACT). + arg(accessor_->getDBName()).arg(name); + result_status = DELEGATION; + result_rrset = nsi->second; + } else if (type != isc::dns::RRType::CNAME() && + cni != found.second.end()) { + // A CNAME here + result_status = CNAME; + result_rrset = cni->second; + if (result_rrset->getRdataCount() != 1) { + isc_throw(DataSourceError, "CNAME with " << + result_rrset->getRdataCount() << + " rdata at " << name << ", expected 1"); + } + } else if (wti != found.second.end()) { + // Just get the answer + result_rrset = wti->second; + } else if (!records_found) { + // Nothing lives here. + // But check if something lives below this + // domain and if so, pretend something is here as well. + if (hasSubdomains(name.toText())) { + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_EMPTY_NONTERMINAL). + arg(accessor_->getDBName()).arg(name); + records_found = true; + get_cover = dnssec_data; + } else { + // It's not empty non-terminal. So check for wildcards. + // We remove labels one by one and look for the wildcard there. + // Go up to first non-empty domain. + + remove_labels = current_label_count - last_known; + for (size_t i(1); i <= remove_labels; ++ i) { + // Construct the name with * + const Name superdomain(name.split(i)); + const string wildcard("*." + superdomain.toText()); + const string construct_name(name.toText()); + // TODO What do we do about DNAME here? + // The types are the same as with original query + found = getRRsets(wildcard, final_types, true, + &construct_name); + if (found.first) { + if (first_ns) { + // In case we are under NS, we don't + // wildcard-match, but return delegation + result_rrset = first_ns; + result_status = DELEGATION; + records_found = true; + // We pretend to switch to non-glue_ok mode + glue_ok = false; + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_WILDCARD_CANCEL_NS). + arg(accessor_->getDBName()).arg(wildcard). + arg(first_ns->getName()); + } else if (!hasSubdomains(name.split(i - 1).toText())) + { + // Nothing we added as part of the * can exist + // directly, as we go up only to first existing + // domain, but it could be empty non-terminal. 
In + // that case, we need to cancel the match. + records_found = true; + const FoundIterator + cni(found.second.find(RRType::CNAME())); + const FoundIterator + nsi(found.second.find(RRType::NS())); + const FoundIterator + nci(found.second.find(RRType::NSEC())); + const FoundIterator wti(found.second.find(type)); + if (cni != found.second.end() && + type != RRType::CNAME()) { + result_rrset = cni->second; + result_status = CNAME; + } else if (nsi != found.second.end()) { + result_rrset = nsi->second; + result_status = DELEGATION; + } else if (wti != found.second.end()) { + result_rrset = wti->second; + result_status = WILDCARD; + } else { + // NXRRSET case in the wildcard + result_status = WILDCARD_NXRRSET; + if (dnssec_data && + nci != found.second.end()) { + // User wants a proof the wildcard doesn't + // contain it + // + // However, we need to get the RRset in the + // name of the wildcard, not the constructed + // one, so we walk it again + found = getRRsets(wildcard, NSEC_TYPES(), + true); + result_rrset = + found.second.find(RRType::NSEC())-> + second; + } + } + + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_WILDCARD). + arg(accessor_->getDBName()).arg(wildcard). + arg(name); + } else { + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_WILDCARD_CANCEL_SUB). + arg(accessor_->getDBName()).arg(wildcard). + arg(name).arg(superdomain); + } + break; + } else if (hasSubdomains(wildcard)) { + // Empty non-terminal asterisk + records_found = true; + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_WILDCARD_EMPTY). + arg(accessor_->getDBName()).arg(wildcard). + arg(name); + if (dnssec_data) { + result_rrset = findNSECCover(Name(wildcard)); + if (result_rrset) { + result_status = WILDCARD_NXRRSET; + } + } + break; + } + } + // This is the NXDOMAIN case (nothing found anywhere). If + // they want DNSSEC data, try getting the NSEC record + if (dnssec_data && !records_found) { + get_cover = true; + } + } + } else if (dnssec_data) { + // This is the "usual" NXRRSET case + // So in case they want DNSSEC, provide the NSEC + // (which should be available already here) + result_status = NXRRSET; + const FoundIterator nci(found.second.find(RRType::NSEC())); + if (nci != found.second.end()) { + result_rrset = nci->second; + } + } + } + + if (!result_rrset) { + if (result_status == SUCCESS) { + // Should we look for NSEC covering the name? 
+ if (get_cover) { + result_rrset = findNSECCover(name); + if (result_rrset) { + result_status = NXDOMAIN; + } + } + // Something is not here and we didn't decide yet what + if (records_found) { + logger.debug(DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_NXRRSET) + .arg(accessor_->getDBName()).arg(name) + .arg(getClass()).arg(type); + result_status = NXRRSET; + } else { + logger.debug(DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_NXDOMAIN) + .arg(accessor_->getDBName()).arg(name) + .arg(getClass()).arg(type); + result_status = NXDOMAIN; + } + } + } else { + logger.debug(DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_RRSET) + .arg(accessor_->getDBName()).arg(*result_rrset); + } + return (FindResult(result_status, result_rrset)); +} + +Name +DatabaseClient::Finder::findPreviousName(const Name& name) const { + const string str(accessor_->findPreviousName(zone_id_, + name.reverse().toText())); + try { + return (Name(str)); + } + /* + * To avoid having the same code many times, we just catch all the + * exceptions and handle them in a common code below + */ + catch (const isc::dns::EmptyLabel&) {} + catch (const isc::dns::TooLongLabel&) {} + catch (const isc::dns::BadLabelType&) {} + catch (const isc::dns::BadEscape&) {} + catch (const isc::dns::TooLongName&) {} + catch (const isc::dns::IncompleteName&) {} + isc_throw(DataSourceError, "Bad name " + str + " from findPreviousName"); +} + +Name +DatabaseClient::Finder::getOrigin() const { + return (origin_); +} + +isc::dns::RRClass +DatabaseClient::Finder::getClass() const { + // TODO Implement + return isc::dns::RRClass::IN(); +} + +namespace { + +/* + * This needs, beside of converting all data from textual representation, group + * together rdata of the same RRsets. To do this, we hold one row of data ahead + * of iteration. When we get a request to provide data, we create it from this + * data and load a new one. If it is to be put to the same rrset, we add it. + * Otherwise we just return what we have and keep the row as the one ahead + * for next time. + */ +class DatabaseIterator : public ZoneIterator { +public: + DatabaseIterator(const DatabaseAccessor::IteratorContextPtr& context, + const RRClass& rrclass) : + context_(context), + class_(rrclass), + ready_(true) + { + // Prepare data for the next time + getData(); + } + + virtual isc::dns::ConstRRsetPtr getNextRRset() { + if (!ready_) { + isc_throw(isc::Unexpected, "Iterating past the zone end"); + } + if (!data_ready_) { + // At the end of zone + ready_ = false; + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_ITERATE_END); + return (ConstRRsetPtr()); + } + string name_str(name_), rtype_str(rtype_), ttl(ttl_); + Name name(name_str); + RRType rtype(rtype_str); + RRsetPtr rrset(new RRset(name, class_, rtype, RRTTL(ttl))); + while (data_ready_ && name_ == name_str && rtype_str == rtype_) { + if (ttl_ != ttl) { + if (ttl < ttl_) { + ttl_ = ttl; + rrset->setTTL(RRTTL(ttl)); + } + LOG_WARN(logger, DATASRC_DATABASE_ITERATE_TTL_MISMATCH). + arg(name_).arg(class_).arg(rtype_).arg(rrset->getTTL()); + } + rrset->addRdata(rdata::createRdata(rtype, class_, rdata_)); + getData(); + } + LOG_DEBUG(logger, DBG_TRACE_DETAILED, DATASRC_DATABASE_ITERATE_NEXT). 
+ arg(rrset->getName()).arg(rrset->getType()); + return (rrset); + } +private: + // Load next row of data + void getData() { + string data[DatabaseAccessor::COLUMN_COUNT]; + data_ready_ = context_->getNext(data); + name_ = data[DatabaseAccessor::NAME_COLUMN]; + rtype_ = data[DatabaseAccessor::TYPE_COLUMN]; + ttl_ = data[DatabaseAccessor::TTL_COLUMN]; + rdata_ = data[DatabaseAccessor::RDATA_COLUMN]; + } + + // The context + const DatabaseAccessor::IteratorContextPtr context_; + // Class of the zone + RRClass class_; + // Status + bool ready_, data_ready_; + // Data of the next row + string name_, rtype_, rdata_, ttl_; +}; + +} + +ZoneIteratorPtr +DatabaseClient::getIterator(const isc::dns::Name& name) const { + // Get the zone + std::pair zone(accessor_->getZone(name.toText())); + if (!zone.first) { + // No such zone, can't continue + isc_throw(DataSourceError, "Zone " + name.toText() + + " can not be iterated, because it doesn't exist " + "in this data source"); + } + // Request the context + DatabaseAccessor::IteratorContextPtr + context(accessor_->getAllRecords(zone.second)); + // It must not return NULL, that's a bug of the implementation + if (context == DatabaseAccessor::IteratorContextPtr()) { + isc_throw(isc::Unexpected, "Iterator context null at " + + name.toText()); + } + // Create the iterator and return it + // TODO: Once #1062 is merged with this, we need to get the + // actual zone class from the connection, as the DatabaseClient + // doesn't know it and the iterator needs it (so it wouldn't query + // it each time) + LOG_DEBUG(logger, DBG_TRACE_DETAILED, DATASRC_DATABASE_ITERATE). + arg(name); + return (ZoneIteratorPtr(new DatabaseIterator(context, RRClass::IN()))); +} + +// +// Zone updater using some database system as the underlying data source. +// +class DatabaseUpdater : public ZoneUpdater { +public: + DatabaseUpdater(shared_ptr accessor, int zone_id, + const Name& zone_name, const RRClass& zone_class) : + committed_(false), accessor_(accessor), zone_id_(zone_id), + db_name_(accessor->getDBName()), zone_name_(zone_name.toText()), + zone_class_(zone_class), + finder_(new DatabaseClient::Finder(accessor_, zone_id_, zone_name)) + { + logger.debug(DBG_TRACE_DATA, DATASRC_DATABASE_UPDATER_CREATED) + .arg(zone_name_).arg(zone_class_).arg(db_name_); + } + + virtual ~DatabaseUpdater() { + if (!committed_) { + try { + accessor_->rollbackUpdateZone(); + logger.info(DATASRC_DATABASE_UPDATER_ROLLBACK) + .arg(zone_name_).arg(zone_class_).arg(db_name_); + } catch (const DataSourceError& e) { + // We generally expect that rollback always succeeds, and + // it should in fact succeed in a way we execute it. But + // as the public API allows rollbackUpdateZone() to fail and + // throw, we should expect it. Obviously we cannot re-throw + // it. The best we can do is to log it as a critical error. 
+ logger.error(DATASRC_DATABASE_UPDATER_ROLLBACKFAIL) + .arg(zone_name_).arg(zone_class_).arg(db_name_) + .arg(e.what()); + } + } + + logger.debug(DBG_TRACE_DATA, DATASRC_DATABASE_UPDATER_DESTROYED) + .arg(zone_name_).arg(zone_class_).arg(db_name_); + } + + virtual ZoneFinder& getFinder() { return (*finder_); } + + virtual void addRRset(const RRset& rrset); + virtual void deleteRRset(const RRset& rrset); + virtual void commit(); + +private: + bool committed_; + shared_ptr accessor_; + const int zone_id_; + const string db_name_; + const string zone_name_; + const RRClass zone_class_; + boost::scoped_ptr finder_; +}; + +void +DatabaseUpdater::addRRset(const RRset& rrset) { + if (committed_) { + isc_throw(DataSourceError, "Add attempt after commit to zone: " + << zone_name_ << "/" << zone_class_); + } + if (rrset.getClass() != zone_class_) { + isc_throw(DataSourceError, "An RRset of a different class is being " + << "added to " << zone_name_ << "/" << zone_class_ << ": " + << rrset.toText()); + } + if (rrset.getRRsig()) { + isc_throw(DataSourceError, "An RRset with RRSIG is being added to " + << zone_name_ << "/" << zone_class_ << ": " + << rrset.toText()); + } + + RdataIteratorPtr it = rrset.getRdataIterator(); + if (it->isLast()) { + isc_throw(DataSourceError, "An empty RRset is being added for " + << rrset.getName() << "/" << zone_class_ << "/" + << rrset.getType()); + } + + string columns[DatabaseAccessor::ADD_COLUMN_COUNT]; // initialized with "" + columns[DatabaseAccessor::ADD_NAME] = rrset.getName().toText(); + columns[DatabaseAccessor::ADD_REV_NAME] = + rrset.getName().reverse().toText(); + columns[DatabaseAccessor::ADD_TTL] = rrset.getTTL().toText(); + columns[DatabaseAccessor::ADD_TYPE] = rrset.getType().toText(); + for (; !it->isLast(); it->next()) { + if (rrset.getType() == RRType::RRSIG()) { + // XXX: the current interface (based on the current sqlite3 + // data source schema) requires a separate "sigtype" column, + // even though it won't be used in a newer implementation. + // We should eventually clean up the schema design and simplify + // the interface, but until then we have to conform to the schema. 
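+            // Extract the type covered by this RRSIG from its RDATA and
+            // store it in the sigtype column that the current schema still
+            // requires.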
+ const generic::RRSIG& rrsig_rdata = + dynamic_cast(it->getCurrent()); + columns[DatabaseAccessor::ADD_SIGTYPE] = + rrsig_rdata.typeCovered().toText(); + } + columns[DatabaseAccessor::ADD_RDATA] = it->getCurrent().toText(); + accessor_->addRecordToZone(columns); + } +} + +void +DatabaseUpdater::deleteRRset(const RRset& rrset) { + if (committed_) { + isc_throw(DataSourceError, "Delete attempt after commit on zone: " + << zone_name_ << "/" << zone_class_); + } + if (rrset.getClass() != zone_class_) { + isc_throw(DataSourceError, "An RRset of a different class is being " + << "deleted from " << zone_name_ << "/" << zone_class_ + << ": " << rrset.toText()); + } + if (rrset.getRRsig()) { + isc_throw(DataSourceError, "An RRset with RRSIG is being deleted from " + << zone_name_ << "/" << zone_class_ << ": " + << rrset.toText()); + } + + RdataIteratorPtr it = rrset.getRdataIterator(); + if (it->isLast()) { + isc_throw(DataSourceError, "An empty RRset is being deleted for " + << rrset.getName() << "/" << zone_class_ << "/" + << rrset.getType()); + } + + string params[DatabaseAccessor::DEL_PARAM_COUNT]; // initialized with "" + params[DatabaseAccessor::DEL_NAME] = rrset.getName().toText(); + params[DatabaseAccessor::DEL_TYPE] = rrset.getType().toText(); + for (; !it->isLast(); it->next()) { + params[DatabaseAccessor::DEL_RDATA] = it->getCurrent().toText(); + accessor_->deleteRecordInZone(params); + } +} + +void +DatabaseUpdater::commit() { + if (committed_) { + isc_throw(DataSourceError, "Duplicate commit attempt for " + << zone_name_ << "/" << zone_class_ << " on " + << db_name_); + } + accessor_->commitUpdateZone(); + committed_ = true; // make sure the destructor won't trigger rollback + + // We release the accessor immediately after commit is completed so that + // we don't hold the possible internal resource any longer. + accessor_.reset(); + + logger.debug(DBG_TRACE_DATA, DATASRC_DATABASE_UPDATER_COMMIT) + .arg(zone_name_).arg(zone_class_).arg(db_name_); +} + +// The updater factory +ZoneUpdaterPtr +DatabaseClient::getUpdater(const isc::dns::Name& name, bool replace) const { + shared_ptr update_accessor(accessor_->clone()); + const std::pair zone(update_accessor->startUpdateZone( + name.toText(), replace)); + if (!zone.first) { + return (ZoneUpdaterPtr()); + } + + return (ZoneUpdaterPtr(new DatabaseUpdater(update_accessor, zone.second, + name, rrclass_))); +} +} +} diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h new file mode 100644 index 0000000000..8295779a2c --- /dev/null +++ b/src/lib/datasrc/database.h @@ -0,0 +1,770 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
+ +#ifndef __DATABASE_DATASRC_H +#define __DATABASE_DATASRC_H + +#include + +#include + +#include +#include +#include + +#include + +#include +#include + +#include +#include + +namespace isc { +namespace datasrc { + +/** + * \brief Abstraction of lowlevel database with DNS data + * + * This class is defines interface to databases. Each supported database + * will provide methods for accessing the data stored there in a generic + * manner. The methods are meant to be low-level, without much or any knowledge + * about DNS and should be possible to translate directly to queries. + * + * On the other hand, how the communication with database is done and in what + * schema (in case of relational/SQL database) is up to the concrete classes. + * + * This class is non-copyable, as copying connections to database makes little + * sense and will not be needed. + * + * \todo Is it true this does not need to be copied? For example the zone + * iterator might need it's own copy. But a virtual clone() method might + * be better for that than copy constructor. + * + * \note The same application may create multiple connections to the same + * database, having multiple instances of this class. If the database + * allows having multiple open queries at one connection, the connection + * class may share it. + */ +class DatabaseAccessor : boost::noncopyable { +public: + /** + * Definitions of the fields as they are required to be filled in + * by IteratorContext::getNext() + * + * When implementing getNext(), the columns array should + * be filled with the values as described in this enumeration, + * in this order, i.e. TYPE_COLUMN should be the first element + * (index 0) of the array, TTL_COLUMN should be the second element + * (index 1), etc. + */ + enum RecordColumns { + TYPE_COLUMN = 0, ///< The RRType of the record (A/NS/TXT etc.) + TTL_COLUMN = 1, ///< The TTL of the record (a + SIGTYPE_COLUMN = 2, ///< For RRSIG records, this contains the RRTYPE + ///< the RRSIG covers. In the current implementation, + ///< this field is ignored. + RDATA_COLUMN = 3, ///< Full text representation of the record's RDATA + NAME_COLUMN = 4, ///< The domain name of this RR + COLUMN_COUNT = 5 ///< The total number of columns, MUST be value of + ///< the largest other element in this enum plus 1. + }; + + /** + * Definitions of the fields to be passed to addRecordToZone(). + * + * Each derived implementation of addRecordToZone() should expect + * the "columns" vector to be filled with the values as described in this + * enumeration, in this order. + */ + enum AddRecordColumns { + ADD_NAME = 0, ///< The owner name of the record (a domain name) + ADD_REV_NAME = 1, ///< Reversed name of NAME (used for DNSSEC) + ADD_TTL = 2, ///< The TTL of the record (in numeric form) + ADD_TYPE = 3, ///< The RRType of the record (A/NS/TXT etc.) + ADD_SIGTYPE = 4, ///< For RRSIG records, this contains the RRTYPE + ///< the RRSIG covers. + ADD_RDATA = 5, ///< Full text representation of the record's RDATA + ADD_COLUMN_COUNT = 6 ///< Number of columns + }; + + /** + * Definitions of the fields to be passed to deleteRecordInZone(). + * + * Each derived implementation of deleteRecordInZone() should expect + * the "params" vector to be filled with the values as described in this + * enumeration, in this order. + */ + enum DeleteRecordParams { + DEL_NAME = 0, ///< The owner name of the record (a domain name) + DEL_TYPE = 1, ///< The RRType of the record (A/NS/TXT etc.) 
+ DEL_RDATA = 2, ///< Full text representation of the record's RDATA + DEL_PARAM_COUNT = 3 ///< Number of parameters + }; + + /** + * \brief Destructor + * + * It is empty, but needs a virtual one, since we will use the derived + * classes in polymorphic way. + */ + virtual ~DatabaseAccessor() { } + + /** + * \brief Retrieve a zone identifier + * + * This method looks up a zone for the given name in the database. It + * should match only exact zone name (eg. name is equal to the zone's + * apex), as the DatabaseClient will loop trough the labels itself and + * find the most suitable zone. + * + * It is not specified if and what implementation of this method may throw, + * so code should expect anything. + * + * \param name The (fully qualified) domain name of the zone's apex to be + * looked up. + * \return The first part of the result indicates if a matching zone + * was found. In case it was, the second part is internal zone ID. + * This one will be passed to methods finding data in the zone. + * It is not required to keep them, in which case whatever might + * be returned - the ID is only passed back to the database as + * an opaque handle. + */ + virtual std::pair getZone(const std::string& name) const = 0; + + /** + * \brief This holds the internal context of ZoneIterator for databases + * + * While the ZoneIterator implementation from DatabaseClient does all the + * translation from strings to DNS classes and validation, this class + * holds the pointer to where the database is at reading the data. + * + * It can either hold shared pointer to the connection which created it + * and have some kind of statement inside (in case single database + * connection can handle multiple concurrent SQL statements) or it can + * create a new connection (or, if it is more convenient, the connection + * itself can inherit both from DatabaseConnection and IteratorContext + * and just clone itself). + */ + class IteratorContext : public boost::noncopyable { + public: + /** + * \brief Destructor + * + * Virtual destructor, so any descendand class is destroyed correctly. + */ + virtual ~IteratorContext() { } + + /** + * \brief Function to provide next resource record + * + * This function should provide data about the next resource record + * from the data that is searched. The data is not converted yet. + * + * Depending on how the iterator was constructed, there is a difference + * in behaviour; for a 'full zone iterator', created with + * getAllRecords(), all COLUMN_COUNT elements of the array are + * overwritten. + * For a 'name iterator', created with getRecords(), the column + * NAME_COLUMN is untouched, since what would be added here is by + * definition already known to the caller (it already passes it as + * an argument to getRecords()). + * + * Once this function returns false, any subsequent call to it should + * result in false. The implementation of a derived class must ensure + * it doesn't cause any disruption due to that such as a crash or + * exception. + * + * \note The order of RRs is not strictly set, but the RRs for single + * RRset must not be interleaved with any other RRs (eg. RRsets must be + * "together"). + * + * \param columns The data will be returned through here. The order + * is specified by the RecordColumns enum, and the size must be + * COLUMN_COUNT + * \todo Do we consider databases where it is stored in binary blob + * format? + * \throw DataSourceError if there's database-related error. 
If the + * exception (or any other in case of derived class) is thrown, + * the iterator can't be safely used any more. + * \return true if a record was found, and the columns array was + * updated. false if there was no more data, in which case + * the columns array is untouched. + */ + virtual bool getNext(std::string (&columns)[COLUMN_COUNT]) = 0; + }; + + typedef boost::shared_ptr IteratorContextPtr; + + /** + * \brief Creates an iterator context for a specific name. + * + * Returns an IteratorContextPtr that contains all records of the + * given name from the given zone. + * + * The implementation of the iterator that is returned may leave the + * NAME_COLUMN column of the array passed to getNext() untouched, as that + * data is already known (it is the same as the name argument here) + * + * \exception any Since any implementation can be used, the caller should + * expect any exception to be thrown. + * + * \param name The name to search for. This should be a FQDN. + * \param id The ID of the zone, returned from getZone(). + * \param subdomains If set to true, match subdomains of name instead + * of name itself. It is used to find empty domains and match + * wildcards. + * \return Newly created iterator context. Must not be NULL. + */ + virtual IteratorContextPtr getRecords(const std::string& name, + int id, + bool subdomains = false) const = 0; + + /** + * \brief Creates an iterator context for the whole zone. + * + * Returns an IteratorContextPtr that contains all records of the + * zone with the given zone id. + * + * Each call to getNext() on the returned iterator should copy all + * column fields of the array that is passed, as defined in the + * RecordColumns enum. + * + * \exception any Since any implementation can be used, the caller should + * expect any exception to be thrown. + * + * \param id The ID of the zone, returned from getZone(). + * \return Newly created iterator context. Must not be NULL. + */ + virtual IteratorContextPtr getAllRecords(int id) const = 0; + + /// Start a transaction for updating a zone. + /// + /// Each derived class version of this method starts a database + /// transaction to make updates to the given name of zone (whose class was + /// specified at the construction of the class). + /// + /// If \c replace is true, any existing records of the zone will be + /// deleted on successful completion of updates (after + /// \c commitUpdateZone()); if it's false, the existing records will be + /// intact unless explicitly deleted by \c deleteRecordInZone(). + /// + /// A single \c DatabaseAccessor instance can perform at most one update + /// transaction; a duplicate call to this method before + /// \c commitUpdateZone() or \c rollbackUpdateZone() will result in + /// a \c DataSourceError exception. If multiple update attempts need + /// to be performed concurrently (and if the underlying database allows + /// such operation), separate \c DatabaseAccessor instance must be + /// created. + /// + /// \note The underlying database may not allow concurrent updates to + /// the same database instance even if different "connections" (or + /// something similar specific to the database implementation) are used + /// for different sets of updates. For example, it doesn't seem to be + /// possible for SQLite3 unless different databases are used. MySQL + /// allows concurrent updates to different tables of the same database, + /// but a specific operation may block others. 
As such, this interface + /// doesn't require derived classes to allow concurrent updates with + /// multiple \c DatabaseAccessor instances; however, the implementation + /// is encouraged to do the best for making it more likely to succeed + /// as long as the underlying database system allows concurrent updates. + /// + /// This method returns a pair of \c bool and \c int. Its first element + /// indicates whether the given name of zone is found. If it's false, + /// the transaction isn't considered to be started; a subsequent call to + /// this method with an existing zone name should succeed. Likewise, + /// if a call to this method results in an exception, the transaction + /// isn't considered to be started. Note also that if the zone is not + /// found this method doesn't try to create a new one in the database. + /// It must have been created by some other means beforehand. + /// + /// The second element is the internal zone ID used for subsequent + /// updates. Depending on implementation details of the actual derived + /// class method, it may be different from the one returned by + /// \c getZone(); for example, a specific implementation may use a + /// completely new zone ID when \c replace is true. + /// + /// \exception DataSourceError Duplicate call to this method, or some + /// internal database related error. + /// + /// \param zone_name A string representation of the zone name to be updated + /// \param replace Whether to replace the entire zone (see above) + /// + /// \return A pair of bool and int, indicating whether the specified zone + /// exists and (if so) the zone ID to be used for the update, respectively. + virtual std::pair startUpdateZone(const std::string& zone_name, + bool replace) = 0; + + /// Add a single record to the zone to be updated. + /// + /// This method provides a simple interface to insert a new record + /// (a database "row") to the zone in the update context started by + /// \c startUpdateZone(). The zone to which the record to be added + /// is the one specified at the time of the \c startUpdateZone() call. + /// + /// A successful call to \c startUpdateZone() must have preceded to + /// this call; otherwise a \c DataSourceError exception will be thrown. + /// + /// The row is defined as a vector of strings that has exactly + /// ADD_COLUMN_COUNT number of elements. See AddRecordColumns for + /// the semantics of each element. + /// + /// Derived class methods are not required to check whether the given + /// values in \c columns are valid in terms of the expected semantics; + /// in general, it's the caller's responsibility. + /// For example, TTLs would normally be expected to be a textual + /// representation of decimal numbers, but this interface doesn't require + /// the implementation to perform this level of validation. It may check + /// the values, however, and in that case if it detects an error it + /// should throw a \c DataSourceError exception. + /// + /// Likewise, derived class methods are not required to detect any + /// duplicate record that is already in the zone. + /// + /// \note The underlying database schema may not have a trivial mapping + /// from this style of definition of rows to actual database records. + /// It's the implementation's responsibility to implement the mapping + /// in the actual derived method. + /// + /// \exception DataSourceError Invalid call without starting a transaction, + /// or other internal database error. 
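+    ///
+    /// For illustration only, a caller adding a single A record might fill
+    /// the array as follows (the values are examples; ADD_SIGTYPE is left
+    /// at its default empty string for non-RRSIG records):
+    /// \code
+    ///     std::string columns[DatabaseAccessor::ADD_COLUMN_COUNT];
+    ///     columns[DatabaseAccessor::ADD_NAME] = "www.example.org.";
+    ///     columns[DatabaseAccessor::ADD_REV_NAME] = "org.example.www.";
+    ///     columns[DatabaseAccessor::ADD_TTL] = "3600";
+    ///     columns[DatabaseAccessor::ADD_TYPE] = "A";
+    ///     columns[DatabaseAccessor::ADD_RDATA] = "192.0.2.1";
+    ///     accessor->addRecordToZone(columns);
+    /// \endcode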
+ /// + /// \param columns An array of strings that defines a record to be added + /// to the zone. + virtual void addRecordToZone( + const std::string (&columns)[ADD_COLUMN_COUNT]) = 0; + + /// Delete a single record from the zone to be updated. + /// + /// This method provides a simple interface to delete a record + /// (a database "row") from the zone in the update context started by + /// \c startUpdateZone(). The zone from which the record to be deleted + /// is the one specified at the time of the \c startUpdateZone() call. + /// + /// A successful call to \c startUpdateZone() must have preceded to + /// this call; otherwise a \c DataSourceError exception will be thrown. + /// + /// The record to be deleted is specified by a vector of strings that has + /// exactly DEL_PARAM_COUNT number of elements. See DeleteRecordParams + /// for the semantics of each element. + /// + /// \note In IXFR, TTL may also be specified, but we intentionally + /// ignore that in this interface, because it's not guaranteed + /// that all records have the same TTL (unlike the RRset + /// assumption) and there can even be multiple records for the + /// same name, type and rdata with different TTLs. If we only + /// delete one of them, subsequent lookup will still return a + /// positive answer, which would be confusing. It's a higher + /// layer's responsibility to check if there is at least one + /// record in the database that has the given TTL. + /// + /// Like \c addRecordToZone, derived class methods are not required to + /// validate the semantics of the given parameters or to check if there + /// is a record that matches the specified parameter; if there isn't + /// it simply ignores the result. + /// + /// \exception DataSourceError Invalid call without starting a transaction, + /// or other internal database error. + /// + /// \param params An array of strings that defines a record to be deleted + /// from the zone. + virtual void deleteRecordInZone( + const std::string (¶ms)[DEL_PARAM_COUNT]) = 0; + + /// Commit updates to the zone. + /// + /// This method completes a transaction of making updates to the zone + /// in the context started by startUpdateZone. + /// + /// A successful call to \c startUpdateZone() must have preceded to + /// this call; otherwise a \c DataSourceError exception will be thrown. + /// Once this method successfully completes, the transaction isn't + /// considered to exist any more. So a new transaction can now be + /// started. On the other hand, a duplicate call to this method after + /// a successful completion of it is invalid and should result in + /// a \c DataSourceError exception. + /// + /// If some internal database error happens, a \c DataSourceError + /// exception must be thrown. In that case the transaction is still + /// considered to be valid; the caller must explicitly rollback it + /// or (if it's confident that the error is temporary) try to commit it + /// again. + /// + /// \exception DataSourceError Call without a transaction, duplicate call + /// to the method or internal database error. + virtual void commitUpdateZone() = 0; + + /// Rollback updates to the zone made so far. + /// + /// This method rollbacks a transaction of making updates to the zone + /// in the context started by startUpdateZone. When it succeeds + /// (it normally should, but see below), the underlying database should + /// be reverted to the point before performing the corresponding + /// \c startUpdateZone(). 
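+    ///
+    /// For illustration only, an accessor-level update sequence including
+    /// the rollback on failure might look like this (the 'columns' array
+    /// is assumed to be filled as described for \c addRecordToZone()):
+    /// \code
+    ///     std::pair<bool, int> zone =
+    ///         accessor->startUpdateZone("example.org.", true);
+    ///     if (zone.first) {
+    ///         try {
+    ///             accessor->addRecordToZone(columns); // possibly many times
+    ///             accessor->commitUpdateZone();
+    ///         } catch (const DataSourceError&) {
+    ///             accessor->rollbackUpdateZone();
+    ///         }
+    ///     }
+    /// \endcode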
+ /// + /// A successful call to \c startUpdateZone() must have preceded to + /// this call; otherwise a \c DataSourceError exception will be thrown. + /// Once this method successfully completes, the transaction isn't + /// considered to exist any more. So a new transaction can now be + /// started. On the other hand, a duplicate call to this method after + /// a successful completion of it is invalid and should result in + /// a \c DataSourceError exception. + /// + /// Normally this method should not fail. But it may not always be + /// possible to guarantee it depending on the characteristics of the + /// underlying database system. So this interface doesn't require the + /// actual implementation for the error free property. But if a specific + /// implementation of this method can fail, it is encouraged to document + /// when that can happen with its implication. + /// + /// \exception DataSourceError Call without a transaction, duplicate call + /// to the method or internal database error. + virtual void rollbackUpdateZone() = 0; + + /// Clone the accessor with the same configuration. + /// + /// Each derived class implementation of this method will create a new + /// accessor of the same derived class with the same configuration + /// (such as the database server address) as that of the caller object + /// and return it. + /// + /// Note that other internal states won't be copied to the new accessor + /// even though the name of "clone" may indicate so. For example, even + /// if the calling accessor is in the middle of a update transaction, + /// the new accessor will not start a transaction to trace the same + /// updates. + /// + /// The intended use case of cloning is to create a separate context + /// where a specific set of database operations can be performed + /// independently from the original accessor. The updater will use it + /// so that multiple updaters can be created concurrently even if the + /// underlying database system doesn't allow running multiple transactions + /// in a single database connection. + /// + /// The underlying database system may not support the functionality + /// that would be needed to implement this method. For example, it + /// may not allow a single thread (or process) to have more than one + /// database connections. In such a case the derived class implementation + /// should throw a \c DataSourceError exception. + /// + /// \return A shared pointer to the cloned accessor. + virtual boost::shared_ptr clone() = 0; + + /** + * \brief Returns a string identifying this dabase backend + * + * The returned string is mainly intended to be used for + * debugging/logging purposes. + * + * Any implementation is free to choose the exact string content, + * but it is advisable to make it a name that is distinguishable + * from the others. + * + * \return the name of the database + */ + virtual const std::string& getDBName() const = 0; + + /** + * \brief It returns the previous name in DNSSEC order. + * + * This is used in DatabaseClient::findPreviousName and does more + * or less the real work, except for working on strings. + * + * \param rname The name to ask for previous of, in reversed form. + * We use the reversed form (see isc::dns::Name::reverse), + * because then the case insensitive order of string representation + * and the DNSSEC order correspond (eg. org.example.a is followed + * by org.example.a.b which is followed by org.example.b, etc). + * \param zone_id The zone to look through. + * \return The previous name. 
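As a brief illustration of the cloning semantics (a sketch only; the zone name is arbitrary, and clone() is assumed to return a shared pointer to DatabaseAccessor as its description states): the clone shares only the configuration, so a transaction started through it does not involve the original accessor.

    // May throw DataSourceError if the backend cannot provide a second
    // connection/context for the same thread or process.
    boost::shared_ptr<DatabaseAccessor> update_accessor(accessor.clone());
    update_accessor->startUpdateZone("example.org.", false);
    // ... changes are staged through update_accessor only; the original
    // 'accessor' can keep serving lookups in the meantime ...
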
+ * \note This function must return previous name even in case + * the queried rname does not exist in the zone. + * \note This method must skip under-the-zone-cut data (glue data). + * This might be implemented by looking for NSEC records (as glue + * data don't have them) in the zone or in some other way. + * + * \throw DataSourceError if there's a problem with the database. + * \throw NotImplemented if this database doesn't support DNSSEC + * or there's no previous name for the queried one (the NSECs + * might be missing or the queried name is less or equal the + * apex of the zone). + */ + virtual std::string findPreviousName(int zone_id, + const std::string& rname) const = 0; +}; + +/** + * \brief Concrete data source client oriented at database backends. + * + * This class (together with corresponding versions of ZoneFinder, + * ZoneIterator, etc.) translates high-level data source queries to + * low-level calls on DatabaseAccessor. It calls multiple queries + * if necessary and validates data from the database, allowing the + * DatabaseAccessor to be just simple translation to SQL/other + * queries to database. + * + * While it is possible to subclass it for specific database in case + * of special needs, it is not expected to be needed. This should just + * work as it is with whatever DatabaseAccessor. + */ +class DatabaseClient : public DataSourceClient { +public: + /** + * \brief Constructor + * + * It initializes the client with a database via the given accessor. + * + * \exception isc::InvalidParameter if accessor is NULL. It might throw + * standard allocation exception as well, but doesn't throw anything else. + * + * \param rrclass The RR class of the zones that this client will handle. + * \param accessor The accessor to the database to use to get data. + * As the parameter suggests, the client takes ownership of the accessor + * and will delete it when itself deleted. + */ + DatabaseClient(isc::dns::RRClass rrclass, + boost::shared_ptr accessor); + + /** + * \brief Corresponding ZoneFinder implementation + * + * The zone finder implementation for database data sources. Similarly + * to the DatabaseClient, it translates the queries to methods of the + * database. + * + * Application should not come directly in contact with this class + * (it should handle it trough generic ZoneFinder pointer), therefore + * it could be completely hidden in the .cc file. But it is provided + * to allow testing and for rare cases when a database needs slightly + * different handling, so it can be subclassed. + * + * Methods directly corresponds to the ones in ZoneFinder. + */ + class Finder : public ZoneFinder { + public: + /** + * \brief Constructor + * + * \param database The database (shared with DatabaseClient) to + * be used for queries (the one asked for ID before). + * \param zone_id The zone ID which was returned from + * DatabaseAccessor::getZone and which will be passed to further + * calls to the database. + * \param origin The name of the origin of this zone. It could query + * it from database, but as the DatabaseClient just searched for + * the zone using the name, it should have it. + */ + Finder(boost::shared_ptr database, int zone_id, + const isc::dns::Name& origin); + // The following three methods are just implementations of inherited + // ZoneFinder's pure virtual methods. 
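To make the reversed-name convention of DatabaseAccessor::findPreviousName() (described above) concrete, a small sketch; zone_id stands for a value previously obtained from getZone():

    // Names are passed and returned in reversed form (see
    // isc::dns::Name::reverse), e.g. "org.example.a" for a.example.org,
    // so that plain string ordering matches the zone's DNSSEC ordering.
    const std::string prev = accessor.findPreviousName(zone_id, "org.example.a");
    // 'prev' is the reversed form of the name immediately preceding
    // a.example.org in DNSSEC order, even if a.example.org itself is not
    // present in the zone; NotImplemented is thrown if the backend cannot
    // support this (e.g. it has no DNSSEC data).
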
+ virtual isc::dns::Name getOrigin() const; + virtual isc::dns::RRClass getClass() const; + + /** + * \brief Find an RRset in the datasource + * + * Searches the datasource for an RRset of the given name and + * type. If there is a CNAME at the given name, the CNAME rrset + * is returned. + * (this implementation is not complete, and currently only + * does full matches, CNAMES, and the signatures for matches and + * CNAMEs) + * \note target was used in the original design to handle ANY + * queries. This is not implemented yet, and may use + * target again for that, but it might also use something + * different. It is left in for compatibility at the moment. + * \note options are ignored at this moment + * + * \note Maybe counter intuitively, this method is not a const member + * function. This is intentional; some of the underlying implementations + * are expected to use a database backend, and would internally contain + * some abstraction of "database connection". In the most strict sense + * any (even read only) operation might change the internal state of + * such a connection, and in that sense the operation cannot be considered + * "const". In order to avoid giving a false sense of safety to the + * caller, we indicate a call to this method may have a surprising + * side effect. That said, this view may be too strict and it may + * make sense to say the internal database connection doesn't affect + * external behavior in terms of the interface of this method. As + * we gain more experiences with various kinds of backends we may + * revisit the constness. + * + * \exception DataSourceError when there is a problem reading + * the data from the dabase backend. + * This can be a connection, code, or + * data (parse) error. + * + * \param name The name to find + * \param type The RRType to find + * \param target Unused at this moment + * \param options Options about how to search. + * See ZoneFinder::FindOptions. + */ + virtual FindResult find(const isc::dns::Name& name, + const isc::dns::RRType& type, + isc::dns::RRsetList* target = NULL, + const FindOptions options = FIND_DEFAULT); + + /** + * \brief Implementation of ZoneFinder::findPreviousName method. + */ + virtual isc::dns::Name findPreviousName(const isc::dns::Name& query) + const; + + /** + * \brief The zone ID + * + * This function provides the stored zone ID as passed to the + * constructor. This is meant for testing purposes and normal + * applications shouldn't need it. + */ + int zone_id() const { return (zone_id_); } + + /** + * \brief The database accessor. + * + * This function provides the database accessor stored inside as + * passed to the constructor. This is meant for testing purposes and + * normal applications shouldn't need it. + */ + const DatabaseAccessor& getAccessor() const { + return (*accessor_); + } + private: + boost::shared_ptr accessor_; + const int zone_id_; + const isc::dns::Name origin_; + // + /// \brief Shortcut name for the result of getRRsets + typedef std::pair > + FoundRRsets; + /// \brief Just shortcut for set of types + typedef std::set WantedTypes; + /** + * \brief Searches database for RRsets of one domain. + * + * This method scans RRs of single domain specified by name and + * extracts any RRsets found and requested by parameters. + * + * It is used internally by find(), because it is called multiple + * times (usually with different domains). + * + * \param name Which domain name should be scanned. + * \param types List of types the caller is interested in. 
+ * \param check_ns If this is set to true, it checks nothing lives + * together with NS record (with few little exceptions, like RRSIG + * or NSEC). This check is meant for non-apex NS records. + * \param construct_name If this is NULL, the resulting RRsets have + * their name set to name. If it is not NULL, it overrides the name + * and uses this one (this can be used for wildcard synthesized + * records). + * \return A pair, where the first element indicates if the domain + * contains any RRs at all (not only the requested, it may happen + * this is set to true, but the second part is empty). The second + * part is map from RRtypes to RRsets of the corresponding types. + * If the RRset is not present in DB, the RRtype is not there at + * all (so you'll not find NULL pointer in the result). + * \throw DataSourceError If there's a low-level error with the + * database or the database contains bad data. + */ + FoundRRsets getRRsets(const std::string& name, + const WantedTypes& types, bool check_ns, + const std::string* construct_name = NULL); + /** + * \brief Checks if something lives below this domain. + * + * This looks if there's any subdomain of the given name. It can be + * used to test if domain is empty non-terminal. + * + * \param name The domain to check. + */ + bool hasSubdomains(const std::string& name); + + /** + * \brief Get the NSEC covering a name. + * + * This one calls findPreviousName on the given name and extracts an NSEC + * record on the result. It handles various error cases. The method exists + * to share code present at more than one location. + */ + dns::RRsetPtr findNSECCover(const dns::Name& name); + + /** + * \brief Convenience type shortcut. + * + * To find stuff in the result of getRRsets. + */ + typedef std::map::const_iterator + FoundIterator; + }; + + /** + * \brief Find a zone in the database + * + * This queries database's getZone to find the best matching zone. + * It will propagate whatever exceptions are thrown from that method + * (which is not restricted in any way). + * + * \param name Name of the zone or data contained there. + * \return FindResult containing the code and an instance of Finder, if + * anything is found. However, application should not rely on the + * ZoneFinder being instance of Finder (possible subclass of this class + * may return something else and it may change in future versions), it + * should use it as a ZoneFinder only. + */ + virtual FindResult findZone(const isc::dns::Name& name) const; + + /** + * \brief Get the zone iterator + * + * The iterator allows going through the whole zone content. If the + * underlying DatabaseConnection is implemented correctly, it should + * be possible to have multiple ZoneIterators at once and query data + * at the same time. + * + * \exception DataSourceError if the zone doesn't exist. + * \exception isc::NotImplemented if the underlying DatabaseConnection + * doesn't implement iteration. But in case it is not implemented + * and the zone doesn't exist, DataSourceError is thrown. + * \exception Anything else the underlying DatabaseConnection might + * want to throw. + * \param name The origin of the zone to iterate. + * \return Shared pointer to the iterator (it will never be NULL) + */ + virtual ZoneIteratorPtr getIterator(const isc::dns::Name& name) const; + + /// This implementation internally clones the accessor from the one + /// used in the client and starts a separate transaction using the cloned + /// accessor. 
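A short end-to-end sketch of using this client (illustrative only; SomeAccessor stands for an arbitrary concrete DatabaseAccessor implementation, the names are arbitrary, and FindResult is the result type inherited from the generic DataSourceClient interface):

    boost::shared_ptr<DatabaseAccessor> accessor(new SomeAccessor(/* ... */));
    DatabaseClient client(isc::dns::RRClass::IN(), accessor);

    // Find the best matching zone for a name; on success the result carries
    // a ZoneFinder, which should be used via the generic ZoneFinder interface.
    const DataSourceClient::FindResult found =
        client.findZone(isc::dns::Name("www.example.org"));

    // Start an update that will replace the whole zone; the updater works on
    // a cloned accessor, so it is independent of 'client'.
    ZoneUpdaterPtr updater = client.getUpdater(isc::dns::Name("example.org"), true);
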
The returned updater will be able to work separately from + /// the original client. + virtual ZoneUpdaterPtr getUpdater(const isc::dns::Name& name, + bool replace) const; + +private: + /// \brief The RR class that this client handles. + const isc::dns::RRClass rrclass_; + + /// \brief The accessor to our database. + const boost::shared_ptr accessor_; +}; + +} +} + +#endif // __DATABASE_DATASRC_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/datasrc/datasrc_messages.mes b/src/lib/datasrc/datasrc_messages.mes index c69236452b..04ad6101f0 100644 --- a/src/lib/datasrc/datasrc_messages.mes +++ b/src/lib/datasrc/datasrc_messages.mes @@ -17,63 +17,149 @@ $NAMESPACE isc::datasrc # \brief Messages for the data source library % DATASRC_CACHE_CREATE creating the hotspot cache -Debug information that the hotspot cache was created at startup. +This is a debug message issued during startup when the hotspot cache +is created. % DATASRC_CACHE_DESTROY destroying the hotspot cache Debug information. The hotspot cache is being destroyed. -% DATASRC_CACHE_DISABLE disabling the cache -The hotspot cache is disabled from now on. It is not going to store -information or return anything. +% DATASRC_CACHE_DISABLE disabling the hotspot cache +A debug message issued when the hotspot cache is disabled. -% DATASRC_CACHE_ENABLE enabling the cache -The hotspot cache is enabled from now on. +% DATASRC_CACHE_ENABLE enabling the hotspot cache +A debug message issued when the hotspot cache is enabled. -% DATASRC_CACHE_EXPIRED the item '%1' is expired -Debug information. There was an attempt to look up an item in the hotspot -cache. And the item was actually there, but it was too old, so it was removed -instead and nothing is reported (the external behaviour is the same as with -CACHE_NOT_FOUND). +% DATASRC_CACHE_EXPIRED item '%1' in the hotspot cache has expired +A debug message issued when a hotspot cache lookup located the item but it +had expired. The item was removed and the program proceeded as if the item +had not been found. % DATASRC_CACHE_FOUND the item '%1' was found -Debug information. An item was successfully looked up in the hotspot cache. +Debug information. An item was successfully located in the hotspot cache. -% DATASRC_CACHE_FULL cache is full, dropping oldest +% DATASRC_CACHE_FULL hotspot cache is full, dropping oldest Debug information. After inserting an item into the hotspot cache, the maximum number of items was exceeded, so the least recently used item will be dropped. This should be directly followed by CACHE_REMOVE. -% DATASRC_CACHE_INSERT inserting item '%1' into the cache -Debug information. It means a new item is being inserted into the hotspot +% DATASRC_CACHE_INSERT inserting item '%1' into the hotspot cache +A debug message indicating that a new item is being inserted into the hotspot cache. -% DATASRC_CACHE_NOT_FOUND the item '%1' was not found -Debug information. It was attempted to look up an item in the hotspot cache, -but it is not there. +% DATASRC_CACHE_NOT_FOUND the item '%1' was not found in the hotspot cache +A debug message issued when hotspot cache was searched for the specified +item but it was not found. -% DATASRC_CACHE_OLD_FOUND older instance of cache item found, replacing +% DATASRC_CACHE_OLD_FOUND older instance of hotspot cache item '%1' found, replacing Debug information. While inserting an item into the hotspot cache, an older -instance of an item with the same name was found. The old instance will be -removed. 
This should be directly followed by CACHE_REMOVE. +instance of an item with the same name was found; the old instance will be +removed. This will be directly followed by CACHE_REMOVE. -% DATASRC_CACHE_REMOVE removing '%1' from the cache +% DATASRC_CACHE_REMOVE removing '%1' from the hotspot cache Debug information. An item is being removed from the hotspot cache. -% DATASRC_CACHE_SLOTS setting the cache size to '%1', dropping '%2' items +% DATASRC_CACHE_SLOTS setting the hotspot cache size to '%1', dropping '%2' items The maximum allowed number of items of the hotspot cache is set to the given number. If there are too many, some of them will be dropped. The size of 0 means no limit. +% DATASRC_DATABASE_COVER_NSEC_UNSUPPORTED %1 doesn't support DNSSEC when asked for NSEC data covering %2 +The datasource tried to provide an NSEC proof that the named domain does not +exist, but the database backend doesn't support DNSSEC. No proof is included +in the answer as a result. + +% DATASRC_DATABASE_FIND_RECORDS looking in datasource %1 for record %2/%3 +Debug information. The database data source is looking up records with the given +name and type in the database. + +% DATASRC_DATABASE_FIND_TTL_MISMATCH TTL values differ in %1 for elements of %2/%3/%4, setting to %5 +The datasource backend provided resource records for the given RRset with +different TTL values. This isn't allowed on the wire and is considered +an error, so we set it to the lowest value we found (but we don't modify the +database). The data in database should be checked and fixed. + +% DATASRC_DATABASE_FOUND_DELEGATION Found delegation at %2 in %1 +When searching for a domain, the program met a delegation to a different zone +at the given domain name. It will return that one instead. + +% DATASRC_DATABASE_FOUND_DELEGATION_EXACT Found delegation at %2 (exact match) in %1 +The program found the domain requested, but it is a delegation point to a +different zone, therefore it is not authoritative for this domain name. +It will return the NS record instead. + +% DATASRC_DATABASE_FOUND_DNAME Found DNAME at %2 in %1 +When searching for a domain, the program met a DNAME redirection to a different +place in the domain space at the given domain name. It will return that one +instead. + +% DATASRC_DATABASE_FOUND_EMPTY_NONTERMINAL empty non-terminal %2 in %1 +The domain name doesn't have any RRs, so it doesn't exist in the database. +However, it has a subdomain, so it exists in the DNS address space. So we +return NXRRSET instead of NXDOMAIN. + +% DATASRC_DATABASE_FOUND_NXDOMAIN search in datasource %1 resulted in NXDOMAIN for %2/%3/%4 +The data returned by the database backend did not contain any data for the given +domain name, class and type. + +% DATASRC_DATABASE_FOUND_NXRRSET search in datasource %1 resulted in NXRRSET for %2/%3/%4 +The data returned by the database backend contained data for the given domain +name and class, but not for the given type. + +% DATASRC_DATABASE_FOUND_RRSET search in datasource %1 resulted in RRset %2 +The data returned by the database backend contained data for the given domain +name, and it either matches the type or has a relevant type. The RRset that is +returned is printed. + +% DATASRC_DATABASE_ITERATE iterating zone %1 +The program is reading the whole zone, eg. not searching for data, but going +through each of the RRsets there. + +% DATASRC_DATABASE_ITERATE_END iterating zone finished +While iterating through the zone, the program reached end of the data. 
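For readers new to the .mes format: the %1, %2, ... placeholders in a message are filled, in order, by arg() calls at the logging call site. A hypothetical call site for DATASRC_DATABASE_FIND_RECORDS could look as follows; the logger object and the debug level shown are assumptions, and the actual implementation may differ:

    LOG_DEBUG(logger, DBG_TRACE_DATA, DATASRC_DATABASE_FIND_RECORDS)
        .arg(accessor_->getDBName())   // %1: which database/datasource
        .arg(name)                     // %2: the name being looked up
        .arg(type);                    // %3: the RR type being looked up
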
+ +% DATASRC_DATABASE_ITERATE_NEXT next RRset in zone is %1/%2 +While iterating through the zone, the program extracted next RRset from it. +The name and RRtype of the RRset is indicated in the message. + +% DATASRC_DATABASE_ITERATE_TTL_MISMATCH TTL values differ for RRs of %1/%2/%3, setting to %4 +While iterating through the zone, the time to live for RRs of the given RRset +were found to be different. This isn't allowed on the wire and is considered +an error, so we set it to the lowest value we found (but we don't modify the +database). The data in database should be checked and fixed. + +% DATASRC_DATABASE_WILDCARD constructing RRset %3 from wildcard %2 in %1 +The database doesn't contain directly matching domain, but it does contain a +wildcard one which is being used to synthesize the answer. + +% DATASRC_DATABASE_WILDCARD_CANCEL_NS canceled wildcard match on %2 because %3 contains NS in %1 +The database was queried to provide glue data and it didn't find direct match. +It could create it from given wildcard, but matching wildcards is forbidden +under a zone cut, which was found. Therefore the delegation will be returned +instead. + +% DATASRC_DATABASE_WILDCARD_CANCEL_SUB wildcard %2 can't be used to construct %3 because %4 exists in %1 +The answer could be constructed using the wildcard, but the given subdomain +exists, therefore this name is something like empty non-terminal (actually, +from the protocol point of view, it is empty non-terminal, but the code +discovers it differently). + +% DATASRC_DATABASE_WILDCARD_EMPTY implicit wildcard %2 used to construct %3 in %1 +The given wildcard exists implicitly in the domainspace, as empty nonterminal +(eg. there's something like subdomain.*.example.org, so *.example.org exists +implicitly, but is empty). This will produce NXRRSET, because the constructed +domain is empty as well as the wildcard. + % DATASRC_DO_QUERY handling query for '%1/%2' -Debug information. We're processing some internal query for given name and -type. +A debug message indicating that a query for the given name and RR type is being +processed. % DATASRC_MEM_ADD_RRSET adding RRset '%1/%2' into zone '%3' Debug information. An RRset is being added to the in-memory data source. % DATASRC_MEM_ADD_WILDCARD adding wildcards for '%1' -Debug information. Some special marks above each * in wildcard name are needed. -They are being added now for this name. +This is a debug message issued during the processing of a wildcard +name. The internal domain name tree is scanned and some nodes are +specially marked to allow the wildcard lookup to succeed. % DATASRC_MEM_ADD_ZONE adding zone '%1/%2' Debug information. A zone is being added into the in-memory data source. @@ -114,9 +200,9 @@ stop the search. Debug information. A DNAME was found instead of the requested information. % DATASRC_MEM_DNAME_NS DNAME and NS can't coexist in non-apex domain '%1' -It was requested for DNAME and NS records to be put into the same domain -which is not the apex (the top of the zone). This is forbidden by RFC -2672, section 3. This indicates a problem with provided data. +A request was made for DNAME and NS records to be put into the same +domain which is not the apex (the top of the zone). This is forbidden +by RFC 2672 (section 3) and indicates a problem with provided data. % DATASRC_MEM_DOMAIN_EMPTY requested domain '%1' is empty Debug information. The requested domain exists in the tree of domains, but @@ -142,7 +228,7 @@ in-memory data source. 
% DATASRC_MEM_LOAD loading zone '%1' from file '%2' Debug information. The content of master file is being loaded into the memory. -% DATASRC_MEM_NOTFOUND requested domain '%1' not found +% DATASRC_MEM_NOT_FOUND requested domain '%1' not found Debug information. The requested domain does not exist. % DATASRC_MEM_NS_ENCOUNTERED encountered a NS @@ -201,11 +287,11 @@ behave and BIND 9 refuses that as well. Please describe your intention using different tools. % DATASRC_META_ADD adding a data source into meta data source -Debug information. Yet another data source is being added into the meta data -source. (probably at startup or reconfiguration) +This is a debug message issued during startup or reconfiguration. +Another data source is being added into the meta data source. % DATASRC_META_ADD_CLASS_MISMATCH mismatch between classes '%1' and '%2' -It was attempted to add a data source into a meta data source. But their +It was attempted to add a data source into a meta data source, but their classes do not match. % DATASRC_META_REMOVE removing data source from meta data source @@ -234,11 +320,11 @@ specific error already. The domain lives in another zone. But it is not possible to generate referral information for it. -% DATASRC_QUERY_CACHED data for %1/%2 found in cache +% DATASRC_QUERY_CACHED data for %1/%2 found in hotspot cache Debug information. The requested data were found in the hotspot cache, so no query is sent to the real data source. -% DATASRC_QUERY_CHECK_CACHE checking cache for '%1/%2' +% DATASRC_QUERY_CHECK_CACHE checking hotspot cache for '%1/%2' Debug information. While processing a query, lookup to the hotspot cache is being made. @@ -251,10 +337,9 @@ Debug information. The software is trying to identify delegation points on the way down to the given domain. % DATASRC_QUERY_EMPTY_CNAME CNAME at '%1' is empty -There was an CNAME and it was being followed. But it contains no records, -so there's nowhere to go. There will be no answer. This indicates a problem -with supplied data. -We tried to follow +A CNAME chain was being followed and an entry was found that pointed +to a domain name that had no RRsets associated with it. As a result, +the query cannot be answered. This indicates a problem with supplied data. % DATASRC_QUERY_EMPTY_DNAME the DNAME on '%1' is empty During an attempt to synthesize CNAME from this DNAME it was discovered the @@ -262,11 +347,11 @@ DNAME is empty (it has no records). This indicates problem with supplied data. % DATASRC_QUERY_FAIL query failed Some subtask of query processing failed. The reason should have been reported -already. We are returning SERVFAIL. +already and a SERVFAIL will be returned to the querying system. % DATASRC_QUERY_FOLLOW_CNAME following CNAME at '%1' -Debug information. The domain is a CNAME (or a DNAME and we created a CNAME -for it already), so it's being followed. +Debug information. The domain is a CNAME (or a DNAME and a CNAME for it +has already been created) and the search is following this chain. % DATASRC_QUERY_GET_MX_ADDITIONAL addition of A/AAAA for '%1' requested by MX '%2' Debug information. While processing a query, a MX record was met. It @@ -291,14 +376,14 @@ operation code. Debug information. The last DO_QUERY is an auth query. % DATASRC_QUERY_IS_GLUE glue query (%1/%2) -Debug information. The last DO_QUERY is query for glue addresses. +Debug information. The last DO_QUERY is a query for glue addresses. % DATASRC_QUERY_IS_NOGLUE query for non-glue addresses (%1/%2) -Debug information. 
The last DO_QUERY is query for addresses that are not +Debug information. The last DO_QUERY is a query for addresses that are not glue. % DATASRC_QUERY_IS_REF query for referral (%1/%2) -Debug information. The last DO_QUERY is query for referral information. +Debug information. The last DO_QUERY is a query for referral information. % DATASRC_QUERY_IS_SIMPLE simple query (%1/%2) Debug information. The last DO_QUERY is a simple query. @@ -322,11 +407,11 @@ The underlying data source failed to answer the no-glue query. 1 means some error, 2 is not implemented. The data source should have logged the specific error already. -% DATASRC_QUERY_NO_CACHE_ANY_AUTH ignoring cache for ANY query (%1/%2 in %3 class) +% DATASRC_QUERY_NO_CACHE_ANY_AUTH ignoring hotspot cache for ANY query (%1/%2 in %3 class) Debug information. The hotspot cache is ignored for authoritative ANY queries for consistency reasons. -% DATASRC_QUERY_NO_CACHE_ANY_SIMPLE ignoring cache for ANY query (%1/%2 in %3 class) +% DATASRC_QUERY_NO_CACHE_ANY_SIMPLE ignoring hotspot cache for ANY query (%1/%2 in %3 class) Debug information. The hotspot cache is ignored for ANY queries for consistency reasons. @@ -345,7 +430,7 @@ domain. Maybe someone sent a query to the wrong server for some reason. % DATASRC_QUERY_PROCESS processing query '%1/%2' in the '%3' class Debug information. A sure query is being processed now. -% DATASRC_QUERY_PROVENX_FAIL unable to prove nonexistence of '%1' +% DATASRC_QUERY_PROVE_NX_FAIL unable to prove nonexistence of '%1' The user wants DNSSEC and we discovered the entity doesn't exist (either domain or the record). But there was an error getting NSEC/NSEC3 record to prove the nonexistence. @@ -365,9 +450,9 @@ error, 2 is not implemented. The data source should have logged the specific error already. % DATASRC_QUERY_SYNTH_CNAME synthesizing CNAME from DNAME on '%1' -Debug information. While answering a query, a DNAME was met. The DNAME itself -will be returned, but along with it a CNAME for clients which don't understand -DNAMEs will be synthesized. +This is a debug message. While answering a query, a DNAME was encountered. The +DNAME itself will be returned, along with a synthesized CNAME for clients that +do not understand the DNAME RR. % DATASRC_QUERY_TASK_FAIL task failed with %1 The query subtask failed. The reason should have been reported by the subtask @@ -391,7 +476,7 @@ domain is being looked for now. During an attempt to cover the domain by a wildcard an error happened. The exact kind was hopefully already reported. -% DATASRC_QUERY_WILDCARD_PROVENX_FAIL unable to prove nonexistence of '%1' (%2) +% DATASRC_QUERY_WILDCARD_PROVE_NX_FAIL unable to prove nonexistence of '%1' (%2) While processing a wildcard, it wasn't possible to prove nonexistence of the given domain or record. The code is 1 for error and 2 for not implemented. @@ -401,17 +486,27 @@ enough information for it. The code is 1 for error, 2 for not implemented. % DATASRC_SQLITE_CLOSE closing SQLite database Debug information. The SQLite data source is closing the database file. + +% DATASRC_SQLITE_CONNOPEN Opening sqlite database file '%1' +The database file is being opened so it can start providing data. + +% DATASRC_SQLITE_CONNCLOSE Closing sqlite database +The database file is no longer needed and is being closed. + % DATASRC_SQLITE_CREATE SQLite data source created Debug information. An instance of SQLite data source is being created. % DATASRC_SQLITE_DESTROY SQLite data source destroyed Debug information. 
An instance of SQLite data source is being destroyed. +% DATASRC_SQLITE_DROPCONN SQLite3Database is being deinitialized +The object around a database connection is being destroyed. + % DATASRC_SQLITE_ENCLOSURE looking for zone containing '%1' Debug information. The SQLite data source is trying to identify which zone should hold this domain. -% DATASRC_SQLITE_ENCLOSURE_NOTFOUND no zone contains it +% DATASRC_SQLITE_ENCLOSURE_NOT_FOUND no zone contains '%1' Debug information. The last SQLITE_ENCLOSURE query was unsuccessful; there's no such zone in our data. @@ -459,25 +554,35 @@ source. The SQLite data source was asked to provide a NSEC3 record for given zone. But it doesn't contain that zone. +% DATASRC_SQLITE_NEWCONN SQLite3Database is being initialized +A wrapper object to hold database connection is being initialized. + % DATASRC_SQLITE_OPEN opening SQLite database '%1' Debug information. The SQLite data source is loading an SQLite database in the provided file. % DATASRC_SQLITE_PREVIOUS looking for name previous to '%1' -Debug information. We're trying to look up name preceding the supplied one. +This is a debug message. The name given was not found, so the program +is searching for the next name higher up the hierarchy (e.g. if +www.example.com were queried for and not found, the software searches +for the "previous" name, example.com). % DATASRC_SQLITE_PREVIOUS_NO_ZONE no zone containing '%1' -The SQLite data source tried to identify name preceding this one. But this -one is not contained in any zone in the data source. +The name given was not found, so the program is searching for the next +name higher up the hierarchy (e.g. if www.example.com were queried +for and not found, the software searches for the "previous" name, +example.com). However, this name is not contained in any zone in the +data source. This is an error since it indicates a problem in the earlier +processing of the query. % DATASRC_SQLITE_SETUP setting up SQLite database The database for SQLite data source was found empty. It is assumed this is the first run and it is being initialized with current schema. It'll still contain no data, but it will be ready for use. -% DATASRC_STATIC_BAD_CLASS static data source can handle CH only -For some reason, someone asked the static data source a query that is not in -the CH class. +% DATASRC_STATIC_CLASS_NOT_CH static data source can handle CH class only +An error message indicating that a query requesting a RR for a class other +that CH was sent to the static data source (which only handles CH queries). % DATASRC_STATIC_CREATE creating the static datasource Debug information. The static data source (the one holding stuff like @@ -491,3 +596,37 @@ data source. This indicates a programming error. An internal task of unknown type was generated. +% DATASRC_DATABASE_UPDATER_CREATED zone updater created for '%1/%2' on %3 +Debug information. A zone updater object is created to make updates to +the shown zone on the shown backend database. + +% DATASRC_DATABASE_UPDATER_DESTROYED zone updater destroyed for '%1/%2' on %3 +Debug information. A zone updater object is destroyed, either successfully +or after failure of, making updates to the shown zone on the shown backend +database. + +%DATASRC_DATABASE_UPDATER_ROLLBACK zone updates roll-backed for '%1/%2' on %3 +A zone updater is being destroyed without committing the changes. +This would typically mean the update attempt was aborted due to some +error, but may also be a bug of the application that forgets committing +the changes. 
The intermediate changes made through the updater won't +be applied to the underlying database. The zone name, its class, and +the underlying database name are shown in the log message. + +%DATASRC_DATABASE_UPDATER_ROLLBACKFAIL failed to roll back zone updates for '%1/%2' on %3: %4 +A zone updater is being destroyed without committing the changes to +the database, and attempts to rollback incomplete updates, but it +unexpectedly fails. The higher level implementation does not expect +it to fail, so this means either a serious operational error in the +underlying data source (such as a system failure of a database) or +software bug in the underlying data source implementation. In either +case if this message is logged the administrator should carefully +examine the underlying data source to see what exactly happens and +whether the data is still valid. The zone name, its class, and the +underlying database name as well as the error message thrown from the +database module are shown in the log message. + +% DATASRC_DATABASE_UPDATER_COMMIT updates committed for '%1/%2' on %3 +Debug information. A set of updates to a zone has been successfully +committed to the corresponding database backend. The zone name, +its class and the database name are printed. diff --git a/src/lib/datasrc/factory.cc b/src/lib/datasrc/factory.cc new file mode 100644 index 0000000000..eddd4f41c1 --- /dev/null +++ b/src/lib/datasrc/factory.cc @@ -0,0 +1,82 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include "factory.h" + +#include "data_source.h" +#include "database.h" +#include "sqlite3_accessor.h" +#include "memory_datasrc.h" + +#include + +#include + +using namespace isc::data; +using namespace isc::datasrc; + +namespace isc { +namespace datasrc { + +LibraryContainer::LibraryContainer(const std::string& name) { + ds_lib_ = dlopen(name.c_str(), RTLD_NOW | RTLD_LOCAL); + if (ds_lib_ == NULL) { + isc_throw(DataSourceLibraryError, dlerror()); + } +} + +LibraryContainer::~LibraryContainer() { + dlclose(ds_lib_); +} + +void* +LibraryContainer::getSym(const char* name) { + // Since dlsym can return NULL on success, we check for errors by + // first clearing any existing errors with dlerror(), then calling dlsym, + // and finally checking for errors with dlerror() + dlerror(); + + void *sym = dlsym(ds_lib_, name); + + const char* dlsym_error = dlerror(); + if (dlsym_error != NULL) { + isc_throw(DataSourceLibrarySymbolError, dlsym_error); + } + + return (sym); +} + +DataSourceClientContainer::DataSourceClientContainer(const std::string& type, + ConstElementPtr config) +: ds_lib_(type + "_ds.so") +{ + // We are casting from a data to a function pointer here + // Some compilers (rightfully) complain about that, but + // c-style casts are accepted the most here. 
If we run + // into any that also don't like this, we might need to + // use some form of union cast or memory copy to get + // from the void* to the function pointer. + ds_creator* ds_create = (ds_creator*)ds_lib_.getSym("createInstance"); + destructor_ = (ds_destructor*)ds_lib_.getSym("destroyInstance"); + + instance_ = ds_create(config); +} + +DataSourceClientContainer::~DataSourceClientContainer() { + destructor_(instance_); +} + +} // end namespace datasrc +} // end namespace isc + diff --git a/src/lib/datasrc/factory.h b/src/lib/datasrc/factory.h new file mode 100644 index 0000000000..8db9ec91dd --- /dev/null +++ b/src/lib/datasrc/factory.h @@ -0,0 +1,182 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __DATA_SOURCE_FACTORY_H +#define __DATA_SOURCE_FACTORY_H 1 + +#include + +#include +#include +#include + +#include + +namespace isc { +namespace datasrc { + + +/// \brief Raised if there is an error loading the datasource implementation +/// library +class DataSourceLibraryError : public DataSourceError { +public: + DataSourceLibraryError(const char* file, size_t line, const char* what) : + DataSourceError(file, line, what) {} +}; + +/// \brief Raised if there is an error reading a symbol from the datasource +/// implementation library +class DataSourceLibrarySymbolError : public DataSourceError { +public: + DataSourceLibrarySymbolError(const char* file, size_t line, + const char* what) : + DataSourceError(file, line, what) {} +}; + +/// \brief Raised if the given config contains bad data +/// +/// Depending on the datasource type, the configuration may differ (for +/// instance, the sqlite3 datasource needs a database file). +class DataSourceConfigError : public DataSourceError { +public: + DataSourceConfigError(const char* file, size_t line, const char* what) : + DataSourceError(file, line, what) {} + // This exception is created in the dynamic modules. Apparently + // sunstudio can't handle it if we then automatically derive the + // destructor, so we provide it explicitely + ~DataSourceConfigError() throw() {} +}; + +typedef DataSourceClient* ds_creator(isc::data::ConstElementPtr config); +typedef void ds_destructor(DataSourceClient* instance); + +/// \brief Container class for dynamically loaded libraries +/// +/// This class is used to dlopen() a library, provides access to dlsym(), +/// and cleans up the dlopened library when the instance of this class is +/// destroyed. +/// +/// Its main function is to provide RAII-style access to dlopen'ed libraries. +/// +/// \note Currently it is Datasource-backend specific. If we have need for this +/// in other places than for dynamically loading datasources, then, apart +/// from moving it to another location, we also need to make the +/// exceptions raised more general. 
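To show how these pieces fit together, here is a sketch of loading a backend by hand with the container and the two typedefs above (the configuration value is left abstract); this is exactly the pattern that DataSourceClientContainer, described below, wraps in RAII form:

    LibraryContainer lib("sqlite3_ds.so");    // may throw DataSourceLibraryError
    // Both lookups may throw DataSourceLibrarySymbolError.
    ds_creator* create = (ds_creator*)lib.getSym("createInstance");
    ds_destructor* destroy = (ds_destructor*)lib.getSym("destroyInstance");
    // 'config' is a backend-specific isc::data::ConstElementPtr.
    DataSourceClient* client = create(config);
    // ... use *client ..., then release it through the library's destructor:
    destroy(client);
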
+class LibraryContainer : boost::noncopyable { +public: + /// \brief Constructor + /// + /// \param name The name of the library (.so) file. This file must be in + /// the library path. + /// + /// \exception DataSourceLibraryError If the library cannot be found or + /// cannot be loaded. + LibraryContainer(const std::string& name); + + /// \brief Destructor + /// + /// Cleans up the library by calling dlclose() + ~LibraryContainer(); + + /// \brief Retrieve a symbol + /// + /// This retrieves a symbol from the loaded library. + /// + /// \exception DataSourceLibrarySymbolError if the symbol cannot be found, + /// or if another error (as reported by dlerror() occurs. + /// + /// \param name The name of the symbol to retrieve + /// \return A pointer to the symbol. This may be NULL, and if so, indicates + /// the symbol does indeed exist, but has the value NULL itself. + /// If the symbol does not exist, a DataSourceLibrarySymbolError is + /// raised. + /// + /// \note The argument is a const char* (and not a std::string like the + /// argument in the constructor). This argument is always a fixed + /// string in the code, while the other can be read from + /// configuration, and needs modification + void* getSym(const char* name); +private: + /// Pointer to the dynamically loaded library structure + void *ds_lib_; +}; + + +/// \brief Container for a specific instance of a dynamically loaded +/// DataSourceClient implementation +/// +/// Given a datasource type and a type-specific set of configuration data, +/// the corresponding dynamic library is loaded (if it hadn't been already), +/// and an instance is created. This instance is stored within this structure, +/// and can be accessed through getInstance(). Upon destruction of this +/// container, the stored instance of the DataSourceClient is deleted with +/// the destructor function provided by the loaded library. +/// +/// The 'type' is actually the name of the library, minus the '_ds.so' postfix +/// Datasource implementation libraries therefore have a fixed name, both for +/// easy recognition and to reduce potential mistakes. +/// For example, the sqlite3 implementation has the type 'sqlite3', and the +/// derived filename 'sqlite3_ds.so' +/// +/// There are of course some demands to an implementation, not all of which +/// can be verified compile-time. It must provide a creator and destructor +/// functions. The creator function must return an instance of a subclass of +/// DataSourceClient. The prototypes of these functions are as follows: +/// \code +/// extern "C" DataSourceClient* createInstance(isc::data::ConstElementPtr cfg); +/// +/// extern "C" void destroyInstance(isc::data::DataSourceClient* instance); +/// \endcode +class DataSourceClientContainer : boost::noncopyable { +public: + /// \brief Constructor + /// + /// \exception DataSourceLibraryError if there is an error loading the + /// backend library + /// \exception DataSourceLibrarySymbolError if the library does not have + /// the needed symbols, or if there is an error reading them + /// \exception DataSourceConfigError if the given config is not correct + /// for the given type + /// + /// \param type The type of the datasource client. 
Based on the value of + /// type, a specific backend library is used, by appending the + /// string '_ds.so' to the given type, and loading that as the + /// implementation library + /// \param config Type-specific configuration data, see the documentation + /// of the datasource backend type for information on what + /// configuration data to pass. + DataSourceClientContainer(const std::string& type, + isc::data::ConstElementPtr config); + + /// \brief Destructor + ~DataSourceClientContainer(); + + /// \brief Accessor to the instance + /// + /// \return Reference to the DataSourceClient instance contained in this + /// container. + DataSourceClient& getInstance() { return *instance_; } + +private: + DataSourceClient* instance_; + ds_destructor* destructor_; + LibraryContainer ds_lib_; +}; + +} // end namespace datasrc +} // end namespace isc +#endif // DATA_SOURCE_FACTORY_H +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/datasrc/iterator.h b/src/lib/datasrc/iterator.h new file mode 100644 index 0000000000..0102fcb9e5 --- /dev/null +++ b/src/lib/datasrc/iterator.h @@ -0,0 +1,61 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include + +namespace isc { +namespace datasrc { + +/** + * \brief Read-only iterator to a zone. + * + * You can get an instance of (descendand of) ZoneIterator from + * DataSourceClient::getIterator() method. The actual concrete implementation + * will be different depending on the actual data source used. This is the + * abstract interface. + * + * There's no way to start iterating from the beginning again or return. + */ +class ZoneIterator : public boost::noncopyable { +public: + /** + * \brief Destructor + * + * Virtual destructor. It is empty, but ensures the right destructor from + * descendant is called. + */ + virtual ~ ZoneIterator() { } + + /** + * \brief Get next RRset from the zone. + * + * This returns the next RRset in the zone as a shared pointer. The + * shared pointer is used to allow both accessing in-memory data and + * automatic memory management. + * + * Any special order is not guaranteed. + * + * While this can potentially throw anything (including standard allocation + * errors), it should be rare. + * + * \return Pointer to the next RRset or NULL pointer when the iteration + * gets to the end of the zone. 
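A typical way to drain such an iterator (sketch only; 'client' is any DataSourceClient and the zone name is arbitrary):

    ZoneIteratorPtr it = client.getIterator(isc::dns::Name("example.org"));
    for (isc::dns::ConstRRsetPtr rrset = it->getNextRRset();
         rrset;                                  // a NULL pointer marks the end
         rrset = it->getNextRRset()) {
        // process one RRset; no particular order is guaranteed
    }
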
+ */ + virtual isc::dns::ConstRRsetPtr getNextRRset() = 0; +}; + +} +} diff --git a/src/lib/datasrc/memory_datasrc.cc b/src/lib/datasrc/memory_datasrc.cc index 3c57d1b087..4c9e53f595 100644 --- a/src/lib/datasrc/memory_datasrc.cc +++ b/src/lib/datasrc/memory_datasrc.cc @@ -16,6 +16,9 @@ #include #include #include +#include + +#include #include #include @@ -25,17 +28,44 @@ #include #include #include +#include +#include +#include + +#include using namespace std; using namespace isc::dns; +using namespace isc::data; namespace isc { namespace datasrc { -// Private data and hidden methods of MemoryZone -struct MemoryZone::MemoryZoneImpl { +namespace { +// Some type aliases +/* + * Each domain consists of some RRsets. They will be looked up by the + * RRType. + * + * The use of map is questionable with regard to performance - there'll + * be usually only few RRsets in the domain, so the log n benefit isn't + * much and a vector/array might be faster due to its simplicity and + * continuous memory location. But this is unlikely to be a performance + * critical place and map has better interface for the lookups, so we use + * that. + */ +typedef map Domain; +typedef Domain::value_type DomainPair; +typedef boost::shared_ptr DomainPtr; +// The tree stores domains +typedef RBTree DomainTree; +typedef RBNode DomainNode; +} + +// Private data and hidden methods of InMemoryZoneFinder +struct InMemoryZoneFinder::InMemoryZoneFinderImpl { // Constructor - MemoryZoneImpl(const RRClass& zone_class, const Name& origin) : + InMemoryZoneFinderImpl(const RRClass& zone_class, const Name& origin) : zone_class_(zone_class), origin_(origin), origin_data_(NULL), domains_(true) { @@ -44,25 +74,6 @@ struct MemoryZone::MemoryZoneImpl { DomainPtr origin_domain(new Domain); origin_data_->setData(origin_domain); } - - // Some type aliases - /* - * Each domain consists of some RRsets. They will be looked up by the - * RRType. - * - * The use of map is questionable with regard to performance - there'll - * be usually only few RRsets in the domain, so the log n benefit isn't - * much and a vector/array might be faster due to its simplicity and - * continuous memory location. But this is unlikely to be a performance - * critical place and map has better interface for the lookups, so we use - * that. - */ - typedef map Domain; - typedef Domain::value_type DomainPair; - typedef boost::shared_ptr DomainPtr; - // The tree stores domains - typedef RBTree DomainTree; - typedef RBNode DomainNode; static const DomainNode::Flags DOMAINFLAG_WILD = DomainNode::FLAG_USER1; // Information about the zone @@ -129,7 +140,7 @@ struct MemoryZone::MemoryZoneImpl { // Ensure CNAME and other type of RR don't coexist for the same // owner name. if (rrset->getType() == RRType::CNAME()) { - // XXX: this check will become incorrect when we support DNSSEC + // TODO: this check will become incorrect when we support DNSSEC // (depending on how we support DNSSEC). We should revisit it // at that point. if (!domain->empty()) { @@ -223,12 +234,15 @@ struct MemoryZone::MemoryZoneImpl { * Implementation of longer methods. We put them here, because the * access is without the impl_-> and it will get inlined anyway. */ - // Implementation of MemoryZone::add + // Implementation of InMemoryZoneFinder::add result::Result add(const ConstRRsetPtr& rrset, DomainTree* domains) { + // Sanitize input. This will cause an exception to be thrown + // if the input RRset is empty. + addValidation(rrset); + + // OK, can add the RRset. 
LOG_DEBUG(logger, DBG_TRACE_DATA, DATASRC_MEM_ADD_RRSET). arg(rrset->getName()).arg(rrset->getType()).arg(origin_); - // Sanitize input - addValidation(rrset); // Add wildcards possibly contained in the owner name to the domain // tree. @@ -406,7 +420,7 @@ struct MemoryZone::MemoryZoneImpl { } } - // Implementation of MemoryZone::find + // Implementation of InMemoryZoneFinder::find FindResult find(const Name& name, RRType type, RRsetList* target, const FindOptions options) const { @@ -520,7 +534,7 @@ struct MemoryZone::MemoryZoneImpl { // fall through case DomainTree::NOTFOUND: - LOG_DEBUG(logger, DBG_TRACE_DATA, DATASRC_MEM_NOTFOUND). + LOG_DEBUG(logger, DBG_TRACE_DATA, DATASRC_MEM_NOT_FOUND). arg(name); return (FindResult(NXDOMAIN, ConstRRsetPtr())); case DomainTree::EXACTMATCH: // This one is OK, handle it @@ -590,50 +604,50 @@ struct MemoryZone::MemoryZoneImpl { } }; -MemoryZone::MemoryZone(const RRClass& zone_class, const Name& origin) : - impl_(new MemoryZoneImpl(zone_class, origin)) +InMemoryZoneFinder::InMemoryZoneFinder(const RRClass& zone_class, const Name& origin) : + impl_(new InMemoryZoneFinderImpl(zone_class, origin)) { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_MEM_CREATE).arg(origin). arg(zone_class); } -MemoryZone::~MemoryZone() { +InMemoryZoneFinder::~InMemoryZoneFinder() { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_MEM_DESTROY).arg(getOrigin()). arg(getClass()); delete impl_; } -const Name& -MemoryZone::getOrigin() const { +Name +InMemoryZoneFinder::getOrigin() const { return (impl_->origin_); } -const RRClass& -MemoryZone::getClass() const { +RRClass +InMemoryZoneFinder::getClass() const { return (impl_->zone_class_); } -Zone::FindResult -MemoryZone::find(const Name& name, const RRType& type, - RRsetList* target, const FindOptions options) const +ZoneFinder::FindResult +InMemoryZoneFinder::find(const Name& name, const RRType& type, + RRsetList* target, const FindOptions options) { return (impl_->find(name, type, target, options)); } result::Result -MemoryZone::add(const ConstRRsetPtr& rrset) { +InMemoryZoneFinder::add(const ConstRRsetPtr& rrset) { return (impl_->add(rrset, &impl_->domains_)); } void -MemoryZone::load(const string& filename) { +InMemoryZoneFinder::load(const string& filename) { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_MEM_LOAD).arg(getOrigin()). arg(filename); // Load it into a temporary tree - MemoryZoneImpl::DomainTree tmp; + DomainTree tmp; masterLoad(filename.c_str(), getOrigin(), getClass(), - boost::bind(&MemoryZoneImpl::addFromLoad, impl_, _1, &tmp)); + boost::bind(&InMemoryZoneFinderImpl::addFromLoad, impl_, _1, &tmp)); // If it went well, put it inside impl_->file_name_ = filename; tmp.swap(impl_->domains_); @@ -641,64 +655,294 @@ MemoryZone::load(const string& filename) { } void -MemoryZone::swap(MemoryZone& zone) { +InMemoryZoneFinder::swap(InMemoryZoneFinder& zone_finder) { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_MEM_SWAP).arg(getOrigin()). 
- arg(zone.getOrigin()); - std::swap(impl_, zone.impl_); + arg(zone_finder.getOrigin()); + std::swap(impl_, zone_finder.impl_); } const string -MemoryZone::getFileName() const { +InMemoryZoneFinder::getFileName() const { return (impl_->file_name_); } -/// Implementation details for \c MemoryDataSrc hidden from the public +isc::dns::Name +InMemoryZoneFinder::findPreviousName(const isc::dns::Name&) const { + isc_throw(NotImplemented, "InMemory data source doesn't support DNSSEC " + "yet, can't find previous name"); +} + +/// Implementation details for \c InMemoryClient hidden from the public /// interface. /// -/// For now, \c MemoryDataSrc only contains a \c ZoneTable object, which -/// consists of (pointers to) \c MemoryZone objects, we may add more +/// For now, \c InMemoryClient only contains a \c ZoneTable object, which +/// consists of (pointers to) \c InMemoryZoneFinder objects, we may add more /// member variables later for new features. -class MemoryDataSrc::MemoryDataSrcImpl { +class InMemoryClient::InMemoryClientImpl { public: - MemoryDataSrcImpl() : zone_count(0) {} + InMemoryClientImpl() : zone_count(0) {} unsigned int zone_count; ZoneTable zone_table; }; -MemoryDataSrc::MemoryDataSrc() : impl_(new MemoryDataSrcImpl) +InMemoryClient::InMemoryClient() : impl_(new InMemoryClientImpl) {} -MemoryDataSrc::~MemoryDataSrc() { +InMemoryClient::~InMemoryClient() { delete impl_; } unsigned int -MemoryDataSrc::getZoneCount() const { +InMemoryClient::getZoneCount() const { return (impl_->zone_count); } result::Result -MemoryDataSrc::addZone(ZonePtr zone) { - if (!zone) { +InMemoryClient::addZone(ZoneFinderPtr zone_finder) { + if (!zone_finder) { isc_throw(InvalidParameter, - "Null pointer is passed to MemoryDataSrc::addZone()"); + "Null pointer is passed to InMemoryClient::addZone()"); } LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_MEM_ADD_ZONE). 
- arg(zone->getOrigin()).arg(zone->getClass().toText()); + arg(zone_finder->getOrigin()).arg(zone_finder->getClass().toText()); - const result::Result result = impl_->zone_table.addZone(zone); + const result::Result result = impl_->zone_table.addZone(zone_finder); if (result == result::SUCCESS) { ++impl_->zone_count; } return (result); } -MemoryDataSrc::FindResult -MemoryDataSrc::findZone(const isc::dns::Name& name) const { +InMemoryClient::FindResult +InMemoryClient::findZone(const isc::dns::Name& name) const { LOG_DEBUG(logger, DBG_TRACE_DATA, DATASRC_MEM_FIND_ZONE).arg(name); - return (FindResult(impl_->zone_table.findZone(name).code, - impl_->zone_table.findZone(name).zone)); + ZoneTable::FindResult result(impl_->zone_table.findZone(name)); + return (FindResult(result.code, result.zone)); } + +namespace { + +class MemoryIterator : public ZoneIterator { +private: + RBTreeNodeChain chain_; + Domain::const_iterator dom_iterator_; + const DomainTree& tree_; + const DomainNode* node_; + bool ready_; +public: + MemoryIterator(const DomainTree& tree, const Name& origin) : + tree_(tree), + ready_(true) + { + // Find the first node (origin) and preserve the node chain for future + // searches + DomainTree::Result result(tree_.find(origin, &node_, chain_, + NULL, NULL)); + // It can't happen that the origin is not in there + if (result != DomainTree::EXACTMATCH) { + isc_throw(Unexpected, + "In-memory zone corrupted, missing origin node"); + } + // Initialize the iterator if there's somewhere to point to + if (node_ != NULL && node_->getData() != DomainPtr()) { + dom_iterator_ = node_->getData()->begin(); + } + } + + virtual ConstRRsetPtr getNextRRset() { + if (!ready_) { + isc_throw(Unexpected, "Iterating past the zone end"); + } + /* + * This cycle finds the first nonempty node with yet unused RRset. + * If it is NULL, we run out of nodes. If it is empty, it doesn't + * contain any RRsets. If we are at the end, just get to next one. + */ + while (node_ != NULL && (node_->getData() == DomainPtr() || + dom_iterator_ == node_->getData()->end())) { + node_ = tree_.nextNode(chain_); + // If there's a node, initialize the iterator and check next time + // if the map is empty or not + if (node_ != NULL && node_->getData() != NULL) { + dom_iterator_ = node_->getData()->begin(); + } + } + if (node_ == NULL) { + // That's all, folks + ready_ = false; + return (ConstRRsetPtr()); + } + // The iterator points to the next yet unused RRset now + ConstRRsetPtr result(dom_iterator_->second); + // This one is used, move it to the next time for next call + ++dom_iterator_; + + return (result); + } +}; + +} // End of anonymous namespace + +ZoneIteratorPtr +InMemoryClient::getIterator(const Name& name) const { + ZoneTable::FindResult result(impl_->zone_table.findZone(name)); + if (result.code != result::SUCCESS) { + isc_throw(DataSourceError, "No such zone: " + name.toText()); + } + + const InMemoryZoneFinder* + zone(dynamic_cast(result.zone.get())); + if (zone == NULL) { + /* + * TODO: This can happen only during some of the tests and only as + * a temporary solution. This should be fixed by #1159 and then + * this cast and check shouldn't be necessary. We don't have + * test for handling a "can not happen" condition. 
+ */ + isc_throw(Unexpected, "The zone at " + name.toText() + + " is not InMemoryZoneFinder"); + } + return (ZoneIteratorPtr(new MemoryIterator(zone->impl_->domains_, name))); +} + +ZoneUpdaterPtr +InMemoryClient::getUpdater(const isc::dns::Name&, bool) const { + isc_throw(isc::NotImplemented, "Update attempt on in memory data source"); +} + + +namespace { +// convencience function to add an error message to a list of those +// (TODO: move functions like these to some util lib?) +void +addError(ElementPtr errors, const std::string& error) { + if (errors != ElementPtr() && errors->getType() == Element::list) { + errors->add(Element::create(error)); + } +} + +/// Check if the given element exists in the map, and if it is a string +bool +checkConfigElementString(ConstElementPtr config, const std::string& name, + ElementPtr errors) +{ + if (!config->contains(name)) { + addError(errors, + "Config for memory backend does not contain a '" + "type" + "' value"); + return false; + } else if (!config->get(name) || + config->get(name)->getType() != Element::string) { + addError(errors, "value of " + name + + " in memory backend config is not a string"); + return false; + } else { + return true; + } +} + +bool +checkZoneConfig(ConstElementPtr config, ElementPtr errors) { + bool result = true; + if (!config || config->getType() != Element::map) { + addError(errors, "Elements in memory backend's zone list must be maps"); + result = false; + } else { + if (!checkConfigElementString(config, "origin", errors)) { + result = false; + } + if (!checkConfigElementString(config, "file", errors)) { + result = false; + } + // we could add some existence/readabilty/parsability checks here + // if we want + } + return result; +} + +bool +checkConfig(ConstElementPtr config, ElementPtr errors) { + /* Specific configuration is under discussion, right now this accepts + * the 'old' configuration, see [TODO] + * So for memory datasource, we get a structure like this: + * { "type": string ("memory"), + * "class": string ("IN"/"CH"/etc), + * "zones": list + * } + * Zones list is a list of maps: + * { "origin": string, + * "file": string + * } + * + * At this moment we cannot be completely sure of the contents of the + * structure, so we have to do some more extensive tests than should + * strictly be necessary (e.g. 
existence and type of elements) + */ + bool result = true; + + if (!config || config->getType() != Element::map) { + addError(errors, "Base config for memory backend must be a map"); + result = false; + } else { + if (!checkConfigElementString(config, "type", errors)) { + result = false; + } else { + if (config->get("type")->stringValue() != "memory") { + addError(errors, + "Config for memory backend is not of type \"memory\""); + result = false; + } + } + if (!checkConfigElementString(config, "class", errors)) { + result = false; + } else { + try { + RRClass rrc(config->get("class")->stringValue()); + } catch (const isc::Exception& rrce) { + addError(errors, + "Error parsing class config for memory backend: " + + std::string(rrce.what())); + result = false; + } + } + if (!config->contains("zones")) { + addError(errors, "No 'zones' element in memory backend config"); + result = false; + } else if (!config->get("zones") || + config->get("zones")->getType() != Element::list) { + addError(errors, "'zones' element in memory backend config is not a list"); + result = false; + } else { + BOOST_FOREACH(ConstElementPtr zone_config, + config->get("zones")->listValue()) { + if (!checkZoneConfig(zone_config, errors)) { + result = false; + } + } + } + } + + return (result); + return true; +} + +} // end anonymous namespace + +DataSourceClient * +createInstance(isc::data::ConstElementPtr config) { + ElementPtr errors(Element::createList()); + if (!checkConfig(config, errors)) { + isc_throw(DataSourceConfigError, errors->str()); + } + return (new InMemoryClient()); +} + +void destroyInstance(DataSourceClient* instance) { + delete instance; +} + + } // end of namespace datasrc -} // end of namespace dns +} // end of namespace isc diff --git a/src/lib/datasrc/memory_datasrc.h b/src/lib/datasrc/memory_datasrc.h index 99bb4e81bc..cf467a2423 100644 --- a/src/lib/datasrc/memory_datasrc.h +++ b/src/lib/datasrc/memory_datasrc.h @@ -17,7 +17,12 @@ #include +#include + #include +#include + +#include namespace isc { namespace dns { @@ -27,18 +32,17 @@ class RRsetList; namespace datasrc { -/// A derived zone class intended to be used with the memory data source. -class MemoryZone : public Zone { +/// A derived zone finder class intended to be used with the memory data source. +/// +/// Conceptually this "finder" maintains a local in-memory copy of all RRs +/// of a single zone from some kind of source (right now it's a textual +/// master file, but it could also be another data source with a database +/// backend). This is why the class has methods like \c load() or \c add(). +/// +/// This class is non copyable. +class InMemoryZoneFinder : boost::noncopyable, public ZoneFinder { /// /// \name Constructors and Destructor. - /// - /// \b Note: - /// The copy constructor and the assignment operator are intentionally - /// defined as private, making this class non copyable. - //@{ -private: - MemoryZone(const MemoryZone& source); - MemoryZone& operator=(const MemoryZone& source); public: /// \brief Constructor from zone parameters. /// @@ -48,17 +52,18 @@ public: /// /// \param rrclass The RR class of the zone. /// \param origin The origin name of the zone. - MemoryZone(const isc::dns::RRClass& rrclass, const isc::dns::Name& origin); + InMemoryZoneFinder(const isc::dns::RRClass& rrclass, + const isc::dns::Name& origin); /// The destructor. - virtual ~MemoryZone(); + virtual ~InMemoryZoneFinder(); //@} /// \brief Returns the origin of the zone. 
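For illustration, the configuration shape that checkConfig() accepts for the memory backend can be exercised through the factory function roughly like this (a sketch only: Element::fromJSON and the zone file name are illustrative, and note that this createInstance() merely validates the structure and returns an empty InMemoryClient; it does not load the listed zones):

    isc::data::ConstElementPtr config = isc::data::Element::fromJSON(
        "{\"type\": \"memory\", \"class\": \"IN\","
        " \"zones\": [{\"origin\": \"example.org\","
        "              \"file\": \"example.org.zone\"}]}");
    // Validates the config; throws DataSourceConfigError if it is malformed.
    isc::datasrc::DataSourceClient* client = isc::datasrc::createInstance(config);
    isc::datasrc::destroyInstance(client);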
- virtual const isc::dns::Name& getOrigin() const; + virtual isc::dns::Name getOrigin() const; /// \brief Returns the class of the zone. - virtual const isc::dns::RRClass& getClass() const; + virtual isc::dns::RRClass getClass() const; /// \brief Looks up an RRset in the zone. /// @@ -70,7 +75,13 @@ public: virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, - const FindOptions options = FIND_DEFAULT) const; + const FindOptions options = FIND_DEFAULT); + + /// \brief Imelementation of the ZoneFinder::findPreviousName method + /// + /// This one throws NotImplemented exception, as InMemory doesn't + /// support DNSSEC currently. + virtual isc::dns::Name findPreviousName(const isc::dns::Name& query) const; /// \brief Inserts an rrset into the zone. /// @@ -128,14 +139,14 @@ public: /// Return the master file name of the zone /// /// This method returns the name of the zone's master file to be loaded. - /// The returned string will be an empty unless the zone has successfully - /// loaded a zone. + /// The returned string will be an empty unless the zone finder has + /// successfully loaded a zone. /// /// This method should normally not throw an exception. But the creation /// of the return string may involve a resource allocation, and if it /// fails, the corresponding standard exception will be thrown. /// - /// \return The name of the zone file loaded in the zone, or an empty + /// \return The name of the zone file loaded in the zone finder, or an empty /// string if the zone hasn't loaded any file. const std::string getFileName() const; @@ -164,144 +175,147 @@ public: /// configuration reloading is written. void load(const std::string& filename); - /// Exchanges the content of \c this zone with that of the given \c zone. + /// Exchanges the content of \c this zone finder with that of the given + /// \c zone_finder. /// /// This method never throws an exception. /// - /// \param zone Another \c MemoryZone object which is to be swapped with - /// \c this zone. - void swap(MemoryZone& zone); + /// \param zone_finder Another \c InMemoryZone object which is to + /// be swapped with \c this zone finder. + void swap(InMemoryZoneFinder& zone_finder); private: /// \name Hidden private data //@{ - struct MemoryZoneImpl; - MemoryZoneImpl* impl_; + struct InMemoryZoneFinderImpl; + InMemoryZoneFinderImpl* impl_; //@} + // The friend here is for InMemoryClient::getIterator. The iterator + // needs to access the data inside the zone, so the InMemoryClient + // extracts the pointer to data and puts it into the iterator. + // The access is read only. + friend class InMemoryClient; }; -/// \brief A data source that uses in memory dedicated backend. +/// \brief A data source client that holds all necessary data in memory. /// -/// The \c MemoryDataSrc class represents a data source and provides a -/// basic interface to help DNS lookup processing. For a given domain -/// name, its \c findZone() method searches the in memory dedicated backend -/// for the zone that gives a longest match against that name. +/// The \c InMemoryClient class provides an access to a conceptual data +/// source that maintains all necessary data in a memory image, thereby +/// allowing much faster lookups. 
The in memory data is a copy of some +/// real physical source - in the current implementation a list of zones +/// are populated as a result of \c addZone() calls; zone data is given +/// in a standard master file (but there's a plan to use database backends +/// as a source of the in memory data). /// -/// The in memory dedicated backend are assumed to be of the same RR class, -/// but the \c MemoryDataSrc class does not enforce the assumption through +/// Although every data source client is assumed to be of the same RR class, +/// the \c InMemoryClient class does not enforce the assumption through /// its interface. /// For example, the \c addZone() method does not check if the new zone is of -/// the same RR class as that of the others already in the dedicated backend. +/// the same RR class as that of the others already in memory. /// It is caller's responsibility to ensure this assumption. /// /// Notes to developer: /// -/// For now, we don't make it a derived class of AbstractDataSrc because the -/// interface is so different (we'll eventually consider this as part of the -/// generalization work). -/// /// The addZone() method takes a (Boost) shared pointer because it would be /// inconvenient to require the caller to maintain the ownership of zones, /// while it wouldn't be safe to delete unnecessary zones inside the dedicated /// backend. /// -/// The findZone() method takes a domain name and returns the best matching \c -/// MemoryZone in the form of (Boost) shared pointer, so that it can provide -/// the general interface for all data sources. -class MemoryDataSrc { +/// The findZone() method takes a domain name and returns the best matching +/// \c InMemoryZoneFinder in the form of (Boost) shared pointer, so that it can +/// provide the general interface for all data sources. +class InMemoryClient : public DataSourceClient { public: - /// \brief A helper structure to represent the search result of - /// MemoryDataSrc::find(). - /// - /// This is a straightforward pair of the result code and a share pointer - /// to the found zone to represent the result of \c find(). - /// We use this in order to avoid overloading the return value for both - /// the result code ("success" or "not found") and the found object, - /// i.e., avoid using \c NULL to mean "not found", etc. - /// - /// This is a simple value class with no internal state, so for - /// convenience we allow the applications to refer to the members - /// directly. - /// - /// See the description of \c find() for the semantics of the member - /// variables. - struct FindResult { - FindResult(result::Result param_code, const ZonePtr param_zone) : - code(param_code), zone(param_zone) - {} - const result::Result code; - const ZonePtr zone; - }; - /// /// \name Constructors and Destructor. /// - /// \b Note: - /// The copy constructor and the assignment operator are intentionally - /// defined as private, making this class non copyable. //@{ -private: - MemoryDataSrc(const MemoryDataSrc& source); - MemoryDataSrc& operator=(const MemoryDataSrc& source); -public: /// Default constructor. /// /// This constructor internally involves resource allocation, and if /// it fails, a corresponding standard exception will be thrown. /// It never throws an exception otherwise. - MemoryDataSrc(); + InMemoryClient(); /// The destructor. - ~MemoryDataSrc(); + ~InMemoryClient(); //@} - /// Return the number of zones stored in the data source. + /// Return the number of zones stored in the client. 
/// /// This method never throws an exception. /// - /// \return The number of zones stored in the data source. + /// \return The number of zones stored in the client. unsigned int getZoneCount() const; - /// Add a \c Zone to the \c MemoryDataSrc. + /// Add a zone (in the form of \c ZoneFinder) to the \c InMemoryClient. /// - /// \c Zone must not be associated with a NULL pointer; otherwise + /// \c zone_finder must not be associated with a NULL pointer; otherwise /// an exception of class \c InvalidParameter will be thrown. /// If internal resource allocation fails, a corresponding standard /// exception will be thrown. /// This method never throws an exception otherwise. /// - /// \param zone A \c Zone object to be added. - /// \return \c result::SUCCESS If the zone is successfully - /// added to the memory data source. + /// \param zone_finder A \c ZoneFinder object to be added. + /// \return \c result::SUCCESS If the zone_finder is successfully + /// added to the client. /// \return \c result::EXIST The memory data source already /// stores a zone that has the same origin. - result::Result addZone(ZonePtr zone); + result::Result addZone(ZoneFinderPtr zone_finder); - /// Find a \c Zone that best matches the given name in the \c MemoryDataSrc. + /// Returns a \c ZoneFinder for a zone_finder that best matches the given + /// name. /// - /// It searches the internal storage for a \c Zone that gives the - /// longest match against \c name, and returns the result in the - /// form of a \c FindResult object as follows: - /// - \c code: The result code of the operation. - /// - \c result::SUCCESS: A zone that gives an exact match - // is found - /// - \c result::PARTIALMATCH: A zone whose origin is a - // super domain of \c name is found (but there is no exact match) - /// - \c result::NOTFOUND: For all other cases. - /// - \c zone: A "Boost" shared pointer to the found \c Zone object if one - // is found; otherwise \c NULL. + /// This derived version of the method never throws an exception. + /// For other details see \c DataSourceClient::findZone(). + virtual FindResult findZone(const isc::dns::Name& name) const; + + /// \brief Implementation of the getIterator method + virtual ZoneIteratorPtr getIterator(const isc::dns::Name& name) const; + + /// In-memory data source is read-only, so this derived method will + /// result in a NotImplemented exception. /// - /// This method never throws an exception. - /// - /// \param name A domain name for which the search is performed. - /// \return A \c FindResult object enclosing the search result (see above). - FindResult findZone(const isc::dns::Name& name) const; + /// \note We plan to use a database-based data source as a backend + /// persistent storage for an in-memory data source. When it's + /// implemented we may also want to allow the user of the in-memory client + /// to update via its updater (this may or may not be a good idea and + /// is subject to further discussions). + virtual ZoneUpdaterPtr getUpdater(const isc::dns::Name& name, + bool replace) const; private: - class MemoryDataSrcImpl; - MemoryDataSrcImpl* impl_; + // TODO: Do we still need the PImpl if nobody should manipulate this class + // directly any more (it should be handled through DataSourceClient)? 
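A short usage sketch of the client interface described above (the names are illustrative, and the members of the returned FindResult are defined in DataSourceClient, so they are not dereferenced here):

    isc::datasrc::InMemoryClient client;
    isc::datasrc::ZoneFinderPtr finder(
        new isc::datasrc::InMemoryZoneFinder(isc::dns::RRClass::IN(),
                                             isc::dns::Name("example.org")));
    client.addZone(finder);    // result::SUCCESS, zone count becomes 1
    client.addZone(finder);    // result::EXIST, this origin is already stored
    // Longest-match lookup: "example.org" is the closest enclosing zone for
    // "www.example.org", so the returned code is result::PARTIALMATCH.
    client.findZone(isc::dns::Name("www.example.org"));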
+ class InMemoryClientImpl; + InMemoryClientImpl* impl_; }; + +/// \brief Creates an instance of the Memory datasource client +/// +/// Currently the configuration passed here must be a MapElement, formed as +/// follows: +/// \code +/// { "type": string ("memory"), +/// "class": string ("IN"/"CH"/etc), +/// "zones": list +/// } +/// Zones list is a list of maps: +/// { "origin": string, +/// "file": string +/// } +/// \endcode +/// (i.e. the configuration that was used prior to the datasource refactor) +/// +/// This configuration setup is currently under discussion and will change in +/// the near future. +extern "C" DataSourceClient* createInstance(isc::data::ConstElementPtr config); + +/// \brief Destroy the instance created by createInstance() +extern "C" void destroyInstance(DataSourceClient* instance); + + } } #endif // __DATA_SOURCE_MEMORY_H diff --git a/src/lib/datasrc/rbtree.h b/src/lib/datasrc/rbtree.h index 03a696749c..ccdfa4856b 100644 --- a/src/lib/datasrc/rbtree.h +++ b/src/lib/datasrc/rbtree.h @@ -704,9 +704,9 @@ public: /// \brief Find with callback and node chain. /// /// This version of \c find() is specifically designed for the backend - /// of the \c MemoryZone class, and implements all necessary features - /// for that purpose. Other applications shouldn't need these additional - /// features, and should normally use the simpler versions. + /// of the \c InMemoryZoneFinder class, and implements all necessary + /// features for that purpose. Other applications shouldn't need these + /// additional features, and should normally use the simpler versions. /// /// This version of \c find() calls the callback whenever traversing (on /// the way from root down the tree) a marked node on the way down through diff --git a/src/lib/datasrc/sqlite3_accessor.cc b/src/lib/datasrc/sqlite3_accessor.cc new file mode 100644 index 0000000000..360722743d --- /dev/null +++ b/src/lib/datasrc/sqlite3_accessor.cc @@ -0,0 +1,779 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include +#include + +#include + +#include +#include +#include +#include +#include + +using namespace std; +using namespace isc::data; + +#define SQLITE_SCHEMA_VERSION 1 + +#define CONFIG_ITEM_DATABASE_FILE "database_file" + +namespace isc { +namespace datasrc { + +// The following enum and char* array define the SQL statements commonly +// used in this implementation. Corresponding prepared statements (of +// type sqlite3_stmt*) are maintained in the statements_ array of the +// SQLite3Parameters structure. 
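The typical lifecycle of one of these prepared statements, shown here for ZONE and mirroring what getZone() does later in this file, is roughly as follows (a sketch; error handling and the surrounding accessor object are omitted):

    sqlite3_stmt* const stmt = dbparameters_->statements_[ZONE];
    sqlite3_reset(stmt);
    sqlite3_bind_text(stmt, 1, "example.org.", -1, SQLITE_STATIC);
    sqlite3_bind_text(stmt, 2, "IN", -1, SQLITE_STATIC);
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        const int zone_id = sqlite3_column_int(stmt, 0);  // matching zone id
    }
    sqlite3_reset(stmt);    // always reset so the statement can be reused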
+ +enum StatementID { + ZONE = 0, + ANY = 1, + ANY_SUB = 2, + BEGIN = 3, + COMMIT = 4, + ROLLBACK = 5, + DEL_ZONE_RECORDS = 6, + ADD_RECORD = 7, + DEL_RECORD = 8, + ITERATE = 9, + FIND_PREVIOUS = 10, + NUM_STATEMENTS = 11 +}; + +const char* const text_statements[NUM_STATEMENTS] = { + // note for ANY and ITERATE: the order of the SELECT values is + // specifically chosen to match the enum values in RecordColumns + "SELECT id FROM zones WHERE name=?1 AND rdclass = ?2", // ZONE + "SELECT rdtype, ttl, sigtype, rdata FROM records " // ANY + "WHERE zone_id=?1 AND name=?2", + "SELECT rdtype, ttl, sigtype, rdata " // ANY_SUB + "FROM records WHERE zone_id=?1 AND name LIKE (\"%.\" || ?2)", + "BEGIN", // BEGIN + "COMMIT", // COMMIT + "ROLLBACK", // ROLLBACK + "DELETE FROM records WHERE zone_id=?1", // DEL_ZONE_RECORDS + "INSERT INTO records " // ADD_RECORD + "(zone_id, name, rname, ttl, rdtype, sigtype, rdata) " + "VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)", + "DELETE FROM records WHERE zone_id=?1 AND name=?2 " // DEL_RECORD + "AND rdtype=?3 AND rdata=?4", + "SELECT rdtype, ttl, sigtype, rdata, name FROM records " // ITERATE + "WHERE zone_id = ?1 ORDER BY name, rdtype", + /* + * This one looks for previous name with NSEC record. It is done by + * using the reversed name. The NSEC is checked because we need to + * skip glue data, which don't have the NSEC. + */ + "SELECT name FROM records " // FIND_PREVIOUS + "WHERE zone_id=?1 AND rdtype = 'NSEC' AND " + "rname < $2 ORDER BY rname DESC LIMIT 1" +}; + +struct SQLite3Parameters { + SQLite3Parameters() : + db_(NULL), version_(-1), updating_zone(false), updated_zone_id(-1) + { + for (int i = 0; i < NUM_STATEMENTS; ++i) { + statements_[i] = NULL; + } + } + + sqlite3* db_; + int version_; + sqlite3_stmt* statements_[NUM_STATEMENTS]; + bool updating_zone; // whether or not updating the zone + int updated_zone_id; // valid only when updating_zone is true +}; + +// This is a helper class to encapsulate the code logic of executing +// a specific SQLite3 statement, ensuring the corresponding prepared +// statement is always reset whether the execution is completed successfully +// or it results in an exception. +// Note that an object of this class is intended to be used for "ephemeral" +// statement, which is completed with a single "step" (normally within a +// single call to an SQLite3Database method). In particular, it cannot be +// used for "SELECT" variants, which generally expect multiple matching rows. +class StatementProcessor { +public: + // desc will be used on failure in the what() message of the resulting + // DataSourceError exception. 
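In other words, a caller binds any parameters to the underlying prepared statement and then calls exec(); the destructor resets the statement whether or not exec() throws. A sketch, modelled on the DEL_ZONE_RECORDS handling in startUpdateZone() further down (zone_id is illustrative):

    StatementProcessor delzone(*dbparameters_, DEL_ZONE_RECORDS,
                               "delete zone records");
    sqlite3_bind_int(dbparameters_->statements_[DEL_ZONE_RECORDS], 1, zone_id);
    delzone.exec();    // throws DataSourceError with the given description on failure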
+ StatementProcessor(SQLite3Parameters& dbparameters, StatementID stmt_id, + const char* desc) : + dbparameters_(dbparameters), stmt_id_(stmt_id), desc_(desc) + { + sqlite3_clear_bindings(dbparameters_.statements_[stmt_id_]); + } + + ~StatementProcessor() { + sqlite3_reset(dbparameters_.statements_[stmt_id_]); + } + + void exec() { + if (sqlite3_step(dbparameters_.statements_[stmt_id_]) != SQLITE_DONE) { + sqlite3_reset(dbparameters_.statements_[stmt_id_]); + isc_throw(DataSourceError, "failed to " << desc_ << ": " << + sqlite3_errmsg(dbparameters_.db_)); + } + } + +private: + SQLite3Parameters& dbparameters_; + const StatementID stmt_id_; + const char* const desc_; +}; + +SQLite3Accessor::SQLite3Accessor(const std::string& filename, + const isc::dns::RRClass& rrclass) : + dbparameters_(new SQLite3Parameters), + filename_(filename), + class_(rrclass.toText()), + database_name_("sqlite3_" + + isc::util::Filename(filename).nameAndExtension()) +{ + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_NEWCONN); + + open(filename); +} + +SQLite3Accessor::SQLite3Accessor(const std::string& filename, + const string& rrclass) : + dbparameters_(new SQLite3Parameters), + filename_(filename), + class_(rrclass), + database_name_("sqlite3_" + + isc::util::Filename(filename).nameAndExtension()) +{ + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_NEWCONN); + + open(filename); +} + +boost::shared_ptr +SQLite3Accessor::clone() { + return (boost::shared_ptr(new SQLite3Accessor(filename_, + class_))); +} + +namespace { + +// This is a helper class to initialize a Sqlite3 DB safely. An object of +// this class encapsulates all temporary resources that are necessary for +// the initialization, and release them in the destructor. Once everything +// is properly initialized, the move() method moves the allocated resources +// to the main object in an exception free manner. This way, the main code +// for the initialization can be exception safe, and can provide the strong +// exception guarantee. 
+class Initializer { +public: + ~Initializer() { + for (int i = 0; i < NUM_STATEMENTS; ++i) { + sqlite3_finalize(params_.statements_[i]); + } + + if (params_.db_ != NULL) { + sqlite3_close(params_.db_); + } + } + void move(SQLite3Parameters* dst) { + *dst = params_; + params_ = SQLite3Parameters(); // clear everything + } + SQLite3Parameters params_; +}; + +const char* const SCHEMA_LIST[] = { + "CREATE TABLE schema_version (version INTEGER NOT NULL)", + "INSERT INTO schema_version VALUES (1)", + "CREATE TABLE zones (id INTEGER PRIMARY KEY, " + "name STRING NOT NULL COLLATE NOCASE, " + "rdclass STRING NOT NULL COLLATE NOCASE DEFAULT 'IN', " + "dnssec BOOLEAN NOT NULL DEFAULT 0)", + "CREATE INDEX zones_byname ON zones (name)", + "CREATE TABLE records (id INTEGER PRIMARY KEY, " + "zone_id INTEGER NOT NULL, name STRING NOT NULL COLLATE NOCASE, " + "rname STRING NOT NULL COLLATE NOCASE, ttl INTEGER NOT NULL, " + "rdtype STRING NOT NULL COLLATE NOCASE, sigtype STRING COLLATE NOCASE, " + "rdata STRING NOT NULL)", + "CREATE INDEX records_byname ON records (name)", + "CREATE INDEX records_byrname ON records (rname)", + "CREATE TABLE nsec3 (id INTEGER PRIMARY KEY, zone_id INTEGER NOT NULL, " + "hash STRING NOT NULL COLLATE NOCASE, " + "owner STRING NOT NULL COLLATE NOCASE, " + "ttl INTEGER NOT NULL, rdtype STRING NOT NULL COLLATE NOCASE, " + "rdata STRING NOT NULL)", + "CREATE INDEX nsec3_byhash ON nsec3 (hash)", + NULL +}; + +sqlite3_stmt* +prepare(sqlite3* const db, const char* const statement) { + sqlite3_stmt* prepared = NULL; + if (sqlite3_prepare_v2(db, statement, -1, &prepared, NULL) != SQLITE_OK) { + isc_throw(SQLite3Error, "Could not prepare SQLite statement: " << + statement); + } + return (prepared); +} + +// small function to sleep for 0.1 seconds, needed when waiting for +// exclusive database locks (which should only occur on startup, and only +// when the database has not been created yet) +void doSleep() { + struct timespec req; + req.tv_sec = 0; + req.tv_nsec = 100000000; + nanosleep(&req, NULL); +} + +// returns the schema version if the schema version table exists +// returns -1 if it does not +int checkSchemaVersion(sqlite3* db) { + sqlite3_stmt* prepared = NULL; + // At this point in time, the database might be exclusively locked, in + // which case even prepare() will return BUSY, so we may need to try a + // few times + for (size_t i = 0; i < 50; ++i) { + int rc = sqlite3_prepare_v2(db, "SELECT version FROM schema_version", + -1, &prepared, NULL); + if (rc == SQLITE_ERROR) { + // this is the error that is returned when the table does not + // exist + return (-1); + } else if (rc == SQLITE_OK) { + break; + } else if (rc != SQLITE_BUSY || i == 50) { + isc_throw(SQLite3Error, "Unable to prepare version query: " + << rc << " " << sqlite3_errmsg(db)); + } + doSleep(); + } + if (sqlite3_step(prepared) != SQLITE_ROW) { + isc_throw(SQLite3Error, + "Unable to query version: " << sqlite3_errmsg(db)); + } + int version = sqlite3_column_int(prepared, 0); + sqlite3_finalize(prepared); + return (version); +} + +// return db version +int create_database(sqlite3* db) { + // try to get an exclusive lock. 
Once that is obtained, do the version + // check *again*, just in case this process was racing another + // + // try for 5 secs (50*0.1) + int rc; + logger.info(DATASRC_SQLITE_SETUP); + for (size_t i = 0; i < 50; ++i) { + rc = sqlite3_exec(db, "BEGIN EXCLUSIVE TRANSACTION", NULL, NULL, + NULL); + if (rc == SQLITE_OK) { + break; + } else if (rc != SQLITE_BUSY || i == 50) { + isc_throw(SQLite3Error, "Unable to acquire exclusive lock " + "for database creation: " << sqlite3_errmsg(db)); + } + doSleep(); + } + int schema_version = checkSchemaVersion(db); + if (schema_version == -1) { + for (int i = 0; SCHEMA_LIST[i] != NULL; ++i) { + if (sqlite3_exec(db, SCHEMA_LIST[i], NULL, NULL, NULL) != + SQLITE_OK) { + isc_throw(SQLite3Error, + "Failed to set up schema " << SCHEMA_LIST[i]); + } + } + sqlite3_exec(db, "COMMIT TRANSACTION", NULL, NULL, NULL); + return (SQLITE_SCHEMA_VERSION); + } else { + return (schema_version); + } +} + +void +checkAndSetupSchema(Initializer* initializer) { + sqlite3* const db = initializer->params_.db_; + + int schema_version = checkSchemaVersion(db); + if (schema_version != SQLITE_SCHEMA_VERSION) { + schema_version = create_database(db); + } + initializer->params_.version_ = schema_version; + + for (int i = 0; i < NUM_STATEMENTS; ++i) { + initializer->params_.statements_[i] = prepare(db, text_statements[i]); + } +} + +} + +void +SQLite3Accessor::open(const std::string& name) { + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_CONNOPEN).arg(name); + if (dbparameters_->db_ != NULL) { + // There shouldn't be a way to trigger this anyway + isc_throw(DataSourceError, "Duplicate SQLite open with " << name); + } + + Initializer initializer; + + if (sqlite3_open(name.c_str(), &initializer.params_.db_) != 0) { + isc_throw(SQLite3Error, "Cannot open SQLite database file: " << name); + } + + checkAndSetupSchema(&initializer); + initializer.move(dbparameters_.get()); +} + +SQLite3Accessor::~SQLite3Accessor() { + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_DROPCONN); + if (dbparameters_->db_ != NULL) { + close(); + } +} + +void +SQLite3Accessor::close(void) { + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_CONNCLOSE); + if (dbparameters_->db_ == NULL) { + isc_throw(DataSourceError, + "SQLite data source is being closed before open"); + } + + // XXX: sqlite3_finalize() could fail. What should we do in that case? + for (int i = 0; i < NUM_STATEMENTS; ++i) { + sqlite3_finalize(dbparameters_->statements_[i]); + dbparameters_->statements_[i] = NULL; + } + + sqlite3_close(dbparameters_->db_); + dbparameters_->db_ = NULL; +} + +std::pair +SQLite3Accessor::getZone(const std::string& name) const { + int rc; + sqlite3_stmt* const stmt = dbparameters_->statements_[ZONE]; + + // Take the statement (simple SELECT id FROM zones WHERE...) 
+ // and prepare it (bind the parameters to it) + sqlite3_reset(stmt); + rc = sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_STATIC); + if (rc != SQLITE_OK) { + isc_throw(SQLite3Error, "Could not bind " << name << + " to SQL statement (zone)"); + } + rc = sqlite3_bind_text(stmt, 2, class_.c_str(), -1, SQLITE_STATIC); + if (rc != SQLITE_OK) { + isc_throw(SQLite3Error, "Could not bind " << class_ << + " to SQL statement (zone)"); + } + + // Get the data there and see if it found anything + rc = sqlite3_step(stmt); + if (rc == SQLITE_ROW) { + const int zone_id = sqlite3_column_int(stmt, 0); + sqlite3_reset(stmt); + return (pair(true, zone_id)); + } else if (rc == SQLITE_DONE) { + // Free resources + sqlite3_reset(stmt); + return (pair(false, 0)); + } + + sqlite3_reset(stmt); + isc_throw(DataSourceError, "Unexpected failure in sqlite3_step: " << + sqlite3_errmsg(dbparameters_->db_)); + // Compilers might not realize isc_throw always throws + return (std::pair(false, 0)); +} + +namespace { + +// Conversion to plain char +const char* +convertToPlainChar(const unsigned char* ucp, sqlite3 *db) { + if (ucp == NULL) { + // The field can really be NULL, in which case we return an + // empty string, or sqlite may have run out of memory, in + // which case we raise an error + if (sqlite3_errcode(db) == SQLITE_NOMEM) { + isc_throw(DataSourceError, + "Sqlite3 backend encountered a memory allocation " + "error in sqlite3_column_text()"); + } else { + return (""); + } + } + const void* p = ucp; + return (static_cast(p)); +} + +} +class SQLite3Accessor::Context : public DatabaseAccessor::IteratorContext { +public: + // Construct an iterator for all records. When constructed this + // way, the getNext() call will copy all fields + Context(const boost::shared_ptr& accessor, int id) : + iterator_type_(ITT_ALL), + accessor_(accessor), + statement_(NULL), + name_("") + { + // We create the statement now and then just keep getting data from it + statement_ = prepare(accessor->dbparameters_->db_, + text_statements[ITERATE]); + bindZoneId(id); + } + + // Construct an iterator for records with a specific name. When constructed + // this way, the getNext() call will copy all fields except name + Context(const boost::shared_ptr& accessor, int id, + const std::string& name, bool subdomains) : + iterator_type_(ITT_NAME), + accessor_(accessor), + statement_(NULL), + name_(name) + + { + // We create the statement now and then just keep getting data from it + statement_ = prepare(accessor->dbparameters_->db_, + subdomains ? text_statements[ANY_SUB] : + text_statements[ANY]); + bindZoneId(id); + bindName(name_); + } + + bool getNext(std::string (&data)[COLUMN_COUNT]) { + // If there's another row, get it + // If finalize has been called (e.g. 
when previous getNext() got + // SQLITE_DONE), directly return false + if (statement_ == NULL) { + return false; + } + const int rc(sqlite3_step(statement_)); + if (rc == SQLITE_ROW) { + // For both types, we copy the first four columns + copyColumn(data, TYPE_COLUMN); + copyColumn(data, TTL_COLUMN); + copyColumn(data, SIGTYPE_COLUMN); + copyColumn(data, RDATA_COLUMN); + // Only copy Name if we are iterating over every record + if (iterator_type_ == ITT_ALL) { + copyColumn(data, NAME_COLUMN); + } + return (true); + } else if (rc != SQLITE_DONE) { + isc_throw(DataSourceError, + "Unexpected failure in sqlite3_step: " << + sqlite3_errmsg(accessor_->dbparameters_->db_)); + } + finalize(); + return (false); + } + + virtual ~Context() { + finalize(); + } + +private: + // Depending on which constructor is called, behaviour is slightly + // different. We keep track of what to do with the iterator type + // See description of getNext() and the constructors + enum IteratorType { + ITT_ALL, + ITT_NAME + }; + + void copyColumn(std::string (&data)[COLUMN_COUNT], int column) { + data[column] = convertToPlainChar(sqlite3_column_text(statement_, + column), + accessor_->dbparameters_->db_); + } + + void bindZoneId(const int zone_id) { + if (sqlite3_bind_int(statement_, 1, zone_id) != SQLITE_OK) { + finalize(); + isc_throw(SQLite3Error, "Could not bind int " << zone_id << + " to SQL statement: " << + sqlite3_errmsg(accessor_->dbparameters_->db_)); + } + } + + void bindName(const std::string& name) { + if (sqlite3_bind_text(statement_, 2, name.c_str(), -1, + SQLITE_TRANSIENT) != SQLITE_OK) { + const char* errmsg = sqlite3_errmsg(accessor_->dbparameters_->db_); + finalize(); + isc_throw(SQLite3Error, "Could not bind text '" << name << + "' to SQL statement: " << errmsg); + } + } + + void finalize() { + sqlite3_finalize(statement_); + statement_ = NULL; + } + + const IteratorType iterator_type_; + boost::shared_ptr accessor_; + sqlite3_stmt *statement_; + const std::string name_; +}; + +DatabaseAccessor::IteratorContextPtr +SQLite3Accessor::getRecords(const std::string& name, int id, + bool subdomains) const +{ + return (IteratorContextPtr(new Context(shared_from_this(), id, name, + subdomains))); +} + +DatabaseAccessor::IteratorContextPtr +SQLite3Accessor::getAllRecords(int id) const { + return (IteratorContextPtr(new Context(shared_from_this(), id))); +} + +pair +SQLite3Accessor::startUpdateZone(const string& zone_name, const bool replace) { + if (dbparameters_->updating_zone) { + isc_throw(DataSourceError, + "duplicate zone update on SQLite3 data source"); + } + + const pair zone_info(getZone(zone_name)); + if (!zone_info.first) { + return (zone_info); + } + + StatementProcessor(*dbparameters_, BEGIN, + "start an SQLite3 transaction").exec(); + + if (replace) { + try { + StatementProcessor delzone_exec(*dbparameters_, DEL_ZONE_RECORDS, + "delete zone records"); + + sqlite3_clear_bindings( + dbparameters_->statements_[DEL_ZONE_RECORDS]); + if (sqlite3_bind_int(dbparameters_->statements_[DEL_ZONE_RECORDS], + 1, zone_info.second) != SQLITE_OK) { + isc_throw(DataSourceError, + "failed to bind SQLite3 parameter: " << + sqlite3_errmsg(dbparameters_->db_)); + } + + delzone_exec.exec(); + } catch (const DataSourceError&) { + // Once we start a transaction, if something unexpected happens + // we need to rollback the transaction so that a subsequent update + // is still possible with this accessor. 
+ StatementProcessor(*dbparameters_, ROLLBACK, + "rollback an SQLite3 transaction").exec(); + throw; + } + } + + dbparameters_->updating_zone = true; + dbparameters_->updated_zone_id = zone_info.second; + + return (zone_info); +} + +void +SQLite3Accessor::commitUpdateZone() { + if (!dbparameters_->updating_zone) { + isc_throw(DataSourceError, "committing zone update on SQLite3 " + "data source without transaction"); + } + + StatementProcessor(*dbparameters_, COMMIT, + "commit an SQLite3 transaction").exec(); + dbparameters_->updating_zone = false; + dbparameters_->updated_zone_id = -1; +} + +void +SQLite3Accessor::rollbackUpdateZone() { + if (!dbparameters_->updating_zone) { + isc_throw(DataSourceError, "rolling back zone update on SQLite3 " + "data source without transaction"); + } + + StatementProcessor(*dbparameters_, ROLLBACK, + "rollback an SQLite3 transaction").exec(); + dbparameters_->updating_zone = false; + dbparameters_->updated_zone_id = -1; +} + +namespace { +// Commonly used code sequence for adding/deleting record +template +void +doUpdate(SQLite3Parameters& dbparams, StatementID stmt_id, + COLUMNS_TYPE update_params, const char* exec_desc) +{ + sqlite3_stmt* const stmt = dbparams.statements_[stmt_id]; + StatementProcessor executer(dbparams, stmt_id, exec_desc); + + int param_id = 0; + if (sqlite3_bind_int(stmt, ++param_id, dbparams.updated_zone_id) + != SQLITE_OK) { + isc_throw(DataSourceError, "failed to bind SQLite3 parameter: " << + sqlite3_errmsg(dbparams.db_)); + } + const size_t column_count = + sizeof(update_params) / sizeof(update_params[0]); + for (int i = 0; i < column_count; ++i) { + if (sqlite3_bind_text(stmt, ++param_id, update_params[i].c_str(), -1, + SQLITE_TRANSIENT) != SQLITE_OK) { + isc_throw(DataSourceError, "failed to bind SQLite3 parameter: " << + sqlite3_errmsg(dbparams.db_)); + } + } + executer.exec(); +} +} + +void +SQLite3Accessor::addRecordToZone(const string (&columns)[ADD_COLUMN_COUNT]) { + if (!dbparameters_->updating_zone) { + isc_throw(DataSourceError, "adding record to SQLite3 " + "data source without transaction"); + } + doUpdate( + *dbparameters_, ADD_RECORD, columns, "add record to zone"); +} + +void +SQLite3Accessor::deleteRecordInZone(const string (¶ms)[DEL_PARAM_COUNT]) { + if (!dbparameters_->updating_zone) { + isc_throw(DataSourceError, "deleting record in SQLite3 " + "data source without transaction"); + } + doUpdate( + *dbparameters_, DEL_RECORD, params, "delete record from zone"); +} + +std::string +SQLite3Accessor::findPreviousName(int zone_id, const std::string& rname) + const +{ + sqlite3_reset(dbparameters_->statements_[FIND_PREVIOUS]); + sqlite3_clear_bindings(dbparameters_->statements_[FIND_PREVIOUS]); + + if (sqlite3_bind_int(dbparameters_->statements_[FIND_PREVIOUS], 1, + zone_id) != SQLITE_OK) { + isc_throw(SQLite3Error, "Could not bind zone ID " << zone_id << + " to SQL statement (find previous): " << + sqlite3_errmsg(dbparameters_->db_)); + } + if (sqlite3_bind_text(dbparameters_->statements_[FIND_PREVIOUS], 2, + rname.c_str(), -1, SQLITE_STATIC) != SQLITE_OK) { + isc_throw(SQLite3Error, "Could not bind name " << rname << + " to SQL statement (find previous): " << + sqlite3_errmsg(dbparameters_->db_)); + } + + std::string result; + const int rc = sqlite3_step(dbparameters_->statements_[FIND_PREVIOUS]); + if (rc == SQLITE_ROW) { + // We found it + result = convertToPlainChar(sqlite3_column_text(dbparameters_-> + statements_[FIND_PREVIOUS], 0), dbparameters_->db_); + } + 
sqlite3_reset(dbparameters_->statements_[FIND_PREVIOUS]); + + if (rc == SQLITE_DONE) { + // No NSEC records here, this DB doesn't support DNSSEC or + // we asked before the apex + isc_throw(isc::NotImplemented, "The zone doesn't support DNSSEC or " + "query before apex"); + } + + if (rc != SQLITE_ROW && rc != SQLITE_DONE) { + // Some kind of error + isc_throw(SQLite3Error, "Could not get data for previous name"); + } + + return (result); +} + +namespace { +void +addError(ElementPtr errors, const std::string& error) { + if (errors != ElementPtr() && errors->getType() == Element::list) { + errors->add(Element::create(error)); + } +} + +bool +checkConfig(ConstElementPtr config, ElementPtr errors) { + /* Specific configuration is under discussion, right now this accepts + * the 'old' configuration, see header file + */ + bool result = true; + + if (!config || config->getType() != Element::map) { + addError(errors, "Base config for SQlite3 backend must be a map"); + result = false; + } else { + if (!config->contains(CONFIG_ITEM_DATABASE_FILE)) { + addError(errors, + "Config for SQlite3 backend does not contain a '" + CONFIG_ITEM_DATABASE_FILE + "' value"); + result = false; + } else if (!config->get(CONFIG_ITEM_DATABASE_FILE) || + config->get(CONFIG_ITEM_DATABASE_FILE)->getType() != + Element::string) { + addError(errors, "value of " CONFIG_ITEM_DATABASE_FILE + " in SQLite3 backend is not a string"); + result = false; + } else if (config->get(CONFIG_ITEM_DATABASE_FILE)->stringValue() == + "") { + addError(errors, "value of " CONFIG_ITEM_DATABASE_FILE + " in SQLite3 backend is empty"); + result = false; + } + } + + return (result); +} + +} // end anonymous namespace + +DataSourceClient * +createInstance(isc::data::ConstElementPtr config) { + ElementPtr errors(Element::createList()); + if (!checkConfig(config, errors)) { + isc_throw(DataSourceConfigError, errors->str()); + } + std::string dbfile = config->get(CONFIG_ITEM_DATABASE_FILE)->stringValue(); + boost::shared_ptr sqlite3_accessor( + new SQLite3Accessor(dbfile, isc::dns::RRClass::IN())); + return (new DatabaseClient(isc::dns::RRClass::IN(), sqlite3_accessor)); +} + +void destroyInstance(DataSourceClient* instance) { + delete instance; +} + +} // end of namespace datasrc +} // end of namespace isc diff --git a/src/lib/datasrc/sqlite3_accessor.h b/src/lib/datasrc/sqlite3_accessor.h new file mode 100644 index 0000000000..3286f3b5f4 --- /dev/null +++ b/src/lib/datasrc/sqlite3_accessor.h @@ -0,0 +1,215 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
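The corresponding factory usage for this backend would look roughly as follows (a sketch; the database file name is illustrative, and in the real system the memory and SQLite3 createInstance()/destroyInstance() pairs are meant to be loaded as separate dynamically loadable modules rather than linked into one binary):

    isc::data::ConstElementPtr config = isc::data::Element::fromJSON(
        "{\"database_file\": \"/var/lib/bind10/zone.sqlite3\"}");
    // Returns a DatabaseClient wrapping an SQLite3Accessor for class IN,
    // or throws DataSourceConfigError if the config fails checkConfig().
    isc::datasrc::DataSourceClient* client = isc::datasrc::createInstance(config);
    isc::datasrc::destroyInstance(client);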
+ + +#ifndef __DATASRC_SQLITE3_ACCESSOR_H +#define __DATASRC_SQLITE3_ACCESSOR_H + +#include + +#include + +#include +#include +#include + +#include + +namespace isc { +namespace dns { +class RRClass; +} + +namespace datasrc { + +/** + * \brief Low-level database error + * + * This exception is thrown when the SQLite library complains about something. + * It might mean corrupt database file, invalid request or that something is + * rotten in the library. + */ +class SQLite3Error : public Exception { +public: + SQLite3Error(const char* file, size_t line, const char* what) : + isc::Exception(file, line, what) {} +}; + +struct SQLite3Parameters; + +/** + * \brief Concrete implementation of DatabaseAccessor for SQLite3 databases + * + * This opens one database file with our schema and serves data from there. + * According to the design, it doesn't interpret the data in any way, it just + * provides unified access to the DB. + */ +class SQLite3Accessor : public DatabaseAccessor, + public boost::enable_shared_from_this { +public: + /** + * \brief Constructor + * + * This opens the database and becomes ready to serve data from there. + * + * \exception SQLite3Error will be thrown if the given database file + * doesn't work (it is broken, doesn't exist and can't be created, etc). + * + * \param filename The database file to be used. + * \param rrclass Which class of data it should serve (while the database + * file can contain multiple classes of data, single database can + * provide only one class). + */ + SQLite3Accessor(const std::string& filename, + const isc::dns::RRClass& rrclass); + + /** + * \brief Constructor + * + * Same as the other version, but takes rrclass as a bare string. + * we should obsolete the other version and unify the constructor to + * this version; the SQLite3Accessor is expected to be "dumb" and + * shouldn't care about DNS specific information such as RRClass. + */ + SQLite3Accessor(const std::string& filename, const std::string& rrclass); + + /** + * \brief Destructor + * + * Closes the database. + */ + ~SQLite3Accessor(); + + /// This implementation internally opens a new sqlite3 database for the + /// same file name specified in the constructor of the original accessor. + virtual boost::shared_ptr clone(); + + /** + * \brief Look up a zone + * + * This implements the getZone from DatabaseAccessor and looks up a zone + * in the data. It looks for a zone with the exact given origin and class + * passed to the constructor. + * + * \exception SQLite3Error if something about the database is broken. + * + * \param name The (fully qualified) domain name of zone to look up + * \return The pair contains if the lookup was successful in the first + * element and the zone id in the second if it was. + */ + virtual std::pair getZone(const std::string& name) const; + + /** \brief Look up all resource records for a name + * + * This implements the getRecords() method from DatabaseAccessor + * + * \exception SQLite3Error if there is an sqlite3 error when performing + * the query + * + * \param name the name to look up + * \param id the zone id, as returned by getZone() + * \param subdomains Match subdomains instead of the name. 
+ * \return Iterator that contains all records with the given name + */ + virtual IteratorContextPtr getRecords(const std::string& name, + int id, + bool subdomains = false) const; + + /** \brief Look up all resource records for a zone + * + * This implements the getRecords() method from DatabaseAccessor + * + * \exception SQLite3Error if there is an sqlite3 error when performing + * the query + * + * \param id the zone id, as returned by getZone() + * \return Iterator that contains all records in the given zone + */ + virtual IteratorContextPtr getAllRecords(int id) const; + + virtual std::pair startUpdateZone(const std::string& zone_name, + bool replace); + + /// \note we are quite impatient here: it's quite possible that the COMMIT + /// fails due to other process performing SELECT on the same database + /// (consider the case where COMMIT is done by xfrin or dynamic update + /// server while an authoritative server is busy reading the DB). + /// In a future version we should probably need to introduce some retry + /// attempt and/or increase timeout before giving up the COMMIT, even + /// if it still doesn't guarantee 100% success. Right now this + /// implementation throws a \c DataSourceError exception in such a case. + virtual void commitUpdateZone(); + + /// \note In SQLite3 rollback can fail if there's another unfinished + /// statement is performed for the same database structure. + /// Although it's not expected to happen in our expected usage, it's not + /// guaranteed to be prevented at the API level. If it ever happens, this + /// method throws a \c DataSourceError exception. It should be + /// considered a bug of the higher level application program. + virtual void rollbackUpdateZone(); + + virtual void addRecordToZone( + const std::string (&columns)[ADD_COLUMN_COUNT]); + + virtual void deleteRecordInZone( + const std::string (¶ms)[DEL_PARAM_COUNT]); + + /// The SQLite3 implementation of this method returns a string starting + /// with a fixed prefix of "sqlite3_" followed by the DB file name + /// removing any path name. For example, for the DB file + /// /somewhere/in/the/system/bind10.sqlite3, this method will return + /// "sqlite3_bind10.sqlite3". + virtual const std::string& getDBName() const { return (database_name_); } + + /// \brief Concrete implementation of the pure virtual method + virtual std::string findPreviousName(int zone_id, const std::string& rname) + const; + +private: + /// \brief Private database data + boost::scoped_ptr dbparameters_; + /// \brief The filename of the DB (necessary for clone()) + const std::string filename_; + /// \brief The class for which the queries are done + const std::string class_; + /// \brief Opens the database + void open(const std::string& filename); + /// \brief Closes the database + void close(); + /// \brief SQLite3 implementation of IteratorContext + class Context; + friend class Context; + const std::string database_name_; +}; + +/// \brief Creates an instance of the SQlite3 datasource client +/// +/// Currently the configuration passed here must be a MapElement, containing +/// one item called "database_file", whose value is a string +/// +/// This configuration setup is currently under discussion and will change in +/// the near future. 
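Taken together, the accessor interface documented above can be driven directly roughly as follows (a sketch; normally DatabaseClient does this on the caller's behalf, the return type of getZone() is assumed to be std::pair<bool, int> as described, and the column constants are assumed to be the RecordColumns values defined by DatabaseAccessor):

    boost::shared_ptr<SQLite3Accessor> acc(
        new SQLite3Accessor("zone.sqlite3", "IN"));
    const std::pair<bool, int> zone = acc->getZone("example.org.");
    if (zone.first) {
        DatabaseAccessor::IteratorContextPtr ctx =
            acc->getAllRecords(zone.second);
        std::string columns[DatabaseAccessor::COLUMN_COUNT];
        while (ctx->getNext(columns)) {
            // columns[DatabaseAccessor::NAME_COLUMN],
            // columns[DatabaseAccessor::RDATA_COLUMN], etc.
        }
    }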
+extern "C" DataSourceClient* createInstance(isc::data::ConstElementPtr config); + +/// \brief Destroy the instance created by createInstance() +extern "C" void destroyInstance(DataSourceClient* instance); + +} +} + +#endif // __DATASRC_SQLITE3_CONNECTION_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/datasrc/sqlite3_datasrc.cc b/src/lib/datasrc/sqlite3_datasrc.cc index 13d98ed0b3..03b057cd49 100644 --- a/src/lib/datasrc/sqlite3_datasrc.cc +++ b/src/lib/datasrc/sqlite3_datasrc.cc @@ -26,6 +26,8 @@ #include #include +#define SQLITE_SCHEMA_VERSION 1 + using namespace std; using namespace isc::dns; using namespace isc::dns::rdata; @@ -77,6 +79,8 @@ const char* const SCHEMA_LIST[] = { NULL }; +const char* const q_version_str = "SELECT version FROM schema_version"; + const char* const q_zone_str = "SELECT id FROM zones WHERE name=?1"; const char* const q_record_str = "SELECT rdtype, ttl, sigtype, rdata " @@ -254,7 +258,7 @@ Sqlite3DataSrc::findRecords(const Name& name, const RRType& rdtype, } break; } - + sqlite3_reset(query); sqlite3_clear_bindings(query); @@ -295,7 +299,7 @@ Sqlite3DataSrc::findRecords(const Name& name, const RRType& rdtype, // sqlite3_reset(dbparameters->q_count_); sqlite3_clear_bindings(dbparameters->q_count_); - + rc = sqlite3_bind_int(dbparameters->q_count_, 1, zone_id); if (rc != SQLITE_OK) { isc_throw(Sqlite3Error, "Could not bind zone ID " << zone_id << @@ -356,7 +360,8 @@ Sqlite3DataSrc::findClosestEnclosure(DataSrcMatch& match) const { unsigned int position; if (findClosest(match.getName(), &position) == -1) { - LOG_DEBUG(logger, DBG_TRACE_DATA, DATASRC_SQLITE_ENCLOSURE_NOTFOUND); + LOG_DEBUG(logger, DBG_TRACE_DATA, DATASRC_SQLITE_ENCLOSURE_NOT_FOUND) + .arg(match.getName()); return; } @@ -652,29 +657,90 @@ prepare(sqlite3* const db, const char* const statement) { return (prepared); } -void -checkAndSetupSchema(Sqlite3Initializer* initializer) { - sqlite3* const db = initializer->params_.db_; +// small function to sleep for 0.1 seconds, needed when waiting for +// exclusive database locks (which should only occur on startup, and only +// when the database has not been created yet) +void do_sleep() { + struct timespec req; + req.tv_sec = 0; + req.tv_nsec = 100000000; + nanosleep(&req, NULL); +} +// returns the schema version if the schema version table exists +// returns -1 if it does not +int check_schema_version(sqlite3* db) { sqlite3_stmt* prepared = NULL; - if (sqlite3_prepare_v2(db, "SELECT version FROM schema_version", -1, - &prepared, NULL) == SQLITE_OK && - sqlite3_step(prepared) == SQLITE_ROW) { - initializer->params_.version_ = sqlite3_column_int(prepared, 0); - sqlite3_finalize(prepared); - } else { - logger.info(DATASRC_SQLITE_SETUP); - if (prepared != NULL) { - sqlite3_finalize(prepared); + // At this point in time, the database might be exclusively locked, in + // which case even prepare() will return BUSY, so we may need to try a + // few times + for (size_t i = 0; i < 50; ++i) { + int rc = sqlite3_prepare_v2(db, q_version_str, -1, &prepared, NULL); + if (rc == SQLITE_ERROR) { + // this is the error that is returned when the table does not + // exist + return (-1); + } else if (rc == SQLITE_OK) { + break; + } else if (rc != SQLITE_BUSY || i == 50) { + isc_throw(Sqlite3Error, "Unable to prepare version query: " + << rc << " " << sqlite3_errmsg(db)); } + do_sleep(); + } + if (sqlite3_step(prepared) != SQLITE_ROW) { + isc_throw(Sqlite3Error, + "Unable to query version: " << sqlite3_errmsg(db)); + } + int version = 
sqlite3_column_int(prepared, 0); + sqlite3_finalize(prepared); + return (version); +} + +// return db version +int create_database(sqlite3* db) { + // try to get an exclusive lock. Once that is obtained, do the version + // check *again*, just in case this process was racing another + // + // try for 5 secs (50*0.1) + int rc; + logger.info(DATASRC_SQLITE_SETUP); + for (size_t i = 0; i < 50; ++i) { + rc = sqlite3_exec(db, "BEGIN EXCLUSIVE TRANSACTION", NULL, NULL, + NULL); + if (rc == SQLITE_OK) { + break; + } else if (rc != SQLITE_BUSY || i == 50) { + isc_throw(Sqlite3Error, "Unable to acquire exclusive lock " + "for database creation: " << sqlite3_errmsg(db)); + } + do_sleep(); + } + int schema_version = check_schema_version(db); + if (schema_version == -1) { for (int i = 0; SCHEMA_LIST[i] != NULL; ++i) { if (sqlite3_exec(db, SCHEMA_LIST[i], NULL, NULL, NULL) != SQLITE_OK) { isc_throw(Sqlite3Error, - "Failed to set up schema " << SCHEMA_LIST[i]); + "Failed to set up schema " << SCHEMA_LIST[i]); } } + sqlite3_exec(db, "COMMIT TRANSACTION", NULL, NULL, NULL); + return (SQLITE_SCHEMA_VERSION); + } else { + return (schema_version); } +} + +void +checkAndSetupSchema(Sqlite3Initializer* initializer) { + sqlite3* const db = initializer->params_.db_; + + int schema_version = check_schema_version(db); + if (schema_version != SQLITE_SCHEMA_VERSION) { + schema_version = create_database(db); + } + initializer->params_.version_ = schema_version; initializer->params_.q_zone_ = prepare(db, q_zone_str); initializer->params_.q_record_ = prepare(db, q_record_str); diff --git a/src/lib/datasrc/static_datasrc.cc b/src/lib/datasrc/static_datasrc.cc index dee14b9d1f..fd43e1ca6d 100644 --- a/src/lib/datasrc/static_datasrc.cc +++ b/src/lib/datasrc/static_datasrc.cc @@ -70,6 +70,7 @@ StaticDataSrcImpl::StaticDataSrcImpl() : authors = RRsetPtr(new RRset(authors_name, RRClass::CH(), RRType::TXT(), RRTTL(0))); authors->addRdata(generic::TXT("Chen Zhengzhang")); // Jerry + authors->addRdata(generic::TXT("Dmitriy Volodin")); authors->addRdata(generic::TXT("Evan Hunt")); authors->addRdata(generic::TXT("Haidong Wang")); // Ocean authors->addRdata(generic::TXT("Han Feng")); @@ -161,7 +162,7 @@ StaticDataSrc::findRRset(const Name& qname, arg(qtype); flags = 0; if (qclass != getClass() && qclass != RRClass::ANY()) { - LOG_ERROR(logger, DATASRC_STATIC_BAD_CLASS); + LOG_ERROR(logger, DATASRC_STATIC_CLASS_NOT_CH); return (ERROR); } diff --git a/src/lib/datasrc/tests/Makefile.am b/src/lib/datasrc/tests/Makefile.am index fbcf9c95c0..3183b1d4f7 100644 --- a/src/lib/datasrc/tests/Makefile.am +++ b/src/lib/datasrc/tests/Makefile.am @@ -1,8 +1,12 @@ +SUBDIRS = . 
testdata + AM_CPPFLAGS = -I$(top_srcdir)/src/lib -I$(top_builddir)/src/lib AM_CPPFLAGS += -I$(top_builddir)/src/lib/dns -I$(top_srcdir)/src/lib/dns AM_CPPFLAGS += $(BOOST_INCLUDES) AM_CPPFLAGS += $(SQLITE_CFLAGS) -AM_CPPFLAGS += -DTEST_DATA_DIR=\"$(srcdir)/testdata\" +AM_CPPFLAGS += -DTEST_DATA_DIR=\"$(abs_srcdir)/testdata\" +AM_CPPFLAGS += -DTEST_DATA_BUILDDIR=\"$(abs_builddir)/testdata\" +AM_CPPFLAGS += -DINSTALL_PROG=\"$(abs_top_srcdir)/install-sh\" AM_CXXFLAGS = $(B10_CXXFLAGS) @@ -25,9 +29,23 @@ run_unittests_SOURCES += query_unittest.cc run_unittests_SOURCES += cache_unittest.cc run_unittests_SOURCES += test_datasrc.h test_datasrc.cc run_unittests_SOURCES += rbtree_unittest.cc -run_unittests_SOURCES += zonetable_unittest.cc -run_unittests_SOURCES += memory_datasrc_unittest.cc +#run_unittests_SOURCES += zonetable_unittest.cc +#run_unittests_SOURCES += memory_datasrc_unittest.cc run_unittests_SOURCES += logger_unittest.cc +run_unittests_SOURCES += database_unittest.cc +run_unittests_SOURCES += client_unittest.cc +run_unittests_SOURCES += sqlite3_accessor_unittest.cc +if !USE_STATIC_LINK +# This test uses dynamically loadable module. It will cause various +# troubles with static link such as "missing" symbols in the static object +# for the module. As a workaround we disable this particualr test +# in this case. +run_unittests_SOURCES += factory_unittest.cc +endif +# for the dlopened types we have tests for, we also need to include the +# sources +run_unittests_SOURCES += $(top_srcdir)/src/lib/datasrc/sqlite3_accessor.cc +#run_unittests_SOURCES += $(top_srcdir)/src/lib/datasrc/memory_datasrc.cc run_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) run_unittests_LDFLAGS = $(AM_LDFLAGS) $(GTEST_LDFLAGS) @@ -36,6 +54,7 @@ run_unittests_LDADD = $(GTEST_LDADD) run_unittests_LDADD += $(SQLITE_LIBS) run_unittests_LDADD += $(top_builddir)/src/lib/datasrc/libdatasrc.la run_unittests_LDADD += $(top_builddir)/src/lib/dns/libdns++.la +run_unittests_LDADD += $(top_builddir)/src/lib/util/libutil.la run_unittests_LDADD += $(top_builddir)/src/lib/log/liblog.la run_unittests_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la run_unittests_LDADD += $(top_builddir)/src/lib/cc/libcc.la @@ -57,3 +76,4 @@ EXTRA_DIST += testdata/sql1.example.com.signed EXTRA_DIST += testdata/sql2.example.com.signed EXTRA_DIST += testdata/test-root.sqlite3 EXTRA_DIST += testdata/test.sqlite3 +EXTRA_DIST += testdata/rwtest.sqlite3 diff --git a/src/lib/datasrc/tests/cache_unittest.cc b/src/lib/datasrc/tests/cache_unittest.cc index 96beae072a..1325f64f37 100644 --- a/src/lib/datasrc/tests/cache_unittest.cc +++ b/src/lib/datasrc/tests/cache_unittest.cc @@ -202,15 +202,15 @@ TEST_F(CacheTest, retrieveFail) { } TEST_F(CacheTest, expire) { - // Insert "foo" with a duration of 2 seconds; sleep 3. The + // Insert "foo" with a duration of 1 seconds; sleep 2. The // record should not be returned from the cache even though it's // at the top of the cache. RRsetPtr aaaa(new RRset(Name("foo"), RRClass::IN(), RRType::AAAA(), RRTTL(0))); aaaa->addRdata(in::AAAA("2001:db8:3:bb::5")); - cache.addPositive(aaaa, 0, 2); + cache.addPositive(aaaa, 0, 1); - sleep(3); + sleep(2); RRsetPtr r; uint32_t f; diff --git a/src/lib/datasrc/tests/client_unittest.cc b/src/lib/datasrc/tests/client_unittest.cc new file mode 100644 index 0000000000..5b2c91ab51 --- /dev/null +++ b/src/lib/datasrc/tests/client_unittest.cc @@ -0,0 +1,50 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. 
("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include + +#include + +using namespace isc::datasrc; +using isc::dns::Name; + +namespace { + +/* + * The DataSourceClient can't be created as it has pure virtual methods. + * So we implement them as NOPs and test the other methods. + */ +class NopClient : public DataSourceClient { +public: + virtual FindResult findZone(const isc::dns::Name&) const { + return (FindResult(result::NOTFOUND, ZoneFinderPtr())); + } + virtual ZoneUpdaterPtr getUpdater(const isc::dns::Name&, bool) const { + return (ZoneUpdaterPtr()); + } +}; + +class ClientTest : public ::testing::Test { +public: + NopClient client_; +}; + +// The default implementation is NotImplemented +TEST_F(ClientTest, defaultIterator) { + EXPECT_THROW(client_.getIterator(Name(".")), isc::NotImplemented); +} + +} diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc new file mode 100644 index 0000000000..fe57185235 --- /dev/null +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -0,0 +1,2410 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include + +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#include + +#include + +using namespace isc::datasrc; +using namespace std; +using namespace boost; +using namespace isc::dns; + +namespace { + +// Imaginary zone IDs used in the mock accessor below. +const int READONLY_ZONE_ID = 42; +const int WRITABLE_ZONE_ID = 4200; + +// Commonly used test data +const char* const TEST_RECORDS[][5] = { + // some plain data + {"www.example.org.", "A", "3600", "", "192.0.2.1"}, + {"www.example.org.", "AAAA", "3600", "", "2001:db8::1"}, + {"www.example.org.", "AAAA", "3600", "", "2001:db8::2"}, + {"www.example.org.", "NSEC", "3600", "", "www2.example.org. A AAAA NSEC RRSIG"}, + {"www.example.org.", "RRSIG", "3600", "", "NSEC 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"}, + + {"www2.example.org.", "A", "3600", "", "192.0.2.1"}, + {"www2.example.org.", "AAAA", "3600", "", "2001:db8::1"}, + {"www2.example.org.", "A", "3600", "", "192.0.2.2"}, + + {"cname.example.org.", "CNAME", "3600", "", "www.example.org."}, + + // some DNSSEC-'signed' data + {"signed1.example.org.", "A", "3600", "", "192.0.2.1"}, + {"signed1.example.org.", "RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + + {"signed1.example.org.", "RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"}, + {"signed1.example.org.", "AAAA", "3600", "", "2001:db8::1"}, + {"signed1.example.org.", "AAAA", "3600", "", "2001:db8::2"}, + {"signed1.example.org.", "RRSIG", "3600", "", "AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + + {"signedcname1.example.org.", "CNAME", "3600", "", "www.example.org."}, + {"signedcname1.example.org.", "RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + + // special case might fail; sig is for cname, which isn't there (should be ignored) + // (ignoring of 'normal' other type is done above by www.) + {"acnamesig1.example.org.", "A", "3600", "", "192.0.2.1"}, + {"acnamesig1.example.org.", "RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + {"acnamesig1.example.org.", "RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + + // let's pretend we have a database that is not careful + // about the order in which it returns data + {"signed2.example.org.", "RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + {"signed2.example.org.", "AAAA", "3600", "", "2001:db8::2"}, + {"signed2.example.org.", "RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"}, + {"signed2.example.org.", "A", "3600", "", "192.0.2.1"}, + {"signed2.example.org.", "RRSIG", "3600", "", "AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + {"signed2.example.org.", "AAAA", "3600", "", "2001:db8::1"}, + + {"signedcname2.example.org.", "RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + {"signedcname2.example.org.", "CNAME", "3600", "", "www.example.org."}, + + {"acnamesig2.example.org.", "RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + {"acnamesig2.example.org.", "A", "3600", "", "192.0.2.1"}, + {"acnamesig2.example.org.", "RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + + {"acnamesig3.example.org.", "RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + {"acnamesig3.example.org.", "RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"}, + {"acnamesig3.example.org.", "A", "3600", "", "192.0.2.1"}, + + {"ttldiff1.example.org.", "A", "3600", "", "192.0.2.1"}, + {"ttldiff1.example.org.", "A", "360", "", "192.0.2.2"}, + + {"ttldiff2.example.org.", "A", "360", "", "192.0.2.1"}, + {"ttldiff2.example.org.", "A", "3600", "", "192.0.2.2"}, + + // also add some intentionally bad data + {"badcname1.example.org.", "A", "3600", "", "192.0.2.1"}, + {"badcname1.example.org.", "CNAME", "3600", "", "www.example.org."}, + + {"badcname2.example.org.", "CNAME", "3600", "", "www.example.org."}, + {"badcname2.example.org.", "A", "3600", "", "192.0.2.1"}, + + {"badcname3.example.org.", "CNAME", "3600", "", "www.example.org."}, + {"badcname3.example.org.", "CNAME", "3600", "", "www.example2.org."}, + + {"badrdata.example.org.", "A", "3600", "", "bad"}, + + {"badtype.example.org.", "BAD_TYPE", "3600", "", "192.0.2.1"}, + + {"badttl.example.org.", "A", "badttl", "", "192.0.2.1"}, + + {"badsig.example.org.", "A", "badttl", "", "192.0.2.1"}, + {"badsig.example.org.", "RRSIG", "3600", "", "A 5 3 3600 somebaddata 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + + {"badsigtype.example.org.", "A", "3600", "", "192.0.2.1"}, + {"badsigtype.example.org.", "RRSIG", "3600", "TXT", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + + // Data for testing delegation (with NS and DNAME) + {"delegation.example.org.", "NS", "3600", "", "ns.example.com."}, + {"delegation.example.org.", "NS", "3600", "", + "ns.delegation.example.org."}, + {"delegation.example.org.", "DS", "3600", "", "1 RSAMD5 2 abcd"}, + {"delegation.example.org.", "RRSIG", "3600", "", "NS 5 3 3600 " + "20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + {"ns.delegation.example.org.", "A", "3600", "", "192.0.2.1"}, + {"deep.below.delegation.example.org.", "A", "3600", "", "192.0.2.1"}, + + {"dname.example.org.", "A", "3600", "", "192.0.2.1"}, + {"dname.example.org.", "DNAME", "3600", "", "dname.example.com."}, + {"dname.example.org.", "RRSIG", "3600", "", + "DNAME 5 3 3600 20000101000000 20000201000000 12345 " + "example.org. FAKEFAKEFAKE"}, + + {"below.dname.example.org.", "A", "3600", "", "192.0.2.1"}, + + // Broken NS + {"brokenns1.example.org.", "A", "3600", "", "192.0.2.1"}, + {"brokenns1.example.org.", "NS", "3600", "", "ns.example.com."}, + + {"brokenns2.example.org.", "NS", "3600", "", "ns.example.com."}, + {"brokenns2.example.org.", "A", "3600", "", "192.0.2.1"}, + + // Now double DNAME, to test failure mode + {"baddname.example.org.", "DNAME", "3600", "", "dname1.example.com."}, + {"baddname.example.org.", "DNAME", "3600", "", "dname2.example.com."}, + + // Put some data into apex (including NS) so we can check our NS + // doesn't break anything + {"example.org.", "NS", "3600", "", "ns.example.com."}, + {"example.org.", "A", "3600", "", "192.0.2.1"}, + {"example.org.", "NSEC", "3600", "", "acnamesig1.example.org. NS A NSEC RRSIG"}, + {"example.org.", "RRSIG", "3600", "", "NSEC 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. FAKEFAKEFAKE"}, + {"example.org.", "RRSIG", "3600", "", "NS 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. FAKEFAKEFAKE"}, + + // This is because of empty domain test + {"a.b.example.org.", "A", "3600", "", "192.0.2.1"}, + + // Something for wildcards + {"*.wild.example.org.", "A", "3600", "", "192.0.2.5"}, + {"*.wild.example.org.", "RRSIG", "3600", "A", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"}, + {"*.wild.example.org.", "NSEC", "3600", "", "cancel.here.wild.example.org. A NSEC RRSIG"}, + {"*.wild.example.org.", "RRSIG", "3600", "", "NSEC 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"}, + {"cancel.here.wild.example.org.", "AAAA", "3600", "", "2001:db8::5"}, + {"delegatedwild.example.org.", "NS", "3600", "", "ns.example.com."}, + {"*.delegatedwild.example.org.", "A", "3600", "", "192.0.2.5"}, + {"wild.*.foo.example.org.", "A", "3600", "", "192.0.2.5"}, + {"wild.*.foo.*.bar.example.org.", "A", "3600", "", "192.0.2.5"}, + {"bao.example.org.", "NSEC", "3600", "", "wild.*.foo.*.bar.example.org. NSEC"}, + {"*.cnamewild.example.org.", "CNAME", "3600", "", "www.example.org."}, + {"*.nswild.example.org.", "NS", "3600", "", "ns.example.com."}, + // For NSEC empty non-terminal + {"l.example.org.", "NSEC", "3600", "", "empty.nonterminal.example.org. NSEC"}, + {"empty.nonterminal.example.org.", "A", "3600", "", "192.0.2.1"}, + // Invalid rdata + {"invalidrdata.example.org.", "A", "3600", "", "Bunch of nonsense"}, + {"invalidrdata2.example.org.", "A", "3600", "", "192.0.2.1"}, + {"invalidrdata2.example.org.", "RRSIG", "3600", "", "Nonsense"}, + + {NULL, NULL, NULL, NULL, NULL}, +}; + +/* + * An accessor with minimum implementation, keeping the original + * "NotImplemented" methods. + */ +class NopAccessor : public DatabaseAccessor { +public: + NopAccessor() : database_name_("mock_database") + { } + + virtual std::pair getZone(const std::string& name) const { + if (name == "example.org.") { + return (std::pair(true, READONLY_ZONE_ID)); + } else if (name == "null.example.org.") { + return (std::pair(true, 13)); + } else if (name == "empty.example.org.") { + return (std::pair(true, 0)); + } else if (name == "bad.example.org.") { + return (std::pair(true, -1)); + } else { + return (std::pair(false, 0)); + } + } + + virtual shared_ptr clone() { + return (shared_ptr()); // bogus data, but unused + } + + virtual std::pair startUpdateZone(const std::string&, bool) { + // return dummy value. unused anyway. + return (pair(true, 0)); + } + virtual void commitUpdateZone() {} + virtual void rollbackUpdateZone() {} + virtual void addRecordToZone(const string (&)[ADD_COLUMN_COUNT]) {} + virtual void deleteRecordInZone(const string (&)[DEL_PARAM_COUNT]) {} + + virtual const std::string& getDBName() const { + return (database_name_); + } + + virtual IteratorContextPtr getRecords(const std::string&, int, bool) + const + { + isc_throw(isc::NotImplemented, + "This database datasource can't be iterated"); + } + + virtual IteratorContextPtr getAllRecords(int) const { + isc_throw(isc::NotImplemented, + "This database datasource can't be iterated"); + } + + virtual std::string findPreviousName(int, const std::string&) const { + isc_throw(isc::NotImplemented, + "This data source doesn't support DNSSEC"); + } +private: + const std::string database_name_; + +}; + +/* + * A virtual database accessor that pretends it contains single zone -- + * example.org. + * + * It has the same getZone method as NopConnection, but it provides + * implementation of the optional functionality. 
+ */ +class MockAccessor : public NopAccessor { + // Type of mock database "row"s + typedef std::map > > + Domains; + +public: + MockAccessor() : rollbacked_(false) { + readonly_records_ = &readonly_records_master_; + update_records_ = &update_records_master_; + empty_records_ = &empty_records_master_; + fillData(); + } + + virtual shared_ptr clone() { + shared_ptr cloned_accessor(new MockAccessor()); + cloned_accessor->readonly_records_ = &readonly_records_master_; + cloned_accessor->update_records_ = &update_records_master_; + cloned_accessor->empty_records_ = &empty_records_master_; + latest_clone_ = cloned_accessor; + return (cloned_accessor); + } + +private: + class MockNameIteratorContext : public IteratorContext { + public: + MockNameIteratorContext(const MockAccessor& mock_accessor, int zone_id, + const std::string& name, bool subdomains) : + searched_name_(name), cur_record_(0) + { + // 'hardcoded' names to trigger exceptions + // On these names some exceptions are thrown, to test the robustness + // of the find() method. + if (searched_name_ == "dsexception.in.search.") { + isc_throw(DataSourceError, "datasource exception on search"); + } else if (searched_name_ == "iscexception.in.search.") { + isc_throw(isc::Exception, "isc exception on search"); + } else if (searched_name_ == "basicexception.in.search.") { + throw std::exception(); + } + + cur_record_ = 0; + const Domains& cur_records = mock_accessor.getMockRecords(zone_id); + if (cur_records.count(name) > 0) { + // we're not aiming for efficiency in this test, simply + // copy the relevant vector from records + cur_name = cur_records.find(name)->second; + } else if (subdomains) { + cur_name.clear(); + // Just walk everything and check if it is a subdomain. + // If it is, just copy all data from there. + for (Domains::const_iterator i(cur_records.begin()); + i != cur_records.end(); ++i) { + const Name local(i->first); + if (local.compare(Name(name)).getRelation() == + isc::dns::NameComparisonResult::SUBDOMAIN) { + cur_name.insert(cur_name.end(), i->second.begin(), + i->second.end()); + } + } + } else { + cur_name.clear(); + } + } + + virtual bool getNext(std::string (&columns)[COLUMN_COUNT]) { + if (searched_name_ == "dsexception.in.getnext.") { + isc_throw(DataSourceError, "datasource exception on getnextrecord"); + } else if (searched_name_ == "iscexception.in.getnext.") { + isc_throw(isc::Exception, "isc exception on getnextrecord"); + } else if (searched_name_ == "basicexception.in.getnext.") { + throw std::exception(); + } + + if (cur_record_ < cur_name.size()) { + for (size_t i = 0; i < COLUMN_COUNT; ++i) { + columns[i] = cur_name[cur_record_][i]; + } + cur_record_++; + return (true); + } else { + return (false); + } + } + + private: + const std::string searched_name_; + int cur_record_; + std::vector< std::vector > cur_name; + }; + + class MockIteratorContext : public IteratorContext { + private: + int step; + public: + MockIteratorContext() : + step(0) + { } + virtual bool getNext(string (&data)[COLUMN_COUNT]) { + switch (step ++) { + case 0: + data[DatabaseAccessor::NAME_COLUMN] = "example.org"; + data[DatabaseAccessor::TYPE_COLUMN] = "SOA"; + data[DatabaseAccessor::TTL_COLUMN] = "300"; + data[DatabaseAccessor::RDATA_COLUMN] = "ns1.example.org. admin.example.org. 
" + "1234 3600 1800 2419200 7200"; + return (true); + case 1: + data[DatabaseAccessor::NAME_COLUMN] = "x.example.org"; + data[DatabaseAccessor::TYPE_COLUMN] = "A"; + data[DatabaseAccessor::TTL_COLUMN] = "300"; + data[DatabaseAccessor::RDATA_COLUMN] = "192.0.2.1"; + return (true); + case 2: + data[DatabaseAccessor::NAME_COLUMN] = "x.example.org"; + data[DatabaseAccessor::TYPE_COLUMN] = "A"; + data[DatabaseAccessor::TTL_COLUMN] = "300"; + data[DatabaseAccessor::RDATA_COLUMN] = "192.0.2.2"; + return (true); + case 3: + data[DatabaseAccessor::NAME_COLUMN] = "x.example.org"; + data[DatabaseAccessor::TYPE_COLUMN] = "AAAA"; + data[DatabaseAccessor::TTL_COLUMN] = "300"; + data[DatabaseAccessor::RDATA_COLUMN] = "2001:db8::1"; + return (true); + case 4: + data[DatabaseAccessor::NAME_COLUMN] = "x.example.org"; + data[DatabaseAccessor::TYPE_COLUMN] = "AAAA"; + data[DatabaseAccessor::TTL_COLUMN] = "300"; + data[DatabaseAccessor::RDATA_COLUMN] = "2001:db8::2"; + return (true); + default: + ADD_FAILURE() << + "Request past the end of iterator context"; + case 5: + return (false); + } + } + }; + class EmptyIteratorContext : public IteratorContext { + public: + virtual bool getNext(string(&)[COLUMN_COUNT]) { + return (false); + } + }; + class BadIteratorContext : public IteratorContext { + private: + int step; + public: + BadIteratorContext() : + step(0) + { } + virtual bool getNext(string (&data)[COLUMN_COUNT]) { + switch (step ++) { + case 0: + data[DatabaseAccessor::NAME_COLUMN] = "x.example.org"; + data[DatabaseAccessor::TYPE_COLUMN] = "A"; + data[DatabaseAccessor::TTL_COLUMN] = "300"; + data[DatabaseAccessor::RDATA_COLUMN] = "192.0.2.1"; + return (true); + case 1: + data[DatabaseAccessor::NAME_COLUMN] = "x.example.org"; + data[DatabaseAccessor::TYPE_COLUMN] = "A"; + data[DatabaseAccessor::TTL_COLUMN] = "301"; + data[DatabaseAccessor::RDATA_COLUMN] = "192.0.2.2"; + return (true); + default: + ADD_FAILURE() << + "Request past the end of iterator context"; + case 2: + return (false); + } + } + }; +public: + virtual IteratorContextPtr getAllRecords(int id) const { + if (id == READONLY_ZONE_ID) { + return (IteratorContextPtr(new MockIteratorContext())); + } else if (id == 13) { + return (IteratorContextPtr()); + } else if (id == 0) { + return (IteratorContextPtr(new EmptyIteratorContext())); + } else if (id == -1) { + return (IteratorContextPtr(new BadIteratorContext())); + } else { + isc_throw(isc::Unexpected, "Unknown zone ID"); + } + } + + virtual IteratorContextPtr getRecords(const std::string& name, int id, + bool subdomains) const + { + if (id == READONLY_ZONE_ID || id == WRITABLE_ZONE_ID) { + return (IteratorContextPtr( + new MockNameIteratorContext(*this, id, name, + subdomains))); + } else { + isc_throw(isc::Unexpected, "Unknown zone ID"); + } + } + + virtual pair startUpdateZone(const std::string& zone_name, + bool replace) + { + const pair zone_info = getZone(zone_name); + if (!zone_info.first) { + return (pair(false, 0)); + } + + // Prepare the record set for update. If replacing the existing one, + // we use an empty set; otherwise we use a writable copy of the + // original. + if (replace) { + update_records_->clear(); + } else { + *update_records_ = *readonly_records_; + } + + return (pair(true, WRITABLE_ZONE_ID)); + } + virtual void commitUpdateZone() { + *readonly_records_ = *update_records_; + } + virtual void rollbackUpdateZone() { + // Special hook: if something with a name of "throw.example.org" + // has been added, trigger an imaginary unexpected event with an + // exception. 
+ if (update_records_->count("throw.example.org.") > 0) { + isc_throw(DataSourceError, "unexpected failure in rollback"); + } + + rollbacked_ = true; + } + virtual void addRecordToZone(const string (&columns)[ADD_COLUMN_COUNT]) { + // Copy the current value to cur_name. If it doesn't exist, + // operator[] will create a new one. + cur_name_ = (*update_records_)[columns[DatabaseAccessor::ADD_NAME]]; + + vector record_columns; + record_columns.push_back(columns[DatabaseAccessor::ADD_TYPE]); + record_columns.push_back(columns[DatabaseAccessor::ADD_TTL]); + record_columns.push_back(columns[DatabaseAccessor::ADD_SIGTYPE]); + record_columns.push_back(columns[DatabaseAccessor::ADD_RDATA]); + record_columns.push_back(columns[DatabaseAccessor::ADD_NAME]); + + // copy back the added entry + cur_name_.push_back(record_columns); + (*update_records_)[columns[DatabaseAccessor::ADD_NAME]] = cur_name_; + + // remember this one so that test cases can check it. + copy(columns, columns + DatabaseAccessor::ADD_COLUMN_COUNT, + columns_lastadded_); + } + + // Helper predicate class used in deleteRecordInZone(). + struct deleteMatch { + deleteMatch(const string& type, const string& rdata) : + type_(type), rdata_(rdata) + {} + bool operator()(const vector& row) const { + return (row[0] == type_ && row[3] == rdata_); + } + const string& type_; + const string& rdata_; + }; + + virtual void deleteRecordInZone(const string (¶ms)[DEL_PARAM_COUNT]) { + vector >& records = + (*update_records_)[params[DatabaseAccessor::DEL_NAME]]; + records.erase(remove_if(records.begin(), records.end(), + deleteMatch( + params[DatabaseAccessor::DEL_TYPE], + params[DatabaseAccessor::DEL_RDATA])), + records.end()); + if (records.empty()) { + (*update_records_).erase(params[DatabaseAccessor::DEL_NAME]); + } + } + + // + // Helper methods to keep track of some update related activities + // + bool isRollbacked() const { + return (rollbacked_); + } + + const string* getLastAdded() const { + return (columns_lastadded_); + } + + // This allows the test code to get the accessor used in an update context + shared_ptr getLatestClone() const { + return (latest_clone_); + } + + virtual std::string findPreviousName(int id, const std::string& rname) + const + { + // Hardcoded for now, but we could compute it from the data + // Maybe do it when it is needed some time in future? + if (id == -1) { + isc_throw(isc::NotImplemented, "Test not implemented behaviour"); + } else if (id == 42) { + if (rname == "org.example.nonterminal.") { + return ("l.example.org."); + } else if (rname == "org.example.aa.") { + return ("example.org."); + } else if (rname == "org.example.www2." || + rname == "org.example.www1.") { + return ("www.example.org."); + } else if (rname == "org.example.badnsec2.") { + return ("badnsec1.example.org."); + } else if (rname == "org.example.brokenname.") { + return ("brokenname...example.org."); + } else if (rname == "org.example.bar.*.") { + return ("bao.example.org."); + } else if (rname == "org.example.notimplnsec." || + rname == "org.example.wild.here.") { + isc_throw(isc::NotImplemented, "Not implemented in this test"); + } else { + isc_throw(isc::Unexpected, "Unexpected name"); + } + } else { + isc_throw(isc::Unexpected, "Unknown zone ID"); + } + } + +private: + // The following member variables are storage and/or update work space + // of the test zone. The "master"s are the real objects that contain + // the data, and they are shared among all accessors cloned from + // an initially created one. 
The pointer members allow the sharing. + // "readonly" is for normal lookups. "update" is the workspace for + // updates. When update starts it will be initialized either as an + // empty set (when replacing the entire zone) or as a copy of the + // "readonly" one. "empty" is a sentinel to produce negative results. + Domains readonly_records_master_; + Domains* readonly_records_; + Domains update_records_master_; + Domains* update_records_; + const Domains empty_records_master_; + const Domains* empty_records_; + + // used as temporary storage during the building of the fake data + + // used as temporary storage after searchForRecord() and during + // getNextRecord() calls, as well as during the building of the + // fake data + std::vector< std::vector > cur_name_; + + // The columns that were most recently added via addRecordToZone() + string columns_lastadded_[ADD_COLUMN_COUNT]; + + // Whether rollback operation has been performed for the database. + // Not useful except for purely testing purpose. + bool rollbacked_; + + // Remember the mock accessor that was last cloned + boost::shared_ptr latest_clone_; + + const Domains& getMockRecords(int zone_id) const { + if (zone_id == READONLY_ZONE_ID) { + return (*readonly_records_); + } else if (zone_id == WRITABLE_ZONE_ID) { + return (*update_records_); + } + return (*empty_records_); + } + + // Adds one record to the current name in the database + // The actual data will not be added to 'records' until + // addCurName() is called + void addRecord(const std::string& type, + const std::string& ttl, + const std::string& sigtype, + const std::string& rdata) { + std::vector columns; + columns.push_back(type); + columns.push_back(ttl); + columns.push_back(sigtype); + columns.push_back(rdata); + cur_name_.push_back(columns); + } + + // Adds all records we just built with calls to addRecords + // to the actual fake database. This will clear cur_name_, + // so we can immediately start adding new records. + void addCurName(const std::string& name) { + ASSERT_EQ(0, readonly_records_->count(name)); + // Append the name to all of them + for (std::vector >::iterator + i(cur_name_.begin()); i != cur_name_.end(); ++ i) { + i->push_back(name); + } + (*readonly_records_)[name] = cur_name_; + cur_name_.clear(); + } + + // Fills the database with zone data. + // This method constructs a number of resource records (with addRecord), + // which will all be added for one domain name to the fake database + // (with addCurName). So for instance the first set of calls create + // data for the name 'www.example.org', which will consist of one A RRset + // of one record, and one AAAA RRset of two records. + // The order in which they are added is the order in which getNextRecord() + // will return them (so we can test whether find() etc. support data that + // might not come in 'normal' order) + // It shall immediately fail if you try to add the same name twice. 
+ void fillData() { + const char* prev_name = NULL; + for (int i = 0; TEST_RECORDS[i][0] != NULL; ++i) { + if (prev_name != NULL && + strcmp(prev_name, TEST_RECORDS[i][0]) != 0) { + addCurName(prev_name); + } + prev_name = TEST_RECORDS[i][0]; + addRecord(TEST_RECORDS[i][1], TEST_RECORDS[i][2], + TEST_RECORDS[i][3], TEST_RECORDS[i][4]); + } + addCurName(prev_name); + } +}; + +// This tests the default getRecords behaviour, throwing NotImplemented +TEST(DatabaseConnectionTest, getRecords) { + EXPECT_THROW(NopAccessor().getRecords(".", 1, false), + isc::NotImplemented); +} + +// This tests the default getAllRecords behaviour, throwing NotImplemented +TEST(DatabaseConnectionTest, getAllRecords) { + // The parameters don't matter + EXPECT_THROW(NopAccessor().getAllRecords(1), + isc::NotImplemented); +} + +// This test fixture is templated so that we can share (most of) the test +// cases with different types of data sources. Note that in test cases +// we need to use 'this' to refer to member variables of the test class. +template +class DatabaseClientTest : public ::testing::Test { +public: + DatabaseClientTest() : zname_("example.org"), qname_("www.example.org"), + qclass_(RRClass::IN()), qtype_(RRType::A()), + rrttl_(3600) + { + createClient(); + + // set up the commonly used finder. + DataSourceClient::FindResult zone(client_->findZone(zname_)); + assert(zone.code == result::SUCCESS); + finder_ = dynamic_pointer_cast( + zone.zone_finder); + + // Test IN/A RDATA to be added in update tests. Intentionally using + // different data than the initial data configured in the MockAccessor. + rrset_.reset(new RRset(qname_, qclass_, qtype_, rrttl_)); + rrset_->addRdata(rdata::createRdata(rrset_->getType(), + rrset_->getClass(), "192.0.2.2")); + + // And its RRSIG. Also different from the configured one. + rrsigset_.reset(new RRset(qname_, qclass_, RRType::RRSIG(), + rrttl_)); + rrsigset_->addRdata(rdata::createRdata(rrsigset_->getType(), + rrsigset_->getClass(), + "A 5 3 0 20000101000000 " + "20000201000000 0 example.org. " + "FAKEFAKEFAKE")); + } + + /* + * We initialize the client from a function, so we can call it multiple + * times per test. + */ + void createClient() { + current_accessor_ = new ACCESSOR_TYPE(); + is_mock_ = (dynamic_cast(current_accessor_) != NULL); + client_.reset(new DatabaseClient(qclass_, + shared_ptr( + current_accessor_))); + } + + /** + * Check the zone finder is a valid one and references the zone ID and + * database available here. 
+ */ + void checkZoneFinder(const DataSourceClient::FindResult& zone) { + ASSERT_NE(ZoneFinderPtr(), zone.zone_finder) << "No zone finder"; + shared_ptr finder( + dynamic_pointer_cast(zone.zone_finder)); + ASSERT_NE(shared_ptr(), finder) << + "Wrong type of finder"; + if (is_mock_) { + EXPECT_EQ(READONLY_ZONE_ID, finder->zone_id()); + } + EXPECT_EQ(current_accessor_, &finder->getAccessor()); + } + + shared_ptr getFinder() { + DataSourceClient::FindResult zone(client_->findZone(zname_)); + EXPECT_EQ(result::SUCCESS, zone.code); + shared_ptr finder( + dynamic_pointer_cast(zone.zone_finder)); + if (is_mock_) { + EXPECT_EQ(READONLY_ZONE_ID, finder->zone_id()); + } + + return (finder); + } + + // Helper methods for update tests + bool isRollbacked(bool expected = false) const { + if (is_mock_) { + const MockAccessor& mock_accessor = + dynamic_cast(*update_accessor_); + return (mock_accessor.isRollbacked()); + } else { + return (expected); + } + } + + void checkLastAdded(const char* const expected[]) const { + if (is_mock_) { + const MockAccessor* mock_accessor = + dynamic_cast(current_accessor_); + for (int i = 0; i < DatabaseAccessor::ADD_COLUMN_COUNT; ++i) { + EXPECT_EQ(expected[i], + mock_accessor->getLatestClone()->getLastAdded()[i]); + } + } + } + + void setUpdateAccessor() { + if (is_mock_) { + const MockAccessor* mock_accessor = + dynamic_cast(current_accessor_); + update_accessor_ = mock_accessor->getLatestClone(); + } + } + + // Some tests only work for MockAccessor. We remember whether our accessor + // is of that type. + bool is_mock_; + + // Will be deleted by client_, just keep the current value for comparison. + ACCESSOR_TYPE* current_accessor_; + shared_ptr client_; + const std::string database_name_; + + // The zone finder of the test zone commonly used in various tests. + shared_ptr finder_; + + // Some shortcut variables for commonly used test parameters + const Name zname_; // the zone name stored in the test data source + const Name qname_; // commonly used name to be found + const RRClass qclass_; // commonly used RR class used with qname + const RRType qtype_; // commonly used RR type used with qname + const RRTTL rrttl_; // commonly used RR TTL + RRsetPtr rrset_; // for adding/deleting an RRset + RRsetPtr rrsigset_; // for adding/deleting an RRset + + // update related objects to be tested + ZoneUpdaterPtr updater_; + shared_ptr update_accessor_; + + // placeholders + const std::vector empty_rdatas_; // for NXRRSET/NXDOMAIN + std::vector expected_rdatas_; + std::vector expected_sig_rdatas_; +}; + +class TestSQLite3Accessor : public SQLite3Accessor { +public: + TestSQLite3Accessor() : SQLite3Accessor( + TEST_DATA_BUILDDIR "/rwtest.sqlite3.copied", + RRClass::IN()) + { + startUpdateZone("example.org.", true); + string columns[ADD_COLUMN_COUNT]; + for (int i = 0; TEST_RECORDS[i][0] != NULL; ++i) { + columns[ADD_NAME] = TEST_RECORDS[i][0]; + columns[ADD_REV_NAME] = Name(columns[ADD_NAME]).reverse().toText(); + columns[ADD_TYPE] = TEST_RECORDS[i][1]; + columns[ADD_TTL] = TEST_RECORDS[i][2]; + columns[ADD_SIGTYPE] = TEST_RECORDS[i][3]; + columns[ADD_RDATA] = TEST_RECORDS[i][4]; + + addRecordToZone(columns); + } + commitUpdateZone(); + } +}; + +// The following two lines instantiate test cases with concrete accessor +// classes to be tested. +// XXX: clang++ installed on our FreeBSD buildbot cannot complete compiling +// this file, seemingly due to the size of the code. 
We'll consider more +// complete workaround, but for a short term workaround we'll reduce the +// number of tested accessor classes (thus reducing the amount of code +// to be compiled) for this particular environment. +#if defined(__clang__) && defined(__FreeBSD__) +typedef ::testing::Types TestAccessorTypes; +#else +typedef ::testing::Types TestAccessorTypes; +#endif + +TYPED_TEST_CASE(DatabaseClientTest, TestAccessorTypes); + +// In some cases the entire test fixture is for the mock accessor only. +// We use the usual TEST_F for them with the corresponding specialized class +// to make the code simpler. +typedef DatabaseClientTest MockDatabaseClientTest; + +TYPED_TEST(DatabaseClientTest, zoneNotFound) { + DataSourceClient::FindResult zone( + this->client_->findZone(Name("example.com"))); + EXPECT_EQ(result::NOTFOUND, zone.code); +} + +TYPED_TEST(DatabaseClientTest, exactZone) { + DataSourceClient::FindResult zone( + this->client_->findZone(Name("example.org"))); + EXPECT_EQ(result::SUCCESS, zone.code); + this->checkZoneFinder(zone); +} + +TYPED_TEST(DatabaseClientTest, superZone) { + DataSourceClient::FindResult zone(this->client_->findZone(Name( + "sub.example.org"))); + EXPECT_EQ(result::PARTIALMATCH, zone.code); + this->checkZoneFinder(zone); +} + +// This test doesn't depend on derived accessor class, so we use TEST(). +TEST(GenericDatabaseClientTest, noAccessorException) { + // We need a dummy variable here; some compiler would regard it a mere + // declaration instead of an instantiation and make the test fail. + EXPECT_THROW(DatabaseClient dummy(RRClass::IN(), + shared_ptr()), + isc::InvalidParameter); +} + +// If the zone doesn't exist, exception is thrown +TYPED_TEST(DatabaseClientTest, noZoneIterator) { + EXPECT_THROW(this->client_->getIterator(Name("example.com")), + DataSourceError); +} + +// If the zone doesn't exist and iteration is not implemented, it still throws +// the exception it doesn't exist +TEST(GenericDatabaseClientTest, noZoneNotImplementedIterator) { + EXPECT_THROW(DatabaseClient(RRClass::IN(), + boost::shared_ptr( + new NopAccessor())).getIterator( + Name("example.com")), + DataSourceError); +} + +TEST(GenericDatabaseClientTest, notImplementedIterator) { + EXPECT_THROW(DatabaseClient(RRClass::IN(), shared_ptr( + new NopAccessor())).getIterator(Name("example.org")), + isc::NotImplemented); +} + +// Pretend a bug in the connection and pass NULL as the context +// Should not crash, but gracefully throw. Works for the mock accessor only. +TEST_F(MockDatabaseClientTest, nullIteratorContext) { + EXPECT_THROW(this->client_->getIterator(Name("null.example.org")), + isc::Unexpected); +} + +// It doesn't crash or anything if the zone is completely empty. +// Works for the mock accessor only. +TEST_F(MockDatabaseClientTest, emptyIterator) { + ZoneIteratorPtr it(this->client_->getIterator(Name("empty.example.org"))); + EXPECT_EQ(ConstRRsetPtr(), it->getNextRRset()); + // This is past the end, it should throw + EXPECT_THROW(it->getNextRRset(), isc::Unexpected); +} + +// Iterate through a zone +TYPED_TEST(DatabaseClientTest, iterator) { + ZoneIteratorPtr it(this->client_->getIterator(Name("example.org"))); + ConstRRsetPtr rrset(it->getNextRRset()); + ASSERT_NE(ConstRRsetPtr(), rrset); + + // The rest of the checks work only for the mock accessor. 
+ if (!this->is_mock_) { + return; + } + + EXPECT_EQ(Name("example.org"), rrset->getName()); + EXPECT_EQ(RRClass::IN(), rrset->getClass()); + EXPECT_EQ(RRType::SOA(), rrset->getType()); + EXPECT_EQ(RRTTL(300), rrset->getTTL()); + RdataIteratorPtr rit(rrset->getRdataIterator()); + ASSERT_FALSE(rit->isLast()); + rit->next(); + EXPECT_TRUE(rit->isLast()); + + rrset = it->getNextRRset(); + ASSERT_NE(ConstRRsetPtr(), rrset); + EXPECT_EQ(Name("x.example.org"), rrset->getName()); + EXPECT_EQ(RRClass::IN(), rrset->getClass()); + EXPECT_EQ(RRType::A(), rrset->getType()); + EXPECT_EQ(RRTTL(300), rrset->getTTL()); + rit = rrset->getRdataIterator(); + ASSERT_FALSE(rit->isLast()); + EXPECT_EQ("192.0.2.1", rit->getCurrent().toText()); + rit->next(); + ASSERT_FALSE(rit->isLast()); + EXPECT_EQ("192.0.2.2", rit->getCurrent().toText()); + rit->next(); + EXPECT_TRUE(rit->isLast()); + + rrset = it->getNextRRset(); + ASSERT_NE(ConstRRsetPtr(), rrset); + EXPECT_EQ(Name("x.example.org"), rrset->getName()); + EXPECT_EQ(RRClass::IN(), rrset->getClass()); + EXPECT_EQ(RRType::AAAA(), rrset->getType()); + EXPECT_EQ(RRTTL(300), rrset->getTTL()); + EXPECT_EQ(ConstRRsetPtr(), it->getNextRRset()); + rit = rrset->getRdataIterator(); + ASSERT_FALSE(rit->isLast()); + EXPECT_EQ("2001:db8::1", rit->getCurrent().toText()); + rit->next(); + ASSERT_FALSE(rit->isLast()); + EXPECT_EQ("2001:db8::2", rit->getCurrent().toText()); + rit->next(); + EXPECT_TRUE(rit->isLast()); +} + +// This has inconsistent TTL in the set (the rest, like nonsense in +// the data is handled in rdata itself). Works for the mock accessor only. +TEST_F(MockDatabaseClientTest, badIterator) { + // It should not throw, but get the lowest one of them + ZoneIteratorPtr it(this->client_->getIterator(Name("bad.example.org"))); + EXPECT_EQ(it->getNextRRset()->getTTL(), isc::dns::RRTTL(300)); +} + +// checks if the given rrset matches the +// given name, class, type and rdatas +void +checkRRset(isc::dns::ConstRRsetPtr rrset, + const isc::dns::Name& name, + const isc::dns::RRClass& rrclass, + const isc::dns::RRType& rrtype, + const isc::dns::RRTTL& rrttl, + const std::vector& rdatas) { + isc::dns::RRsetPtr expected_rrset( + new isc::dns::RRset(name, rrclass, rrtype, rrttl)); + for (unsigned int i = 0; i < rdatas.size(); ++i) { + expected_rrset->addRdata( + isc::dns::rdata::createRdata(rrtype, rrclass, + rdatas[i])); + } + isc::testutils::rrsetCheck(expected_rrset, rrset); +} + +void +doFindTest(ZoneFinder& finder, + const isc::dns::Name& name, + const isc::dns::RRType& type, + const isc::dns::RRType& expected_type, + const isc::dns::RRTTL expected_ttl, + ZoneFinder::Result expected_result, + const std::vector& expected_rdatas, + const std::vector& expected_sig_rdatas, + const isc::dns::Name& expected_name = isc::dns::Name::ROOT_NAME(), + const ZoneFinder::FindOptions options = ZoneFinder::FIND_DEFAULT) +{ + SCOPED_TRACE("doFindTest " + name.toText() + " " + type.toText()); + ZoneFinder::FindResult result = + finder.find(name, type, NULL, options); + ASSERT_EQ(expected_result, result.code) << name << " " << type; + if (!expected_rdatas.empty() && result.rrset) { + checkRRset(result.rrset, expected_name != Name(".") ? expected_name : + name, finder.getClass(), expected_type, expected_ttl, + expected_rdatas); + + if (!expected_sig_rdatas.empty() && result.rrset->getRRsig()) { + checkRRset(result.rrset->getRRsig(), expected_name != Name(".") ? 
+ expected_name : name, finder.getClass(), + isc::dns::RRType::RRSIG(), expected_ttl, + expected_sig_rdatas); + } else if (expected_sig_rdatas.empty()) { + EXPECT_EQ(isc::dns::RRsetPtr(), result.rrset->getRRsig()); + } else { + ADD_FAILURE() << "Missing RRSIG"; + } + } else if (expected_rdatas.empty()) { + EXPECT_EQ(isc::dns::RRsetPtr(), result.rrset); + } else { + ADD_FAILURE() << "Missing result"; + } +} + +TYPED_TEST(DatabaseClientTest, find) { + shared_ptr finder(this->getFinder()); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + doFindTest(*finder, isc::dns::Name("www.example.org."), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_rdatas_.push_back("192.0.2.2"); + doFindTest(*finder, isc::dns::Name("www2.example.org."), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("2001:db8::1"); + this->expected_rdatas_.push_back("2001:db8::2"); + doFindTest(*finder, isc::dns::Name("www.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + this->rrttl_, + ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + doFindTest(*finder, isc::dns::Name("www.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), + this->rrttl_, + ZoneFinder::NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("www.example.org."); + doFindTest(*finder, isc::dns::Name("cname.example.org."), + this->qtype_, isc::dns::RRType::CNAME(), this->rrttl_, + ZoneFinder::CNAME, this->expected_rdatas_, + this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("www.example.org."); + doFindTest(*finder, isc::dns::Name("cname.example.org."), + isc::dns::RRType::CNAME(), isc::dns::RRType::CNAME(), + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + doFindTest(*finder, isc::dns::Name("doesnotexist.example.org."), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::NXDOMAIN, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + this->expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("signed1.example.org."), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("2001:db8::1"); + this->expected_rdatas_.push_back("2001:db8::2"); + this->expected_sig_rdatas_.push_back("AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("signed1.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + doFindTest(*finder, isc::dns::Name("signed1.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), this->rrttl_, + ZoneFinder::NXRRSET, this->expected_rdatas_, + this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("www.example.org."); + this->expected_sig_rdatas_.push_back("CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("signedcname1.example.org."), + this->qtype_, isc::dns::RRType::CNAME(), this->rrttl_, + ZoneFinder::CNAME, this->expected_rdatas_, + this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + this->expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("signed2.example.org."), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("2001:db8::2"); + this->expected_rdatas_.push_back("2001:db8::1"); + this->expected_sig_rdatas_.push_back("AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("signed2.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + doFindTest(*finder, isc::dns::Name("signed2.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), this->rrttl_, + ZoneFinder::NXRRSET, this->expected_rdatas_, + this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("www.example.org."); + this->expected_sig_rdatas_.push_back("CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("signedcname2.example.org."), + this->qtype_, isc::dns::RRType::CNAME(), this->rrttl_, + ZoneFinder::CNAME, this->expected_rdatas_, + this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("acnamesig1.example.org."), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("acnamesig2.example.org."), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("acnamesig3.example.org."), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_rdatas_.push_back("192.0.2.2"); + doFindTest(*finder, isc::dns::Name("ttldiff1.example.org."), + this->qtype_, this->qtype_, isc::dns::RRTTL(360), + ZoneFinder::SUCCESS, this->expected_rdatas_, + this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_rdatas_.push_back("192.0.2.2"); + doFindTest(*finder, isc::dns::Name("ttldiff2.example.org."), + this->qtype_, this->qtype_, isc::dns::RRTTL(360), + ZoneFinder::SUCCESS, this->expected_rdatas_, + this->expected_sig_rdatas_); + + EXPECT_THROW(finder->find(isc::dns::Name("badcname1.example.org."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("badcname2.example.org."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("badcname3.example.org."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("badrdata.example.org."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("badtype.example.org."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("badttl.example.org."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("badsig.example.org."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + + // Trigger the hardcoded exceptions and see if find() has cleaned up + if (this->is_mock_) { + EXPECT_THROW(finder->find(isc::dns::Name("dsexception.in.search."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("iscexception.in.search."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + isc::Exception); + EXPECT_THROW(finder->find(isc::dns::Name("basicexception.in.search."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + std::exception); + EXPECT_THROW(finder->find(isc::dns::Name("dsexception.in.getnext."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("iscexception.in.getnext."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + isc::Exception); + EXPECT_THROW(finder->find(isc::dns::Name("basicexception.in.getnext."), + this->qtype_, + NULL, ZoneFinder::FIND_DEFAULT), + std::exception); + } + + // This RRSIG has the wrong sigtype field, which should be + // an error if we decide to keep using that field + // Right now the field is ignored, so it does not error + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + 
this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("badsigtype.example.org."), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); +} + +TYPED_TEST(DatabaseClientTest, findDelegation) { + shared_ptr finder(this->getFinder()); + + // The apex should not be considered delegation point and we can access + // data + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + doFindTest(*finder, isc::dns::Name("example.org."), + this->qtype_, this->qtype_, + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("ns.example.com."); + this->expected_sig_rdatas_.push_back("NS 5 3 3600 20000101000000 20000201000000 " + "12345 example.org. FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("example.org."), + isc::dns::RRType::NS(), isc::dns::RRType::NS(), + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->expected_sig_rdatas_); + + // Check when we ask for something below delegation point, we get the NS + // (Both when the RRset there exists and doesn't) + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + this->expected_rdatas_.push_back("ns.example.com."); + this->expected_rdatas_.push_back("ns.delegation.example.org."); + this->expected_sig_rdatas_.push_back("NS 5 3 3600 20000101000000 20000201000000 " + "12345 example.org. FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("ns.delegation.example.org."), + this->qtype_, isc::dns::RRType::NS(), + this->rrttl_, ZoneFinder::DELEGATION, this->expected_rdatas_, + this->expected_sig_rdatas_, + isc::dns::Name("delegation.example.org.")); + doFindTest(*finder, isc::dns::Name("ns.delegation.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::NS(), + this->rrttl_, ZoneFinder::DELEGATION, this->expected_rdatas_, + this->expected_sig_rdatas_, + isc::dns::Name("delegation.example.org.")); + doFindTest(*finder, isc::dns::Name("deep.below.delegation.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::NS(), + this->rrttl_, ZoneFinder::DELEGATION, this->expected_rdatas_, + this->expected_sig_rdatas_, + isc::dns::Name("delegation.example.org.")); + + // Even when we check directly at the delegation point, we should get + // the NS + doFindTest(*finder, isc::dns::Name("delegation.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::NS(), + this->rrttl_, ZoneFinder::DELEGATION, this->expected_rdatas_, + this->expected_sig_rdatas_); + + // And when we ask direcly for the NS, we should still get delegation + doFindTest(*finder, isc::dns::Name("delegation.example.org."), + isc::dns::RRType::NS(), isc::dns::RRType::NS(), + this->rrttl_, ZoneFinder::DELEGATION, this->expected_rdatas_, + this->expected_sig_rdatas_); + + // Now test delegation. If it is below the delegation point, we should get + // the DNAME (the one with data under DNAME is invalid zone, but we test + // the behaviour anyway just to make sure) + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("dname.example.com."); + this->expected_sig_rdatas_.clear(); + this->expected_sig_rdatas_.push_back("DNAME 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. 
" + "FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("below.dname.example.org."), + this->qtype_, isc::dns::RRType::DNAME(), + this->rrttl_, ZoneFinder::DNAME, this->expected_rdatas_, + this->expected_sig_rdatas_, isc::dns::Name("dname.example.org.")); + doFindTest(*finder, isc::dns::Name("below.dname.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::DNAME(), + this->rrttl_, ZoneFinder::DNAME, this->expected_rdatas_, + this->expected_sig_rdatas_, isc::dns::Name("dname.example.org.")); + doFindTest(*finder, isc::dns::Name("really.deep.below.dname.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::DNAME(), + this->rrttl_, ZoneFinder::DNAME, this->expected_rdatas_, + this->expected_sig_rdatas_, isc::dns::Name("dname.example.org.")); + + // Asking direcly for DNAME should give SUCCESS + doFindTest(*finder, isc::dns::Name("dname.example.org."), + isc::dns::RRType::DNAME(), isc::dns::RRType::DNAME(), + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->expected_sig_rdatas_); + + // But we don't delegate at DNAME point + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_sig_rdatas_.clear(); + doFindTest(*finder, isc::dns::Name("dname.example.org."), + this->qtype_, this->qtype_, + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->expected_sig_rdatas_); + this->expected_rdatas_.clear(); + doFindTest(*finder, isc::dns::Name("dname.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + this->rrttl_, ZoneFinder::NXRRSET, this->expected_rdatas_, + this->expected_sig_rdatas_); + + // This is broken dname, it contains two targets + EXPECT_THROW(finder->find(isc::dns::Name("below.baddname.example.org."), + this->qtype_, NULL, + ZoneFinder::FIND_DEFAULT), + DataSourceError); + + // Broken NS - it lives together with something else + EXPECT_THROW(finder->find(isc::dns::Name("brokenns1.example.org."), + this->qtype_, NULL, + ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("brokenns2.example.org."), + this->qtype_, NULL, + ZoneFinder::FIND_DEFAULT), + DataSourceError); +} + +TYPED_TEST(DatabaseClientTest, emptyDomain) { + shared_ptr finder(this->getFinder()); + + // This domain doesn't exist, but a subdomain of it does. + // Therefore we should pretend the domain is there, but contains no RRsets + doFindTest(*finder, isc::dns::Name("b.example.org."), this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_); +} + +// Glue-OK mode. Just go through NS delegations down (but not through +// DNAME) and pretend it is not there. 
+TYPED_TEST(DatabaseClientTest, glueOK) { + shared_ptr finder(this->getFinder()); + + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + doFindTest(*finder, isc::dns::Name("ns.delegation.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + this->rrttl_, ZoneFinder::NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_, + isc::dns::Name("ns.delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + doFindTest(*finder, isc::dns::Name("nothere.delegation.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + this->rrttl_, ZoneFinder::NXDOMAIN, + this->expected_rdatas_, this->expected_sig_rdatas_, + isc::dns::Name("nothere.delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + this->expected_rdatas_.push_back("192.0.2.1"); + doFindTest(*finder, isc::dns::Name("ns.delegation.example.org."), + this->qtype_, this->qtype_, + this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_, + isc::dns::Name("ns.delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("ns.example.com."); + this->expected_rdatas_.push_back("ns.delegation.example.org."); + this->expected_sig_rdatas_.clear(); + this->expected_sig_rdatas_.push_back("NS 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. " + "FAKEFAKEFAKE"); + // When we request the NS, it should be SUCCESS, not DELEGATION + // (different in GLUE_OK) + doFindTest(*finder, isc::dns::Name("delegation.example.org."), + isc::dns::RRType::NS(), isc::dns::RRType::NS(), + this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_, + isc::dns::Name("delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("dname.example.com."); + this->expected_sig_rdatas_.clear(); + this->expected_sig_rdatas_.push_back("DNAME 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. " + "FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("below.dname.example.org."), + this->qtype_, isc::dns::RRType::DNAME(), + this->rrttl_, ZoneFinder::DNAME, this->expected_rdatas_, + this->expected_sig_rdatas_, + isc::dns::Name("dname.example.org."), ZoneFinder::FIND_GLUE_OK); + doFindTest(*finder, isc::dns::Name("below.dname.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::DNAME(), + this->rrttl_, ZoneFinder::DNAME, this->expected_rdatas_, + this->expected_sig_rdatas_, + isc::dns::Name("dname.example.org."), ZoneFinder::FIND_GLUE_OK); +} + +TYPED_TEST(DatabaseClientTest, wildcard) { + shared_ptr finder(this->getFinder()); + + // First, simple wildcard match + // Check also that the RRSIG is added from the wildcard (not modified) + this->expected_rdatas_.push_back("192.0.2.5"); + this->expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. 
" + "FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("a.wild.example.org"), + this->qtype_, this->qtype_, this->rrttl_, + ZoneFinder::WILDCARD, this->expected_rdatas_, + this->expected_sig_rdatas_); + doFindTest(*finder, isc::dns::Name("b.a.wild.example.org"), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::WILDCARD, + this->expected_rdatas_, this->expected_sig_rdatas_); + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + doFindTest(*finder, isc::dns::Name("a.wild.example.org"), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + this->rrttl_, ZoneFinder::WILDCARD_NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_); + doFindTest(*finder, isc::dns::Name("b.a.wild.example.org"), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + this->rrttl_, ZoneFinder::WILDCARD_NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_); + + // Direct request for this wildcard + this->expected_rdatas_.push_back("192.0.2.5"); + this->expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. " + "FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("*.wild.example.org"), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + this->expected_rdatas_.clear(); + this->expected_sig_rdatas_.clear(); + doFindTest(*finder, isc::dns::Name("*.wild.example.org"), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + this->rrttl_, ZoneFinder::NXRRSET, this->expected_rdatas_, + this->expected_sig_rdatas_); + // This is nonsense, but check it doesn't match by some stupid accident + doFindTest(*finder, isc::dns::Name("a.*.wild.example.org"), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::NXDOMAIN, + this->expected_rdatas_, this->expected_sig_rdatas_); + // These should be canceled, since it is below a domain which exitsts + doFindTest(*finder, isc::dns::Name("nothing.here.wild.example.org"), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::NXDOMAIN, + this->expected_rdatas_, this->expected_sig_rdatas_); + doFindTest(*finder, isc::dns::Name("cancel.here.wild.example.org"), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_); + doFindTest(*finder, + isc::dns::Name("below.cancel.here.wild.example.org"), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::NXDOMAIN, + this->expected_rdatas_, this->expected_sig_rdatas_); + // And this should be just plain empty non-terminal domain, check + // the wildcard doesn't hurt it + doFindTest(*finder, isc::dns::Name("here.wild.example.org"), + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_); + // Also make sure that the wildcard doesn't hurt the original data + // below the wildcard + this->expected_rdatas_.push_back("2001:db8::5"); + doFindTest(*finder, isc::dns::Name("cancel.here.wild.example.org"), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + this->expected_rdatas_.clear(); + + // How wildcard go together with delegation + this->expected_rdatas_.push_back("ns.example.com."); + doFindTest(*finder, isc::dns::Name("below.delegatedwild.example.org"), + this->qtype_, isc::dns::RRType::NS(), this->rrttl_, + ZoneFinder::DELEGATION, this->expected_rdatas_, + this->expected_sig_rdatas_, + isc::dns::Name("delegatedwild.example.org")); + // FIXME: This doesn't look logically OK, 
GLUE_OK should make it transparent, + // so the match should either work or be canceled, but return NXDOMAIN + doFindTest(*finder, isc::dns::Name("below.delegatedwild.example.org"), + this->qtype_, isc::dns::RRType::NS(), this->rrttl_, + ZoneFinder::DELEGATION, this->expected_rdatas_, + this->expected_sig_rdatas_, + isc::dns::Name("delegatedwild.example.org"), + ZoneFinder::FIND_GLUE_OK); + + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.5"); + // These are direct matches + const char* positive_names[] = { + "wild.*.foo.example.org.", + "wild.*.foo.*.bar.example.org.", + NULL + }; + for (const char** name(positive_names); *name != NULL; ++ name) { + doFindTest(*finder, isc::dns::Name(*name), this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, + this->expected_sig_rdatas_); + } + + // These are wildcard matches against empty nonterminal asterisk + this->expected_rdatas_.clear(); + const char* negative_names[] = { + "a.foo.example.org.", + "*.foo.example.org.", + "foo.example.org.", + "wild.bar.foo.example.org.", + "baz.foo.*.bar.example.org", + "baz.foo.baz.bar.example.org", + "*.foo.baz.bar.example.org", + "*.foo.*.bar.example.org", + "foo.*.bar.example.org", + "*.bar.example.org", + "bar.example.org", + NULL + }; + for (const char** name(negative_names); *name != NULL; ++ name) { + doFindTest(*finder, isc::dns::Name(*name), this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_); + // FIXME: What should be returned in this case? How does the + // DNSSEC logic handle it? + } + + const char* negative_dnssec_names[] = { + "a.bar.example.org.", + "foo.baz.bar.example.org.", + "a.foo.bar.example.org.", + NULL + }; + + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("wild.*.foo.*.bar.example.org. NSEC"); + this->expected_sig_rdatas_.clear(); + for (const char** name(negative_dnssec_names); *name != NULL; ++ name) { + doFindTest(*finder, isc::dns::Name(*name), this->qtype_, + RRType::NSEC(), this->rrttl_, ZoneFinder::WILDCARD_NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_, + Name("bao.example.org."), ZoneFinder::FIND_DNSSEC); + } + + // Some strange things in the wild node + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("www.example.org."); + this->expected_sig_rdatas_.clear(); + doFindTest(*finder, isc::dns::Name("a.cnamewild.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::CNAME(), + this->rrttl_, ZoneFinder::CNAME, + this->expected_rdatas_, this->expected_sig_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("ns.example.com."); + doFindTest(*finder, isc::dns::Name("a.nswild.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::NS(), + this->rrttl_, ZoneFinder::DELEGATION, + this->expected_rdatas_, this->expected_sig_rdatas_); +} + +TYPED_TEST(DatabaseClientTest, NXRRSET_NSEC) { + // The domain exists, but doesn't have this RRType + // So we should get its NSEC + shared_ptr finder(this->getFinder()); + + this->expected_rdatas_.push_back("www2.example.org. A AAAA NSEC RRSIG"); + this->expected_sig_rdatas_.push_back("NSEC 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. 
" + "FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("www.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::NSEC(), + this->rrttl_, ZoneFinder::NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_, + Name::ROOT_NAME(), ZoneFinder::FIND_DNSSEC); +} + +TYPED_TEST(DatabaseClientTest, wildcardNXRRSET_NSEC) { + // The domain exists, but doesn't have this RRType + // So we should get its NSEC + // + // The user will have to query us again to get the correct + // answer (eg. prove there's not an exact match) + shared_ptr finder(this->getFinder()); + + this->expected_rdatas_.push_back("cancel.here.wild.example.org. A NSEC " + "RRSIG"); + this->expected_sig_rdatas_.push_back("NSEC 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. " + "FAKEFAKEFAKE"); + // Note that the NSEC name should NOT be synthesized. + doFindTest(*finder, isc::dns::Name("a.wild.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::NSEC(), + this->rrttl_, ZoneFinder::WILDCARD_NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_, + Name("*.wild.example.org"), ZoneFinder::FIND_DNSSEC); +} + +TYPED_TEST(DatabaseClientTest, NXDOMAIN_NSEC) { + // The domain doesn't exist, so we must get the right NSEC + shared_ptr finder(this->getFinder()); + + this->expected_rdatas_.push_back("www2.example.org. A AAAA NSEC RRSIG"); + this->expected_sig_rdatas_.push_back("NSEC 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. " + "FAKEFAKEFAKE"); + doFindTest(*finder, isc::dns::Name("www1.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::NSEC(), + this->rrttl_, ZoneFinder::NXDOMAIN, + this->expected_rdatas_, this->expected_sig_rdatas_, + Name("www.example.org."), ZoneFinder::FIND_DNSSEC); + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("acnamesig1.example.org. NS A NSEC RRSIG"); + // This tests it works correctly in apex (there was a bug, where a check + // for NS-alone was there and it would throw). + doFindTest(*finder, isc::dns::Name("aa.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::NSEC(), + this->rrttl_, ZoneFinder::NXDOMAIN, + this->expected_rdatas_, this->expected_sig_rdatas_, + Name("example.org."), ZoneFinder::FIND_DNSSEC); + + // Check that if the DB doesn't support it, the exception from there + // is not propagated and it only does not include the NSEC + if (!this->is_mock_) { + return; // We don't make the real DB to throw + } + EXPECT_NO_THROW(doFindTest(*finder, + isc::dns::Name("notimplnsec.example.org."), + isc::dns::RRType::TXT(), + isc::dns::RRType::NSEC(), this->rrttl_, + ZoneFinder::NXDOMAIN, this->empty_rdatas_, + this->empty_rdatas_, Name::ROOT_NAME(), + ZoneFinder::FIND_DNSSEC)); +} + +TYPED_TEST(DatabaseClientTest, emptyNonterminalNSEC) { + // Same as NXDOMAIN_NSEC, but with empty non-terminal + shared_ptr finder(this->getFinder()); + + this->expected_rdatas_.push_back("empty.nonterminal.example.org. 
NSEC"); + doFindTest(*finder, isc::dns::Name("nonterminal.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::NSEC(), this->rrttl_, + ZoneFinder::NXRRSET, + this->expected_rdatas_, this->expected_sig_rdatas_, + Name("l.example.org."), ZoneFinder::FIND_DNSSEC); + + // Check that if the DB doesn't support it, the exception from there + // is not propagated and it only does not include the NSEC + if (!this->is_mock_) { + return; // We don't make the real DB to throw + } + EXPECT_NO_THROW(doFindTest(*finder, + isc::dns::Name("here.wild.example.org."), + isc::dns::RRType::TXT(), + isc::dns::RRType::NSEC(), + this->rrttl_, ZoneFinder::NXRRSET, + this->empty_rdatas_, this->empty_rdatas_, + Name::ROOT_NAME(), ZoneFinder::FIND_DNSSEC)); +} + +TYPED_TEST(DatabaseClientTest, getOrigin) { + DataSourceClient::FindResult + zone(this->client_->findZone(Name("example.org"))); + ASSERT_EQ(result::SUCCESS, zone.code); + shared_ptr finder( + dynamic_pointer_cast(zone.zone_finder)); + if (this->is_mock_) { + EXPECT_EQ(READONLY_ZONE_ID, finder->zone_id()); + } + EXPECT_EQ(this->zname_, finder->getOrigin()); +} + +TYPED_TEST(DatabaseClientTest, updaterFinder) { + this->updater_ = this->client_->getUpdater(this->zname_, false); + ASSERT_TRUE(this->updater_); + + // If this update isn't replacing the zone, the finder should work + // just like the normal find() case. + if (this->is_mock_) { + DatabaseClient::Finder& finder = dynamic_cast( + this->updater_->getFinder()); + EXPECT_EQ(WRITABLE_ZONE_ID, finder.zone_id()); + } + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + doFindTest(this->updater_->getFinder(), this->qname_, + this->qtype_, this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + + // When replacing the zone, the updater's finder shouldn't see anything + // in the zone until something is added. + this->updater_.reset(); + this->updater_ = this->client_->getUpdater(this->zname_, true); + ASSERT_TRUE(this->updater_); + if (this->is_mock_) { + DatabaseClient::Finder& finder = dynamic_cast( + this->updater_->getFinder()); + EXPECT_EQ(WRITABLE_ZONE_ID, finder.zone_id()); + } + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::NXDOMAIN, + this->empty_rdatas_, this->empty_rdatas_); +} + +TYPED_TEST(DatabaseClientTest, flushZone) { + // A simple update case: flush the entire zone + shared_ptr finder(this->getFinder()); + + // Before update, the name exists. + EXPECT_EQ(ZoneFinder::SUCCESS, finder->find(this->qname_, + this->qtype_).code); + + // start update in the replace mode. the normal finder should still + // be able to see the record, but the updater's finder shouldn't. + this->updater_ = this->client_->getUpdater(this->zname_, true); + this->setUpdateAccessor(); + EXPECT_EQ(ZoneFinder::SUCCESS, + finder->find(this->qname_, this->qtype_).code); + EXPECT_EQ(ZoneFinder::NXDOMAIN, + this->updater_->getFinder().find(this->qname_, + this->qtype_).code); + + // commit the update. now the normal finder shouldn't see it. + this->updater_->commit(); + EXPECT_EQ(ZoneFinder::NXDOMAIN, finder->find(this->qname_, + this->qtype_).code); + + // Check rollback wasn't accidentally performed. + EXPECT_FALSE(this->isRollbacked()); +} + +TYPED_TEST(DatabaseClientTest, updateCancel) { + // similar to the previous test, but destruct the updater before commit. 
+ + ZoneFinderPtr finder = this->client_->findZone(this->zname_).zone_finder; + EXPECT_EQ(ZoneFinder::SUCCESS, finder->find(this->qname_, + this->qtype_).code); + + this->updater_ = this->client_->getUpdater(this->zname_, true); + this->setUpdateAccessor(); + EXPECT_EQ(ZoneFinder::NXDOMAIN, + this->updater_->getFinder().find(this->qname_, + this->qtype_).code); + // DB should not have been rolled back yet. + EXPECT_FALSE(this->isRollbacked()); + this->updater_.reset(); // destruct without commit + + // reset() should have triggered rollback (although it doesn't affect + // anything to the mock accessor implementation except for the result of + // isRollbacked()) + EXPECT_TRUE(this->isRollbacked(true)); + EXPECT_EQ(ZoneFinder::SUCCESS, finder->find(this->qname_, + this->qtype_).code); +} + +TYPED_TEST(DatabaseClientTest, exceptionFromRollback) { + this->updater_ = this->client_->getUpdater(this->zname_, true); + + this->rrset_.reset(new RRset(Name("throw.example.org"), this->qclass_, + this->qtype_, this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "192.0.2.1")); + this->updater_->addRRset(*this->rrset_); + // destruct without commit. The added name will result in an exception + // in the MockAccessor's rollback method. It shouldn't be propagated. + EXPECT_NO_THROW(this->updater_.reset()); +} + +TYPED_TEST(DatabaseClientTest, duplicateCommit) { + // duplicate commit. should result in exception. + this->updater_ = this->client_->getUpdater(this->zname_, true); + this->updater_->commit(); + EXPECT_THROW(this->updater_->commit(), DataSourceError); +} + +TYPED_TEST(DatabaseClientTest, addRRsetToNewZone) { + // Add a single RRset to a fresh empty zone + this->updater_ = this->client_->getUpdater(this->zname_, true); + this->updater_->addRRset(*this->rrset_); + + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.2"); + { + SCOPED_TRACE("add RRset"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + } + + // Similar to the previous case, but with RRSIG + this->updater_.reset(); + this->updater_ = this->client_->getUpdater(this->zname_, true); + this->updater_->addRRset(*this->rrset_); + this->updater_->addRRset(*this->rrsigset_); + + // confirm the expected columns were passed to the accessor (if checkable). + const char* const rrsig_added[] = { + "www.example.org.", "org.example.www.", "3600", "RRSIG", "A", + "A 5 3 0 20000101000000 20000201000000 0 example.org. FAKEFAKEFAKE" + }; + this->checkLastAdded(rrsig_added); + + this->expected_sig_rdatas_.clear(); + this->expected_sig_rdatas_.push_back( + rrsig_added[DatabaseAccessor::ADD_RDATA]); + { + SCOPED_TRACE("add RRset with RRSIG"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->expected_sig_rdatas_); + } + + // Add the non RRSIG RRset again, to see the attempt of adding RRSIG + // causes any unexpected effect, in particular, whether the SIGTYPE + // field might remain. + this->updater_->addRRset(*this->rrset_); + const char* const rrset_added[] = { + "www.example.org.", "org.example.www.", "3600", "A", "", "192.0.2.2" + }; + this->checkLastAdded(rrset_added); +} + +TYPED_TEST(DatabaseClientTest, addRRsetToCurrentZone) { + // Similar to the previous test, but not replacing the existing data. 
+ shared_ptr finder(this->getFinder()); + + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->updater_->addRRset(*this->rrset_); + + // We should see both old and new data. + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_rdatas_.push_back("192.0.2.2"); + { + SCOPED_TRACE("add RRset"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + } + this->updater_->commit(); + { + SCOPED_TRACE("add RRset after commit"); + doFindTest(*finder, this->qname_, this->qtype_, this->qtype_, + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, addMultipleRRs) { + // Similar to the previous case, but the added RRset contains multiple + // RRs. + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "192.0.2.3")); + this->updater_->addRRset(*this->rrset_); + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_rdatas_.push_back("192.0.2.2"); + this->expected_rdatas_.push_back("192.0.2.3"); + { + SCOPED_TRACE("add multiple RRs"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, addRRsetOfLargerTTL) { + // Similar to the previous one, but the TTL of the added RRset is larger + // than that of the existing record. The finder should use the smaller + // one. + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->rrset_->setTTL(RRTTL(7200)); + this->updater_->addRRset(*this->rrset_); + + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_rdatas_.push_back("192.0.2.2"); + { + SCOPED_TRACE("add RRset of larger TTL"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, addRRsetOfSmallerTTL) { + // Similar to the previous one, but the added RRset has a smaller TTL. + // The added TTL should be used by the finder. + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->rrset_->setTTL(RRTTL(1800)); + this->updater_->addRRset(*this->rrset_); + + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_rdatas_.push_back("192.0.2.2"); + { + SCOPED_TRACE("add RRset of smaller TTL"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, RRTTL(1800), ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, addSameRR) { + // Add the same RR as that is already in the data source. + // Currently the add interface doesn't try to suppress the duplicate, + // neither does the finder. We may want to revisit it in future versions. 
+ + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->rrset_.reset(new RRset(this->qname_, this->qclass_, this->qtype_, + this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "192.0.2.1")); + this->updater_->addRRset(*this->rrset_); + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_rdatas_.push_back("192.0.2.1"); + { + SCOPED_TRACE("add same RR"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, addDeviantRR) { + this->updater_ = this->client_->getUpdater(this->zname_, false); + + // RR class mismatch. This should be detected and rejected. + this->rrset_.reset(new RRset(this->qname_, RRClass::CH(), RRType::TXT(), + this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "test text")); + EXPECT_THROW(this->updater_->addRRset(*this->rrset_), DataSourceError); + + // Out-of-zone owner name. At a higher level this should be rejected, + // but it doesn't happen in this interface. + this->rrset_.reset(new RRset(Name("example.com"), this->qclass_, + this->qtype_, this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "192.0.2.100")); + this->updater_->addRRset(*this->rrset_); + + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.100"); + { + // Note: with the find() implementation being more strict about + // zone cuts, this test may fail. Then the test should be updated. + SCOPED_TRACE("add out-of-zone RR"); + doFindTest(this->updater_->getFinder(), Name("example.com"), + this->qtype_, this->qtype_, this->rrttl_, + ZoneFinder::SUCCESS, this->expected_rdatas_, + this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, addEmptyRRset) { + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->rrset_.reset(new RRset(this->qname_, this->qclass_, this->qtype_, + this->rrttl_)); + EXPECT_THROW(this->updater_->addRRset(*this->rrset_), DataSourceError); +} + +TYPED_TEST(DatabaseClientTest, addAfterCommit) { + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->updater_->commit(); + EXPECT_THROW(this->updater_->addRRset(*this->rrset_), DataSourceError); +} + +TYPED_TEST(DatabaseClientTest, addRRsetWithRRSIG) { + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->rrset_->addRRsig(*this->rrsigset_); + EXPECT_THROW(this->updater_->addRRset(*this->rrset_), DataSourceError); +} + +TYPED_TEST(DatabaseClientTest, deleteRRset) { + shared_ptr finder(this->getFinder()); + + this->rrset_.reset(new RRset(this->qname_, this->qclass_, this->qtype_, + this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "192.0.2.1")); + + // Delete one RR from an RRset + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->updater_->deleteRRset(*this->rrset_); + + // Delete the only RR of a name + this->rrset_.reset(new RRset(Name("cname.example.org"), this->qclass_, + RRType::CNAME(), this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "www.example.org")); + this->updater_->deleteRRset(*this->rrset_); + + // The this->updater_ finder should immediately see the deleted results. 
+ { + SCOPED_TRACE("delete RRset"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::NXRRSET, + this->empty_rdatas_, this->empty_rdatas_); + doFindTest(this->updater_->getFinder(), Name("cname.example.org"), + this->qtype_, this->qtype_, this->rrttl_, + ZoneFinder::NXDOMAIN, this->empty_rdatas_, + this->empty_rdatas_); + } + + // before committing the change, the original finder should see the + // original record. + { + SCOPED_TRACE("delete RRset before commit"); + this->expected_rdatas_.push_back("192.0.2.1"); + doFindTest(*finder, this->qname_, this->qtype_, this->qtype_, + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->empty_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("www.example.org."); + doFindTest(*finder, Name("cname.example.org"), this->qtype_, + RRType::CNAME(), this->rrttl_, ZoneFinder::CNAME, + this->expected_rdatas_, this->empty_rdatas_); + } + + // once committed, the record should be removed from the original finder's + // view, too. + this->updater_->commit(); + { + SCOPED_TRACE("delete RRset after commit"); + doFindTest(*finder, this->qname_, this->qtype_, this->qtype_, + this->rrttl_, ZoneFinder::NXRRSET, this->empty_rdatas_, + this->empty_rdatas_); + doFindTest(*finder, Name("cname.example.org"), this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::NXDOMAIN, + this->empty_rdatas_, this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, deleteRRsetToNXDOMAIN) { + // similar to the previous case, but it removes the only record of the + // given name. a subsequent find() should result in NXDOMAIN. + this->rrset_.reset(new RRset(Name("cname.example.org"), this->qclass_, + RRType::CNAME(), this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "www.example.org")); + + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->updater_->deleteRRset(*this->rrset_); + { + SCOPED_TRACE("delete RRset to NXDOMAIN"); + doFindTest(this->updater_->getFinder(), Name("cname.example.org"), + this->qtype_, this->qtype_, this->rrttl_, + ZoneFinder::NXDOMAIN, this->empty_rdatas_, + this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, deleteMultipleRRs) { + this->rrset_.reset(new RRset(this->qname_, this->qclass_, RRType::AAAA(), + this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "2001:db8::1")); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "2001:db8::2")); + + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->updater_->deleteRRset(*this->rrset_); + + { + SCOPED_TRACE("delete multiple RRs"); + doFindTest(this->updater_->getFinder(), this->qname_, RRType::AAAA(), + this->qtype_, this->rrttl_, ZoneFinder::NXRRSET, + this->empty_rdatas_, this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, partialDelete) { + this->rrset_.reset(new RRset(this->qname_, this->qclass_, RRType::AAAA(), + this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "2001:db8::1")); + // This does not exist in the test data source: + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "2001:db8::3")); + + // deleteRRset should succeed "silently", and subsequent find() should + // find the remaining RR. 
+ this->updater_ = this->client_->getUpdater(this->zname_, false); + this->updater_->deleteRRset(*this->rrset_); + { + SCOPED_TRACE("partial delete"); + this->expected_rdatas_.push_back("2001:db8::2"); + doFindTest(this->updater_->getFinder(), this->qname_, RRType::AAAA(), + RRType::AAAA(), this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, deleteNoMatch) { + // similar to the previous test, but there's not even a match in the + // specified RRset. Essentially there's no difference in the result. + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->updater_->deleteRRset(*this->rrset_); + { + SCOPED_TRACE("delete no match"); + this->expected_rdatas_.push_back("192.0.2.1"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, deleteWithDifferentTTL) { + // Our delete interface simply ignores TTL (may change in a future version) + this->rrset_.reset(new RRset(this->qname_, this->qclass_, this->qtype_, + RRTTL(1800))); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "192.0.2.1")); + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->updater_->deleteRRset(*this->rrset_); + { + SCOPED_TRACE("delete RRset with a different TTL"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::NXRRSET, + this->empty_rdatas_, this->empty_rdatas_); + } +} + +TYPED_TEST(DatabaseClientTest, deleteDeviantRR) { + this->updater_ = this->client_->getUpdater(this->zname_, false); + + // RR class mismatch. This should be detected and rejected. + this->rrset_.reset(new RRset(this->qname_, RRClass::CH(), RRType::TXT(), + this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "test text")); + EXPECT_THROW(this->updater_->deleteRRset(*this->rrset_), DataSourceError); + + // Out-of-zone owner name. At a higher level this should be rejected, + // but it doesn't happen in this interface. + this->rrset_.reset(new RRset(Name("example.com"), this->qclass_, + this->qtype_, this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "192.0.2.100")); + EXPECT_NO_THROW(this->updater_->deleteRRset(*this->rrset_)); +} + +TYPED_TEST(DatabaseClientTest, deleteAfterCommit) { + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->updater_->commit(); + EXPECT_THROW(this->updater_->deleteRRset(*this->rrset_), DataSourceError); +} + +TYPED_TEST(DatabaseClientTest, deleteEmptyRRset) { + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->rrset_.reset(new RRset(this->qname_, this->qclass_, this->qtype_, + this->rrttl_)); + EXPECT_THROW(this->updater_->deleteRRset(*this->rrset_), DataSourceError); +} + +TYPED_TEST(DatabaseClientTest, deleteRRsetWithRRSIG) { + this->updater_ = this->client_->getUpdater(this->zname_, false); + this->rrset_->addRRsig(*this->rrsigset_); + EXPECT_THROW(this->updater_->deleteRRset(*this->rrset_), DataSourceError); +} + +TYPED_TEST(DatabaseClientTest, compoundUpdate) { + // This test case performs an arbitrary chosen add/delete operations + // in a single update transaction. 
Essentially there is nothing new to + // test here, but there may be some bugs that were overlooked and can + // only happen in the compound update scenario, so we test it. + + this->updater_ = this->client_->getUpdater(this->zname_, false); + + // add a new RR to an existing RRset + this->updater_->addRRset(*this->rrset_); + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.1"); + this->expected_rdatas_.push_back("192.0.2.2"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + + // delete an existing RR + this->rrset_.reset(new RRset(Name("www.example.org"), this->qclass_, + this->qtype_, this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "192.0.2.1")); + this->updater_->deleteRRset(*this->rrset_); + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.2"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + + // re-add it + this->updater_->addRRset(*this->rrset_); + this->expected_rdatas_.push_back("192.0.2.1"); + doFindTest(this->updater_->getFinder(), this->qname_, this->qtype_, + this->qtype_, this->rrttl_, ZoneFinder::SUCCESS, + this->expected_rdatas_, this->empty_rdatas_); + + // add a new RR with a new name + const Name newname("newname.example.org"); + const RRType newtype(RRType::AAAA()); + doFindTest(this->updater_->getFinder(), newname, newtype, newtype, + this->rrttl_, ZoneFinder::NXDOMAIN, this->empty_rdatas_, + this->empty_rdatas_); + this->rrset_.reset(new RRset(newname, this->qclass_, newtype, + this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "2001:db8::10")); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "2001:db8::11")); + this->updater_->addRRset(*this->rrset_); + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("2001:db8::10"); + this->expected_rdatas_.push_back("2001:db8::11"); + doFindTest(this->updater_->getFinder(), newname, newtype, newtype, + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->empty_rdatas_); + + // delete one RR from the previous set + this->rrset_.reset(new RRset(newname, this->qclass_, newtype, + this->rrttl_)); + this->rrset_->addRdata(rdata::createRdata(this->rrset_->getType(), + this->rrset_->getClass(), + "2001:db8::11")); + this->updater_->deleteRRset(*this->rrset_); + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("2001:db8::10"); + doFindTest(this->updater_->getFinder(), newname, newtype, newtype, + this->rrttl_, ZoneFinder::SUCCESS, this->expected_rdatas_, + this->empty_rdatas_); + + // Commit the changes, confirm that all the changes are applied. 
+ this->updater_->commit(); + shared_ptr finder(this->getFinder()); + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("192.0.2.2"); + this->expected_rdatas_.push_back("192.0.2.1"); + doFindTest(*finder, this->qname_, this->qtype_, this->qtype_, this->rrttl_, + ZoneFinder::SUCCESS, this->expected_rdatas_, + this->empty_rdatas_); + + this->expected_rdatas_.clear(); + this->expected_rdatas_.push_back("2001:db8::10"); + doFindTest(*finder, newname, newtype, newtype, this->rrttl_, + ZoneFinder::SUCCESS, this->expected_rdatas_, + this->empty_rdatas_); +} + +TYPED_TEST(DatabaseClientTest, previous) { + shared_ptr finder(this->getFinder()); + + EXPECT_EQ(Name("www.example.org."), + finder->findPreviousName(Name("www2.example.org."))); + // Check a name that doesn't exist there + EXPECT_EQ(Name("www.example.org."), + finder->findPreviousName(Name("www1.example.org."))); + if (this->is_mock_) { // We can't really force the DB to throw + // Check it doesn't crash or anything if the underlying DB throws + DataSourceClient::FindResult + zone(this->client_->findZone(Name("bad.example.org"))); + finder = + dynamic_pointer_cast(zone.zone_finder); + + EXPECT_THROW(finder->findPreviousName(Name("bad.example.org")), + isc::NotImplemented); + } else { + // No need to test this on mock one, because we test only that + // the exception gets through + + // A name before the origin + EXPECT_THROW(finder->findPreviousName(Name("example.com")), + isc::NotImplemented); + } +} + +TYPED_TEST(DatabaseClientTest, invalidRdata) { + shared_ptr finder(this->getFinder()); + + EXPECT_THROW(finder->find(Name("invalidrdata.example.org."), RRType::A()), + DataSourceError); + EXPECT_THROW(finder->find(Name("invalidrdata2.example.org."), RRType::A()), + DataSourceError); +} + +TEST_F(MockDatabaseClientTest, missingNSEC) { + shared_ptr finder(this->getFinder()); + + /* + * FIXME: For now, we can't really distinguish this bogus input + * from not-signed zone so we can't throw. But once we can, + * enable the original test. + */ +#if 0 + EXPECT_THROW(finder->find(Name("badnsec2.example.org."), RRType::A(), NULL, + ZoneFinder::FIND_DNSSEC), + DataSourceError); +#endif + doFindTest(*finder, Name("badnsec2.example.org."), RRType::A(), + RRType::A(), this->rrttl_, ZoneFinder::NXDOMAIN, + this->expected_rdatas_, this->expected_sig_rdatas_); +} + +TEST_F(MockDatabaseClientTest, badName) { + shared_ptr finder(this->getFinder()); + + EXPECT_THROW(finder->findPreviousName(Name("brokenname.example.org.")), + DataSourceError); +} + +} diff --git a/src/lib/datasrc/tests/factory_unittest.cc b/src/lib/datasrc/tests/factory_unittest.cc new file mode 100644 index 0000000000..94d11189ed --- /dev/null +++ b/src/lib/datasrc/tests/factory_unittest.cc @@ -0,0 +1,175 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include +#include +#include + +#include +#include + +#include + +using namespace isc::datasrc; +using namespace isc::data; + +std::string SQLITE_DBFILE_EXAMPLE_ORG = TEST_DATA_DIR "/example.org.sqlite3"; + +namespace { + +TEST(FactoryTest, sqlite3ClientBadConfig) { + // We start out by building the configuration data bit by bit, + // testing each form of 'bad config', until we have a good one. + // Then we do some very basic operation on the client (detailed + // tests are left to the implementation-specific backends) + ElementPtr config; + ASSERT_THROW(DataSourceClientContainer("sqlite3", config), + DataSourceConfigError); + + config = Element::create("asdf"); + ASSERT_THROW(DataSourceClientContainer("sqlite3", config), + DataSourceConfigError); + + config = Element::createMap(); + ASSERT_THROW(DataSourceClientContainer("sqlite3", config), + DataSourceConfigError); + + config->set("class", ElementPtr()); + ASSERT_THROW(DataSourceClientContainer("sqlite3", config), + DataSourceConfigError); + + config->set("class", Element::create(1)); + ASSERT_THROW(DataSourceClientContainer("sqlite3", config), + DataSourceConfigError); + + config->set("class", Element::create("FOO")); + ASSERT_THROW(DataSourceClientContainer("sqlite3", config), + DataSourceConfigError); + + config->set("class", Element::create("IN")); + ASSERT_THROW(DataSourceClientContainer("sqlite3", config), + DataSourceConfigError); + + config->set("database_file", ElementPtr()); + ASSERT_THROW(DataSourceClientContainer("sqlite3", config), + DataSourceConfigError); + + config->set("database_file", Element::create(1)); + ASSERT_THROW(DataSourceClientContainer("sqlite3", config), + DataSourceConfigError); + + config->set("database_file", Element::create("/foo/bar/doesnotexist")); + ASSERT_THROW(DataSourceClientContainer("sqlite3", config), + SQLite3Error); + + config->set("database_file", Element::create(SQLITE_DBFILE_EXAMPLE_ORG)); + DataSourceClientContainer dsc("sqlite3", config); + + DataSourceClient::FindResult result1( + dsc.getInstance().findZone(isc::dns::Name("example.org."))); + ASSERT_EQ(result::SUCCESS, result1.code); + + DataSourceClient::FindResult result2( + dsc.getInstance().findZone(isc::dns::Name("no.such.zone."))); + ASSERT_EQ(result::NOTFOUND, result2.code); + + ZoneIteratorPtr iterator(dsc.getInstance().getIterator( + isc::dns::Name("example.org."))); + + ZoneUpdaterPtr updater(dsc.getInstance().getUpdater( + isc::dns::Name("example.org."), false)); +} + +TEST(FactoryTest, memoryClient) { + // We start out by building the configuration data bit by bit, + // testing each form of 'bad config', until we have a good one. 
+ // Then we do some very basic operation on the client (detailed + // tests are left to the implementation-specific backends) + ElementPtr config; + ASSERT_THROW(DataSourceClientContainer client("memory", config), + DataSourceConfigError); + + config = Element::create("asdf"); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config = Element::createMap(); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("type", ElementPtr()); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("type", Element::create(1)); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("type", Element::create("FOO")); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("type", Element::create("memory")); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("class", ElementPtr()); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("class", Element::create(1)); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("class", Element::create("FOO")); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("class", Element::create("IN")); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("zones", ElementPtr()); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("zones", Element::create(1)); + ASSERT_THROW(DataSourceClientContainer("memory", config), + DataSourceConfigError); + + config->set("zones", Element::createList()); + DataSourceClientContainer dsc("memory", config); + + // Once it is able to load some zones, we should add a few tests + // here to see that it does. 
+ DataSourceClient::FindResult result( + dsc.getInstance().findZone(isc::dns::Name("no.such.zone."))); + ASSERT_EQ(result::NOTFOUND, result.code); + + ASSERT_THROW(dsc.getInstance().getIterator(isc::dns::Name("example.org.")), + DataSourceError); + + ASSERT_THROW(dsc.getInstance().getUpdater(isc::dns::Name("no.such.zone."), + false), isc::NotImplemented); +} + +TEST(FactoryTest, badType) { + ASSERT_THROW(DataSourceClientContainer("foo", ElementPtr()), + DataSourceError); +} + +} // end anonymous namespace + diff --git a/src/lib/datasrc/tests/memory_datasrc_unittest.cc b/src/lib/datasrc/tests/memory_datasrc_unittest.cc index 83fbb58da5..2b854db368 100644 --- a/src/lib/datasrc/tests/memory_datasrc_unittest.cc +++ b/src/lib/datasrc/tests/memory_datasrc_unittest.cc @@ -29,6 +29,8 @@ #include #include +#include +#include #include @@ -42,119 +44,173 @@ namespace { using result::SUCCESS; using result::EXIST; -class MemoryDataSrcTest : public ::testing::Test { +class InMemoryClientTest : public ::testing::Test { protected: - MemoryDataSrcTest() : rrclass(RRClass::IN()) + InMemoryClientTest() : rrclass(RRClass::IN()) {} RRClass rrclass; - MemoryDataSrc memory_datasrc; + InMemoryClient memory_client; }; -TEST_F(MemoryDataSrcTest, add_find_Zone) { +TEST_F(InMemoryClientTest, add_find_Zone) { // test add zone // Bogus zone (NULL) - EXPECT_THROW(memory_datasrc.addZone(ZonePtr()), isc::InvalidParameter); + EXPECT_THROW(memory_client.addZone(ZoneFinderPtr()), + isc::InvalidParameter); // add zones with different names one by one - EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::IN(), Name("a"))))); - EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::CH(), Name("b"))))); - EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::IN(), Name("c"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::IN(), + Name("a"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::CH(), + Name("b"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::IN(), + Name("c"))))); // add zones with the same name suffix - EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::CH(), - Name("x.d.e.f"))))); - EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::CH(), - Name("o.w.y.d.e.f"))))); - EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::CH(), - Name("p.w.y.d.e.f"))))); - EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::IN(), - Name("q.w.y.d.e.f"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::CH(), + Name("x.d.e.f"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::CH(), + Name("o.w.y.d.e.f"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::CH(), + Name("p.w.y.d.e.f"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::IN(), + Name("q.w.y.d.e.f"))))); // add super zone and its subzone - EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::CH(), Name("g.h"))))); - EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::IN(), Name("i.g.h"))))); - 
EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::IN(), - Name("z.d.e.f"))))); - EXPECT_EQ(result::SUCCESS, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::IN(), - Name("j.z.d.e.f"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::CH(), + Name("g.h"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::IN(), + Name("i.g.h"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::IN(), + Name("z.d.e.f"))))); + EXPECT_EQ(result::SUCCESS, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::IN(), + Name("j.z.d.e.f"))))); // different zone class isn't allowed. - EXPECT_EQ(result::EXIST, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::CH(), - Name("q.w.y.d.e.f"))))); + EXPECT_EQ(result::EXIST, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::CH(), + Name("q.w.y.d.e.f"))))); // names are compared in a case insensitive manner. - EXPECT_EQ(result::EXIST, memory_datasrc.addZone( - ZonePtr(new MemoryZone(RRClass::IN(), - Name("Q.W.Y.d.E.f"))))); + EXPECT_EQ(result::EXIST, memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::IN(), + Name("Q.W.Y.d.E.f"))))); // test find zone - EXPECT_EQ(result::SUCCESS, memory_datasrc.findZone(Name("a")).code); + EXPECT_EQ(result::SUCCESS, memory_client.findZone(Name("a")).code); EXPECT_EQ(Name("a"), - memory_datasrc.findZone(Name("a")).zone->getOrigin()); + memory_client.findZone(Name("a")).zone_finder->getOrigin()); EXPECT_EQ(result::SUCCESS, - memory_datasrc.findZone(Name("j.z.d.e.f")).code); + memory_client.findZone(Name("j.z.d.e.f")).code); EXPECT_EQ(Name("j.z.d.e.f"), - memory_datasrc.findZone(Name("j.z.d.e.f")).zone->getOrigin()); + memory_client.findZone(Name("j.z.d.e.f")).zone_finder-> + getOrigin()); // NOTFOUND - EXPECT_EQ(result::NOTFOUND, memory_datasrc.findZone(Name("d.e.f")).code); - EXPECT_EQ(ConstZonePtr(), memory_datasrc.findZone(Name("d.e.f")).zone); + EXPECT_EQ(result::NOTFOUND, memory_client.findZone(Name("d.e.f")).code); + EXPECT_EQ(ConstZoneFinderPtr(), + memory_client.findZone(Name("d.e.f")).zone_finder); EXPECT_EQ(result::NOTFOUND, - memory_datasrc.findZone(Name("w.y.d.e.f")).code); - EXPECT_EQ(ConstZonePtr(), - memory_datasrc.findZone(Name("w.y.d.e.f")).zone); + memory_client.findZone(Name("w.y.d.e.f")).code); + EXPECT_EQ(ConstZoneFinderPtr(), + memory_client.findZone(Name("w.y.d.e.f")).zone_finder); // there's no exact match. the result should be the longest match, // and the code should be PARTIALMATCH. 
EXPECT_EQ(result::PARTIALMATCH, - memory_datasrc.findZone(Name("j.g.h")).code); + memory_client.findZone(Name("j.g.h")).code); EXPECT_EQ(Name("g.h"), - memory_datasrc.findZone(Name("g.h")).zone->getOrigin()); + memory_client.findZone(Name("g.h")).zone_finder->getOrigin()); EXPECT_EQ(result::PARTIALMATCH, - memory_datasrc.findZone(Name("z.i.g.h")).code); + memory_client.findZone(Name("z.i.g.h")).code); EXPECT_EQ(Name("i.g.h"), - memory_datasrc.findZone(Name("z.i.g.h")).zone->getOrigin()); + memory_client.findZone(Name("z.i.g.h")).zone_finder-> + getOrigin()); } -TEST_F(MemoryDataSrcTest, getZoneCount) { - EXPECT_EQ(0, memory_datasrc.getZoneCount()); - memory_datasrc.addZone( - ZonePtr(new MemoryZone(rrclass, Name("example.com")))); - EXPECT_EQ(1, memory_datasrc.getZoneCount()); +TEST_F(InMemoryClientTest, iterator) { + // Just some preparations of data + boost::shared_ptr + zone(new InMemoryZoneFinder(RRClass::IN(), Name("a"))); + RRsetPtr aRRsetA(new RRset(Name("a"), RRClass::IN(), RRType::A(), + RRTTL(300))); + aRRsetA->addRdata(rdata::in::A("192.0.2.1")); + RRsetPtr aRRsetAAAA(new RRset(Name("a"), RRClass::IN(), RRType::AAAA(), + RRTTL(300))); + aRRsetAAAA->addRdata(rdata::in::AAAA("2001:db8::1")); + aRRsetAAAA->addRdata(rdata::in::AAAA("2001:db8::2")); + RRsetPtr subRRsetA(new RRset(Name("sub.x.a"), RRClass::IN(), RRType::A(), + RRTTL(300))); + subRRsetA->addRdata(rdata::in::A("192.0.2.2")); + EXPECT_EQ(result::SUCCESS, memory_client.addZone(zone)); + // First, the zone is not there, so it should throw + EXPECT_THROW(memory_client.getIterator(Name("b")), DataSourceError); + // This zone is not there either, even when there's a zone containing this + EXPECT_THROW(memory_client.getIterator(Name("x.a")), DataSourceError); + // Now, an empty zone + ZoneIteratorPtr iterator(memory_client.getIterator(Name("a"))); + EXPECT_EQ(ConstRRsetPtr(), iterator->getNextRRset()); + // It throws Unexpected when we are past the end + EXPECT_THROW(iterator->getNextRRset(), isc::Unexpected); + EXPECT_EQ(result::SUCCESS, zone->add(aRRsetA)); + EXPECT_EQ(result::SUCCESS, zone->add(aRRsetAAAA)); + EXPECT_EQ(result::SUCCESS, zone->add(subRRsetA)); + // Check it with full zone, one by one. + // It should be in ascending order in case of InMemory data source + // (isn't guaranteed in general) + iterator = memory_client.getIterator(Name("a")); + EXPECT_EQ(aRRsetA, iterator->getNextRRset()); + EXPECT_EQ(aRRsetAAAA, iterator->getNextRRset()); + EXPECT_EQ(subRRsetA, iterator->getNextRRset()); + EXPECT_EQ(ConstRRsetPtr(), iterator->getNextRRset()); +} + +TEST_F(InMemoryClientTest, getZoneCount) { + EXPECT_EQ(0, memory_client.getZoneCount()); + memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(rrclass, + Name("example.com")))); + EXPECT_EQ(1, memory_client.getZoneCount()); // duplicate add. counter shouldn't change - memory_datasrc.addZone( - ZonePtr(new MemoryZone(rrclass, Name("example.com")))); - EXPECT_EQ(1, memory_datasrc.getZoneCount()); + memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(rrclass, + Name("example.com")))); + EXPECT_EQ(1, memory_client.getZoneCount()); // add one more - memory_datasrc.addZone( - ZonePtr(new MemoryZone(rrclass, Name("example.org")))); - EXPECT_EQ(2, memory_datasrc.getZoneCount()); + memory_client.addZone( + ZoneFinderPtr(new InMemoryZoneFinder(rrclass, + Name("example.org")))); + EXPECT_EQ(2, memory_client.getZoneCount()); } -// A helper callback of masterLoad() used in MemoryZoneTest. 
+TEST_F(InMemoryClientTest, startUpdateZone) { + EXPECT_THROW(memory_client.getUpdater(Name("example.org"), false), + isc::NotImplemented); +} + +// A helper callback of masterLoad() used in InMemoryZoneFinderTest. void setRRset(RRsetPtr rrset, vector::iterator& it) { *(*it) = rrset; ++it; } -/// \brief Test fixture for the MemoryZone class -class MemoryZoneTest : public ::testing::Test { +/// \brief Test fixture for the InMemoryZoneFinder class +class InMemoryZoneFinderTest : public ::testing::Test { // A straightforward pair of textual RR(set) and a RRsetPtr variable // to store the RRset. Used to build test data below. struct RRsetData { @@ -162,10 +218,10 @@ class MemoryZoneTest : public ::testing::Test { RRsetPtr* rrset; }; public: - MemoryZoneTest() : + InMemoryZoneFinderTest() : class_(RRClass::IN()), origin_("example.org"), - zone_(class_, origin_) + zone_finder_(class_, origin_) { // Build test RRsets. Below, we construct an RRset for // each textual RR(s) of zone_data, and assign it to the corresponding @@ -224,8 +280,8 @@ public: // Some data to test with const RRClass class_; const Name origin_; - // The zone to torture by tests - MemoryZone zone_; + // The zone finder to torture by tests + InMemoryZoneFinder zone_finder_; /* * Some RRsets to put inside the zone. @@ -262,9 +318,9 @@ public: RRsetPtr rr_not_wild_another_; /** - * \brief Test one find query to the zone. + * \brief Test one find query to the zone finder. * - * Asks a query to the zone and checks it does not throw and returns + * Asks a query to the zone finder and checks it does not throw and returns * expected results. It returns nothing, it just signals failures * to GTEST. * @@ -274,29 +330,31 @@ public: * \param check_answer Should a check against equality of the answer be * done? * \param answer The expected rrset, if any should be returned. - * \param zone Check different MemoryZone object than zone_ (if NULL, - * uses zone_) + * \param zone_finder Check different InMemoryZoneFinder object than + * zone_finder_ (if NULL, uses zone_finder_) * \param check_wild_answer Checks that the answer has the same RRs, type * class and TTL as the eqxpected answer and that the name corresponds * to the one searched. It is meant for checking answers for wildcard * queries. */ - void findTest(const Name& name, const RRType& rrtype, Zone::Result result, + void findTest(const Name& name, const RRType& rrtype, + ZoneFinder::Result result, bool check_answer = true, const ConstRRsetPtr& answer = ConstRRsetPtr(), RRsetList* target = NULL, - MemoryZone* zone = NULL, - Zone::FindOptions options = Zone::FIND_DEFAULT, + InMemoryZoneFinder* zone_finder = NULL, + ZoneFinder::FindOptions options = ZoneFinder::FIND_DEFAULT, bool check_wild_answer = false) { - if (!zone) { - zone = &zone_; + if (zone_finder == NULL) { + zone_finder = &zone_finder_; } // The whole block is inside, because we need to check the result and // we can't assign to FindResult EXPECT_NO_THROW({ - Zone::FindResult find_result(zone->find(name, rrtype, target, - options)); + ZoneFinder::FindResult find_result(zone_finder->find( + name, rrtype, + target, options)); // Check it returns correct answers EXPECT_EQ(result, find_result.code); if (check_answer) { @@ -337,14 +395,22 @@ public: }; /** - * \brief Test MemoryZone::MemoryZone constructor. + * \brief Check that findPreviousName throws as it should now. 
+ */ +TEST_F(InMemoryZoneFinderTest, findPreviousName) { + EXPECT_THROW(zone_finder_.findPreviousName(Name("www.example.org")), + isc::NotImplemented); +} + +/** + * \brief Test InMemoryZoneFinder::InMemoryZoneFinder constructor. * - * Takes the created zone and checks its properties they are the same + * Takes the created zone finder and checks its properties they are the same * as passed parameters. */ -TEST_F(MemoryZoneTest, constructor) { - ASSERT_EQ(class_, zone_.getClass()); - ASSERT_EQ(origin_, zone_.getOrigin()); +TEST_F(InMemoryZoneFinderTest, constructor) { + ASSERT_EQ(class_, zone_finder_.getClass()); + ASSERT_EQ(origin_, zone_finder_.getOrigin()); } /** * \brief Test adding. @@ -352,174 +418,178 @@ TEST_F(MemoryZoneTest, constructor) { * We test that it throws at the correct moments and the correct exceptions. * And we test the return value. */ -TEST_F(MemoryZoneTest, add) { +TEST_F(InMemoryZoneFinderTest, add) { // This one does not belong to this zone - EXPECT_THROW(zone_.add(rr_out_), MemoryZone::OutOfZone); + EXPECT_THROW(zone_finder_.add(rr_out_), InMemoryZoneFinder::OutOfZone); // Test null pointer - EXPECT_THROW(zone_.add(ConstRRsetPtr()), MemoryZone::NullRRset); + EXPECT_THROW(zone_finder_.add(ConstRRsetPtr()), + InMemoryZoneFinder::NullRRset); // Now put all the data we have there. It should throw nothing - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_ns_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_ns_a_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_ns_aaaa_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_a_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_ns_a_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_ns_aaaa_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_a_))); // Try putting there something twice, it should be rejected - EXPECT_NO_THROW(EXPECT_EQ(EXIST, zone_.add(rr_ns_))); - EXPECT_NO_THROW(EXPECT_EQ(EXIST, zone_.add(rr_ns_a_))); + EXPECT_NO_THROW(EXPECT_EQ(EXIST, zone_finder_.add(rr_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(EXIST, zone_finder_.add(rr_ns_a_))); } -TEST_F(MemoryZoneTest, addMultipleCNAMEs) { +TEST_F(InMemoryZoneFinderTest, addMultipleCNAMEs) { rr_cname_->addRdata(generic::CNAME("canonical2.example.org.")); - EXPECT_THROW(zone_.add(rr_cname_), MemoryZone::AddError); + EXPECT_THROW(zone_finder_.add(rr_cname_), InMemoryZoneFinder::AddError); } -TEST_F(MemoryZoneTest, addCNAMEThenOther) { - EXPECT_EQ(SUCCESS, zone_.add(rr_cname_)); - EXPECT_THROW(zone_.add(rr_cname_a_), MemoryZone::AddError); +TEST_F(InMemoryZoneFinderTest, addCNAMEThenOther) { + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_cname_)); + EXPECT_THROW(zone_finder_.add(rr_cname_a_), InMemoryZoneFinder::AddError); } -TEST_F(MemoryZoneTest, addOtherThenCNAME) { - EXPECT_EQ(SUCCESS, zone_.add(rr_cname_a_)); - EXPECT_THROW(zone_.add(rr_cname_), MemoryZone::AddError); +TEST_F(InMemoryZoneFinderTest, addOtherThenCNAME) { + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_cname_a_)); + EXPECT_THROW(zone_finder_.add(rr_cname_), InMemoryZoneFinder::AddError); } -TEST_F(MemoryZoneTest, findCNAME) { +TEST_F(InMemoryZoneFinderTest, findCNAME) { // install CNAME RR - EXPECT_EQ(SUCCESS, zone_.add(rr_cname_)); + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_cname_)); // Find A RR of the same. 
Should match the CNAME - findTest(rr_cname_->getName(), RRType::NS(), Zone::CNAME, true, rr_cname_); + findTest(rr_cname_->getName(), RRType::NS(), ZoneFinder::CNAME, true, + rr_cname_); // Find the CNAME itself. Should result in normal SUCCESS - findTest(rr_cname_->getName(), RRType::CNAME(), Zone::SUCCESS, true, + findTest(rr_cname_->getName(), RRType::CNAME(), ZoneFinder::SUCCESS, true, rr_cname_); } -TEST_F(MemoryZoneTest, findCNAMEUnderZoneCut) { +TEST_F(InMemoryZoneFinderTest, findCNAMEUnderZoneCut) { // There's nothing special when we find a CNAME under a zone cut // (with FIND_GLUE_OK). The behavior is different from BIND 9, // so we test this case explicitly. - EXPECT_EQ(SUCCESS, zone_.add(rr_child_ns_)); + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_child_ns_)); RRsetPtr rr_cname_under_cut_(new RRset(Name("cname.child.example.org"), class_, RRType::CNAME(), RRTTL(300))); - EXPECT_EQ(SUCCESS, zone_.add(rr_cname_under_cut_)); + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_cname_under_cut_)); findTest(Name("cname.child.example.org"), RRType::AAAA(), - Zone::CNAME, true, rr_cname_under_cut_, NULL, NULL, - Zone::FIND_GLUE_OK); + ZoneFinder::CNAME, true, rr_cname_under_cut_, NULL, NULL, + ZoneFinder::FIND_GLUE_OK); } // Two DNAMEs at single domain are disallowed by RFC 2672, section 3) // Having a CNAME there is disallowed too, but it is tested by // addOtherThenCNAME and addCNAMEThenOther. -TEST_F(MemoryZoneTest, addMultipleDNAMEs) { +TEST_F(InMemoryZoneFinderTest, addMultipleDNAMEs) { rr_dname_->addRdata(generic::DNAME("target2.example.org.")); - EXPECT_THROW(zone_.add(rr_dname_), MemoryZone::AddError); + EXPECT_THROW(zone_finder_.add(rr_dname_), InMemoryZoneFinder::AddError); } /* * These two tests ensure that we can't have DNAME and NS at the same * node with the exception of the apex of zone (forbidden by RFC 2672) */ -TEST_F(MemoryZoneTest, addDNAMEThenNS) { - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_dname_))); - EXPECT_THROW(zone_.add(rr_dname_ns_), MemoryZone::AddError); +TEST_F(InMemoryZoneFinderTest, addDNAMEThenNS) { + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_dname_))); + EXPECT_THROW(zone_finder_.add(rr_dname_ns_), InMemoryZoneFinder::AddError); } -TEST_F(MemoryZoneTest, addNSThenDNAME) { - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_dname_ns_))); - EXPECT_THROW(zone_.add(rr_dname_), MemoryZone::AddError); +TEST_F(InMemoryZoneFinderTest, addNSThenDNAME) { + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_dname_ns_))); + EXPECT_THROW(zone_finder_.add(rr_dname_), InMemoryZoneFinder::AddError); } // It is allowed to have NS and DNAME at apex -TEST_F(MemoryZoneTest, DNAMEAndNSAtApex) { - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_dname_apex_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_ns_))); +TEST_F(InMemoryZoneFinderTest, DNAMEAndNSAtApex) { + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_dname_apex_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_ns_))); // The NS should be possible to be found, below should be DNAME, not // delegation - findTest(origin_, RRType::NS(), Zone::SUCCESS, true, rr_ns_); - findTest(rr_child_ns_->getName(), RRType::A(), Zone::DNAME, true, + findTest(origin_, RRType::NS(), ZoneFinder::SUCCESS, true, rr_ns_); + findTest(rr_child_ns_->getName(), RRType::A(), ZoneFinder::DNAME, true, rr_dname_apex_); } -TEST_F(MemoryZoneTest, NSAndDNAMEAtApex) { - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_ns_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_dname_apex_))); 
+TEST_F(InMemoryZoneFinderTest, NSAndDNAMEAtApex) { + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_dname_apex_))); } // TODO: Test (and implement) adding data under DNAME. That is forbidden by // 2672 as well. // Search under a DNAME record. It should return the DNAME -TEST_F(MemoryZoneTest, findBelowDNAME) { - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_dname_))); - findTest(Name("below.dname.example.org"), RRType::A(), Zone::DNAME, true, - rr_dname_); +TEST_F(InMemoryZoneFinderTest, findBelowDNAME) { + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_dname_))); + findTest(Name("below.dname.example.org"), RRType::A(), ZoneFinder::DNAME, + true, rr_dname_); } // Search at the domain with DNAME. It should act as DNAME isn't there, DNAME // influences only the data below (see RFC 2672, section 3) -TEST_F(MemoryZoneTest, findAtDNAME) { - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_dname_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_dname_a_))); +TEST_F(InMemoryZoneFinderTest, findAtDNAME) { + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_dname_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_dname_a_))); const Name dname_name(rr_dname_->getName()); - findTest(dname_name, RRType::A(), Zone::SUCCESS, true, rr_dname_a_); - findTest(dname_name, RRType::DNAME(), Zone::SUCCESS, true, rr_dname_); - findTest(dname_name, RRType::TXT(), Zone::NXRRSET, true); + findTest(dname_name, RRType::A(), ZoneFinder::SUCCESS, true, rr_dname_a_); + findTest(dname_name, RRType::DNAME(), ZoneFinder::SUCCESS, true, + rr_dname_); + findTest(dname_name, RRType::TXT(), ZoneFinder::NXRRSET, true); } // Try searching something that is both under NS and DNAME, without and with // GLUE_OK mode (it should stop at the NS and DNAME respectively). -TEST_F(MemoryZoneTest, DNAMEUnderNS) { - zone_.add(rr_child_ns_); - zone_.add(rr_child_dname_); +TEST_F(InMemoryZoneFinderTest, DNAMEUnderNS) { + zone_finder_.add(rr_child_ns_); + zone_finder_.add(rr_child_dname_); Name lowName("below.dname.child.example.org."); - findTest(lowName, RRType::A(), Zone::DELEGATION, true, rr_child_ns_); - findTest(lowName, RRType::A(), Zone::DNAME, true, rr_child_dname_, NULL, - NULL, Zone::FIND_GLUE_OK); + findTest(lowName, RRType::A(), ZoneFinder::DELEGATION, true, rr_child_ns_); + findTest(lowName, RRType::A(), ZoneFinder::DNAME, true, rr_child_dname_, + NULL, NULL, ZoneFinder::FIND_GLUE_OK); } // Test adding child zones and zone cut handling -TEST_F(MemoryZoneTest, delegationNS) { +TEST_F(InMemoryZoneFinderTest, delegationNS) { // add in-zone data - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_ns_))); // install a zone cut - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_child_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_child_ns_))); // below the zone cut - findTest(Name("www.child.example.org"), RRType::A(), Zone::DELEGATION, - true, rr_child_ns_); + findTest(Name("www.child.example.org"), RRType::A(), + ZoneFinder::DELEGATION, true, rr_child_ns_); // at the zone cut - findTest(Name("child.example.org"), RRType::A(), Zone::DELEGATION, + findTest(Name("child.example.org"), RRType::A(), ZoneFinder::DELEGATION, true, rr_child_ns_); - findTest(Name("child.example.org"), RRType::NS(), Zone::DELEGATION, + findTest(Name("child.example.org"), RRType::NS(), ZoneFinder::DELEGATION, true, rr_child_ns_); // finding NS for the apex (origin) node. 
This must not be confused // with delegation due to the existence of an NS RR. - findTest(origin_, RRType::NS(), Zone::SUCCESS, true, rr_ns_); + findTest(origin_, RRType::NS(), ZoneFinder::SUCCESS, true, rr_ns_); // unusual case of "nested delegation": the highest cut should be used. - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_grandchild_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_grandchild_ns_))); findTest(Name("www.grand.child.example.org"), RRType::A(), - Zone::DELEGATION, true, rr_child_ns_); // note: !rr_grandchild_ns_ + // note: !rr_grandchild_ns_ + ZoneFinder::DELEGATION, true, rr_child_ns_); } -TEST_F(MemoryZoneTest, findAny) { - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_a_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_ns_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_child_glue_))); +TEST_F(InMemoryZoneFinderTest, findAny) { + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_a_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_child_glue_))); // origin RRsetList origin_rrsets; - findTest(origin_, RRType::ANY(), Zone::SUCCESS, true, + findTest(origin_, RRType::ANY(), ZoneFinder::SUCCESS, true, ConstRRsetPtr(), &origin_rrsets); EXPECT_EQ(2, origin_rrsets.size()); EXPECT_EQ(rr_a_, origin_rrsets.findRRset(RRType::A(), RRClass::IN())); @@ -527,13 +597,13 @@ TEST_F(MemoryZoneTest, findAny) { // out zone name RRsetList out_rrsets; - findTest(Name("example.com"), RRType::ANY(), Zone::NXDOMAIN, true, + findTest(Name("example.com"), RRType::ANY(), ZoneFinder::NXDOMAIN, true, ConstRRsetPtr(), &out_rrsets); EXPECT_EQ(0, out_rrsets.size()); RRsetList glue_child_rrsets; - findTest(rr_child_glue_->getName(), RRType::ANY(), Zone::SUCCESS, true, - ConstRRsetPtr(), &glue_child_rrsets); + findTest(rr_child_glue_->getName(), RRType::ANY(), ZoneFinder::SUCCESS, + true, ConstRRsetPtr(), &glue_child_rrsets); EXPECT_EQ(rr_child_glue_, glue_child_rrsets.findRRset(RRType::A(), RRClass::IN())); EXPECT_EQ(1, glue_child_rrsets.size()); @@ -542,59 +612,60 @@ TEST_F(MemoryZoneTest, findAny) { // been implemented // add zone cut - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_child_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_child_ns_))); // zone cut RRsetList child_rrsets; - findTest(rr_child_ns_->getName(), RRType::ANY(), Zone::DELEGATION, true, - rr_child_ns_, &child_rrsets); + findTest(rr_child_ns_->getName(), RRType::ANY(), ZoneFinder::DELEGATION, + true, rr_child_ns_, &child_rrsets); EXPECT_EQ(0, child_rrsets.size()); // glue for this zone cut RRsetList new_glue_child_rrsets; - findTest(rr_child_glue_->getName(), RRType::ANY(), Zone::DELEGATION, true, - rr_child_ns_, &new_glue_child_rrsets); + findTest(rr_child_glue_->getName(), RRType::ANY(), ZoneFinder::DELEGATION, + true, rr_child_ns_, &new_glue_child_rrsets); EXPECT_EQ(0, new_glue_child_rrsets.size()); } -TEST_F(MemoryZoneTest, glue) { +TEST_F(InMemoryZoneFinderTest, glue) { // install zone data: // a zone cut - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_child_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_child_ns_))); // glue for this cut - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_child_glue_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_child_glue_))); // a nested zone cut (unusual) - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_grandchild_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_grandchild_ns_))); // glue under the deeper 
zone cut - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_grandchild_glue_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_grandchild_glue_))); // by default glue is hidden due to the zone cut - findTest(rr_child_glue_->getName(), RRType::A(), Zone::DELEGATION, true, - rr_child_ns_); + findTest(rr_child_glue_->getName(), RRType::A(), ZoneFinder::DELEGATION, + true, rr_child_ns_); // If we do it in the "glue OK" mode, we should find the exact match. - findTest(rr_child_glue_->getName(), RRType::A(), Zone::SUCCESS, true, - rr_child_glue_, NULL, NULL, Zone::FIND_GLUE_OK); + findTest(rr_child_glue_->getName(), RRType::A(), ZoneFinder::SUCCESS, true, + rr_child_glue_, NULL, NULL, ZoneFinder::FIND_GLUE_OK); // glue OK + NXRRSET case - findTest(rr_child_glue_->getName(), RRType::AAAA(), Zone::NXRRSET, true, - ConstRRsetPtr(), NULL, NULL, Zone::FIND_GLUE_OK); + findTest(rr_child_glue_->getName(), RRType::AAAA(), ZoneFinder::NXRRSET, + true, ConstRRsetPtr(), NULL, NULL, ZoneFinder::FIND_GLUE_OK); // glue OK + NXDOMAIN case - findTest(Name("www.child.example.org"), RRType::A(), Zone::DELEGATION, - true, rr_child_ns_, NULL, NULL, Zone::FIND_GLUE_OK); + findTest(Name("www.child.example.org"), RRType::A(), + ZoneFinder::DELEGATION, true, rr_child_ns_, NULL, NULL, + ZoneFinder::FIND_GLUE_OK); // nested cut case. The glue should be found. findTest(rr_grandchild_glue_->getName(), RRType::AAAA(), - Zone::SUCCESS, - true, rr_grandchild_glue_, NULL, NULL, Zone::FIND_GLUE_OK); + ZoneFinder::SUCCESS, + true, rr_grandchild_glue_, NULL, NULL, ZoneFinder::FIND_GLUE_OK); // A non-existent name in nested cut. This should result in delegation // at the highest zone cut. findTest(Name("www.grand.child.example.org"), RRType::TXT(), - Zone::DELEGATION, true, rr_child_ns_, NULL, NULL, - Zone::FIND_GLUE_OK); + ZoneFinder::DELEGATION, true, rr_child_ns_, NULL, NULL, + ZoneFinder::FIND_GLUE_OK); } /** @@ -604,28 +675,29 @@ TEST_F(MemoryZoneTest, glue) { * \todo This doesn't do any kind of CNAME and so on. If it isn't * directly there, it just tells it doesn't exist. */ -TEST_F(MemoryZoneTest, find) { +TEST_F(InMemoryZoneFinderTest, find) { // Fill some data inside // Now put all the data we have there. 
It should throw nothing - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_ns_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_ns_a_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_ns_aaaa_))); - EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_.add(rr_a_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_ns_a_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_ns_aaaa_))); + EXPECT_NO_THROW(EXPECT_EQ(SUCCESS, zone_finder_.add(rr_a_))); // These two should be successful - findTest(origin_, RRType::NS(), Zone::SUCCESS, true, rr_ns_); - findTest(rr_ns_a_->getName(), RRType::A(), Zone::SUCCESS, true, rr_ns_a_); + findTest(origin_, RRType::NS(), ZoneFinder::SUCCESS, true, rr_ns_); + findTest(rr_ns_a_->getName(), RRType::A(), ZoneFinder::SUCCESS, true, + rr_ns_a_); // These domain exist but don't have the provided RRType - findTest(origin_, RRType::AAAA(), Zone::NXRRSET); - findTest(rr_ns_a_->getName(), RRType::NS(), Zone::NXRRSET); + findTest(origin_, RRType::AAAA(), ZoneFinder::NXRRSET); + findTest(rr_ns_a_->getName(), RRType::NS(), ZoneFinder::NXRRSET); // These domains don't exist (and one is out of the zone) - findTest(Name("nothere.example.org"), RRType::A(), Zone::NXDOMAIN); - findTest(Name("example.net"), RRType::A(), Zone::NXDOMAIN); + findTest(Name("nothere.example.org"), RRType::A(), ZoneFinder::NXDOMAIN); + findTest(Name("example.net"), RRType::A(), ZoneFinder::NXDOMAIN); } -TEST_F(MemoryZoneTest, emptyNode) { +TEST_F(InMemoryZoneFinderTest, emptyNode) { /* * The backend RBTree for this test should look like as follows: * example.org @@ -645,52 +717,53 @@ TEST_F(MemoryZoneTest, emptyNode) { for (int i = 0; names[i] != NULL; ++i) { ConstRRsetPtr rrset(new RRset(Name(names[i]), class_, RRType::A(), RRTTL(300))); - EXPECT_EQ(SUCCESS, zone_.add(rrset)); + EXPECT_EQ(SUCCESS, zone_finder_.add(rrset)); } // empty node matching, easy case: the node for 'baz' exists with // no data. - findTest(Name("baz.example.org"), RRType::A(), Zone::NXRRSET); + findTest(Name("baz.example.org"), RRType::A(), ZoneFinder::NXRRSET); // empty node matching, a trickier case: the node for 'foo' is part of // "x.foo", which should be considered an empty node. - findTest(Name("foo.example.org"), RRType::A(), Zone::NXRRSET); + findTest(Name("foo.example.org"), RRType::A(), ZoneFinder::NXRRSET); // "org" is contained in "example.org", but it shouldn't be treated as // NXRRSET because it's out of zone. // Note: basically we don't expect such a query to be performed (the common // operation is to identify the best matching zone first then perform // search it), but we shouldn't be confused even in the unexpected case. 
- findTest(Name("org"), RRType::A(), Zone::NXDOMAIN); + findTest(Name("org"), RRType::A(), ZoneFinder::NXDOMAIN); } -TEST_F(MemoryZoneTest, load) { +TEST_F(InMemoryZoneFinderTest, load) { // Put some data inside the zone - EXPECT_NO_THROW(EXPECT_EQ(result::SUCCESS, zone_.add(rr_ns_))); + EXPECT_NO_THROW(EXPECT_EQ(result::SUCCESS, zone_finder_.add(rr_ns_))); // Loading with different origin should fail - EXPECT_THROW(zone_.load(TEST_DATA_DIR "/root.zone"), MasterLoadError); + EXPECT_THROW(zone_finder_.load(TEST_DATA_DIR "/root.zone"), + MasterLoadError); // See the original data is still there, survived the exception - findTest(origin_, RRType::NS(), Zone::SUCCESS, true, rr_ns_); + findTest(origin_, RRType::NS(), ZoneFinder::SUCCESS, true, rr_ns_); // Create correct zone - MemoryZone rootzone(class_, Name(".")); + InMemoryZoneFinder rootzone(class_, Name(".")); // Try putting something inside EXPECT_NO_THROW(EXPECT_EQ(result::SUCCESS, rootzone.add(rr_ns_aaaa_))); // Load the zone. It should overwrite/remove the above RRset EXPECT_NO_THROW(rootzone.load(TEST_DATA_DIR "/root.zone")); // Now see there are some rrsets (we don't look inside, though) - findTest(Name("."), RRType::SOA(), Zone::SUCCESS, false, ConstRRsetPtr(), - NULL, &rootzone); - findTest(Name("."), RRType::NS(), Zone::SUCCESS, false, ConstRRsetPtr(), - NULL, &rootzone); - findTest(Name("a.root-servers.net."), RRType::A(), Zone::SUCCESS, false, - ConstRRsetPtr(), NULL, &rootzone); + findTest(Name("."), RRType::SOA(), ZoneFinder::SUCCESS, false, + ConstRRsetPtr(), NULL, &rootzone); + findTest(Name("."), RRType::NS(), ZoneFinder::SUCCESS, false, + ConstRRsetPtr(), NULL, &rootzone); + findTest(Name("a.root-servers.net."), RRType::A(), ZoneFinder::SUCCESS, + false, ConstRRsetPtr(), NULL, &rootzone); // But this should no longer be here - findTest(rr_ns_a_->getName(), RRType::AAAA(), Zone::NXDOMAIN, true, + findTest(rr_ns_a_->getName(), RRType::AAAA(), ZoneFinder::NXDOMAIN, true, ConstRRsetPtr(), NULL, &rootzone); // Try loading zone that is wrong in a different way - EXPECT_THROW(zone_.load(TEST_DATA_DIR "/duplicate_rrset.zone"), + EXPECT_THROW(zone_finder_.load(TEST_DATA_DIR "/duplicate_rrset.zone"), MasterLoadError); } @@ -698,7 +771,7 @@ TEST_F(MemoryZoneTest, load) { * Test that puts a (simple) wildcard into the zone and checks we can * correctly find the data. */ -TEST_F(MemoryZoneTest, wildcard) { +TEST_F(InMemoryZoneFinderTest, wildcard) { /* * example.org. * | @@ -706,40 +779,41 @@ TEST_F(MemoryZoneTest, wildcard) { * | * * */ - EXPECT_EQ(SUCCESS, zone_.add(rr_wild_)); + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_wild_)); // Search at the parent. The parent will not have the A, but it will // be in the wildcard (so check the wildcard isn't matched at the parent) { SCOPED_TRACE("Search at parrent"); - findTest(Name("wild.example.org"), RRType::A(), Zone::NXRRSET); + findTest(Name("wild.example.org"), RRType::A(), ZoneFinder::NXRRSET); } // Search the original name of wildcard { SCOPED_TRACE("Search directly at *"); - findTest(Name("*.wild.example.org"), RRType::A(), Zone::SUCCESS, true, - rr_wild_); + findTest(Name("*.wild.example.org"), RRType::A(), ZoneFinder::SUCCESS, + true, rr_wild_); } // Search "created" name. 
{ SCOPED_TRACE("Search at created child"); - findTest(Name("a.wild.example.org"), RRType::A(), Zone::SUCCESS, false, - rr_wild_, NULL, NULL, Zone::FIND_DEFAULT, true); + findTest(Name("a.wild.example.org"), RRType::A(), ZoneFinder::SUCCESS, + false, rr_wild_, NULL, NULL, ZoneFinder::FIND_DEFAULT, true); } // Search another created name, this time little bit lower { SCOPED_TRACE("Search at created grand-child"); - findTest(Name("a.b.wild.example.org"), RRType::A(), Zone::SUCCESS, - false, rr_wild_, NULL, NULL, Zone::FIND_DEFAULT, true); + findTest(Name("a.b.wild.example.org"), RRType::A(), + ZoneFinder::SUCCESS, false, rr_wild_, NULL, NULL, + ZoneFinder::FIND_DEFAULT, true); } - EXPECT_EQ(SUCCESS, zone_.add(rr_under_wild_)); + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_under_wild_)); { SCOPED_TRACE("Search under non-wildcard"); findTest(Name("bar.foo.wild.example.org"), RRType::A(), - Zone::NXDOMAIN); + ZoneFinder::NXDOMAIN); } } @@ -750,33 +824,34 @@ TEST_F(MemoryZoneTest, wildcard) { * - When the query is in another zone. That is, delegation cancels * the wildcard defaults." */ -TEST_F(MemoryZoneTest, delegatedWildcard) { - EXPECT_EQ(SUCCESS, zone_.add(rr_child_wild_)); - EXPECT_EQ(SUCCESS, zone_.add(rr_child_ns_)); +TEST_F(InMemoryZoneFinderTest, delegatedWildcard) { + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_child_wild_)); + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_child_ns_)); { SCOPED_TRACE("Looking under delegation point"); - findTest(Name("a.child.example.org"), RRType::A(), Zone::DELEGATION, - true, rr_child_ns_); + findTest(Name("a.child.example.org"), RRType::A(), + ZoneFinder::DELEGATION, true, rr_child_ns_); } { SCOPED_TRACE("Looking under delegation point in GLUE_OK mode"); - findTest(Name("a.child.example.org"), RRType::A(), Zone::DELEGATION, - true, rr_child_ns_, NULL, NULL, Zone::FIND_GLUE_OK); + findTest(Name("a.child.example.org"), RRType::A(), + ZoneFinder::DELEGATION, true, rr_child_ns_, NULL, NULL, + ZoneFinder::FIND_GLUE_OK); } } // Tests combination of wildcard and ANY. -TEST_F(MemoryZoneTest, anyWildcard) { - EXPECT_EQ(SUCCESS, zone_.add(rr_wild_)); +TEST_F(InMemoryZoneFinderTest, anyWildcard) { + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_wild_)); // First try directly the name (normal match) { SCOPED_TRACE("Asking direcly for *"); RRsetList target; - findTest(Name("*.wild.example.org"), RRType::ANY(), Zone::SUCCESS, - true, ConstRRsetPtr(), &target); + findTest(Name("*.wild.example.org"), RRType::ANY(), + ZoneFinder::SUCCESS, true, ConstRRsetPtr(), &target); ASSERT_EQ(1, target.size()); EXPECT_EQ(RRType::A(), (*target.begin())->getType()); EXPECT_EQ(Name("*.wild.example.org"), (*target.begin())->getName()); @@ -786,8 +861,8 @@ TEST_F(MemoryZoneTest, anyWildcard) { { SCOPED_TRACE("Asking in the wild way"); RRsetList target; - findTest(Name("a.wild.example.org"), RRType::ANY(), Zone::SUCCESS, - true, ConstRRsetPtr(), &target); + findTest(Name("a.wild.example.org"), RRType::ANY(), + ZoneFinder::SUCCESS, true, ConstRRsetPtr(), &target); ASSERT_EQ(1, target.size()); EXPECT_EQ(RRType::A(), (*target.begin())->getType()); EXPECT_EQ(Name("a.wild.example.org"), (*target.begin())->getName()); @@ -796,56 +871,56 @@ TEST_F(MemoryZoneTest, anyWildcard) { // Test there's nothing in the wildcard in the middle if we load // wild.*.foo.example.org. -TEST_F(MemoryZoneTest, emptyWildcard) { +TEST_F(InMemoryZoneFinderTest, emptyWildcard) { /* * example.org. 
* foo * * * wild */ - EXPECT_EQ(SUCCESS, zone_.add(rr_emptywild_)); + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_emptywild_)); { SCOPED_TRACE("Asking for the original record under wildcard"); - findTest(Name("wild.*.foo.example.org"), RRType::A(), Zone::SUCCESS, - true, rr_emptywild_); + findTest(Name("wild.*.foo.example.org"), RRType::A(), + ZoneFinder::SUCCESS, true, rr_emptywild_); } { SCOPED_TRACE("Asking for A record"); - findTest(Name("a.foo.example.org"), RRType::A(), Zone::NXRRSET); - findTest(Name("*.foo.example.org"), RRType::A(), Zone::NXRRSET); - findTest(Name("foo.example.org"), RRType::A(), Zone::NXRRSET); + findTest(Name("a.foo.example.org"), RRType::A(), ZoneFinder::NXRRSET); + findTest(Name("*.foo.example.org"), RRType::A(), ZoneFinder::NXRRSET); + findTest(Name("foo.example.org"), RRType::A(), ZoneFinder::NXRRSET); } { SCOPED_TRACE("Asking for ANY record"); RRsetList normalTarget; - findTest(Name("*.foo.example.org"), RRType::ANY(), Zone::NXRRSET, true, - ConstRRsetPtr(), &normalTarget); + findTest(Name("*.foo.example.org"), RRType::ANY(), ZoneFinder::NXRRSET, + true, ConstRRsetPtr(), &normalTarget); EXPECT_EQ(0, normalTarget.size()); RRsetList wildTarget; - findTest(Name("a.foo.example.org"), RRType::ANY(), Zone::NXRRSET, true, - ConstRRsetPtr(), &wildTarget); + findTest(Name("a.foo.example.org"), RRType::ANY(), + ZoneFinder::NXRRSET, true, ConstRRsetPtr(), &wildTarget); EXPECT_EQ(0, wildTarget.size()); } { SCOPED_TRACE("Asking on the non-terminal"); findTest(Name("wild.bar.foo.example.org"), RRType::A(), - Zone::NXRRSET); + ZoneFinder::NXRRSET); } } // Same as emptyWildcard, but with multiple * in the path. -TEST_F(MemoryZoneTest, nestedEmptyWildcard) { - EXPECT_EQ(SUCCESS, zone_.add(rr_nested_emptywild_)); +TEST_F(InMemoryZoneFinderTest, nestedEmptyWildcard) { + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_nested_emptywild_)); { SCOPED_TRACE("Asking for the original record under wildcards"); findTest(Name("wild.*.foo.*.bar.example.org"), RRType::A(), - Zone::SUCCESS, true, rr_nested_emptywild_); + ZoneFinder::SUCCESS, true, rr_nested_emptywild_); } { @@ -860,7 +935,7 @@ TEST_F(MemoryZoneTest, nestedEmptyWildcard) { for (const char** name(names); *name != NULL; ++ name) { SCOPED_TRACE(string("Node ") + *name); - findTest(Name(*name), RRType::A(), Zone::NXRRSET); + findTest(Name(*name), RRType::A(), ZoneFinder::NXRRSET); } } @@ -878,7 +953,7 @@ TEST_F(MemoryZoneTest, nestedEmptyWildcard) { for (const char** name(names); *name != NULL; ++ name) { SCOPED_TRACE(string("Node ") + *name); - findTest(Name(*name), RRType::A(), Zone::NXRRSET); + findTest(Name(*name), RRType::A(), ZoneFinder::NXRRSET); } } @@ -889,7 +964,7 @@ TEST_F(MemoryZoneTest, nestedEmptyWildcard) { SCOPED_TRACE(string("Node ") + *name); RRsetList target; - findTest(Name(*name), RRType::ANY(), Zone::NXRRSET, true, + findTest(Name(*name), RRType::ANY(), ZoneFinder::NXRRSET, true, ConstRRsetPtr(), &target); EXPECT_EQ(0, target.size()); } @@ -899,21 +974,21 @@ TEST_F(MemoryZoneTest, nestedEmptyWildcard) { // We run this part twice from the below test, in two slightly different // situations void -MemoryZoneTest::doCancelWildcardTest() { +InMemoryZoneFinderTest::doCancelWildcardTest() { // These should be canceled { SCOPED_TRACE("Canceled under foo.wild.example.org"); findTest(Name("aaa.foo.wild.example.org"), RRType::A(), - Zone::NXDOMAIN); + ZoneFinder::NXDOMAIN); findTest(Name("zzz.foo.wild.example.org"), RRType::A(), - Zone::NXDOMAIN); + ZoneFinder::NXDOMAIN); } // This is existing, non-wildcard domain, shouldn't 
wildcard at all { SCOPED_TRACE("Existing domain under foo.wild.example.org"); - findTest(Name("bar.foo.wild.example.org"), RRType::A(), Zone::SUCCESS, - true, rr_not_wild_); + findTest(Name("bar.foo.wild.example.org"), RRType::A(), + ZoneFinder::SUCCESS, true, rr_not_wild_); } // These should be caught by the wildcard @@ -930,15 +1005,16 @@ MemoryZoneTest::doCancelWildcardTest() { for (const char** name(names); *name != NULL; ++ name) { SCOPED_TRACE(string("Node ") + *name); - findTest(Name(*name), RRType::A(), Zone::SUCCESS, false, rr_wild_, - NULL, NULL, Zone::FIND_DEFAULT, true); + findTest(Name(*name), RRType::A(), ZoneFinder::SUCCESS, false, + rr_wild_, NULL, NULL, ZoneFinder::FIND_DEFAULT, true); } } // This shouldn't be wildcarded, it's an existing domain { SCOPED_TRACE("The foo.wild.example.org itself"); - findTest(Name("foo.wild.example.org"), RRType::A(), Zone::NXRRSET); + findTest(Name("foo.wild.example.org"), RRType::A(), + ZoneFinder::NXRRSET); } } @@ -952,9 +1028,9 @@ MemoryZoneTest::doCancelWildcardTest() { * Tests few cases "around" the canceled wildcard match, to see something that * shouldn't be canceled isn't. */ -TEST_F(MemoryZoneTest, cancelWildcard) { - EXPECT_EQ(SUCCESS, zone_.add(rr_wild_)); - EXPECT_EQ(SUCCESS, zone_.add(rr_not_wild_)); +TEST_F(InMemoryZoneFinderTest, cancelWildcard) { + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_wild_)); + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_not_wild_)); { SCOPED_TRACE("Runnig with single entry under foo.wild.example.org"); @@ -964,61 +1040,63 @@ TEST_F(MemoryZoneTest, cancelWildcard) { // Try putting another one under foo.wild.... // The result should be the same but it will be done in another way in the // code, because the foo.wild.example.org will exist in the tree. - EXPECT_EQ(SUCCESS, zone_.add(rr_not_wild_another_)); + EXPECT_EQ(SUCCESS, zone_finder_.add(rr_not_wild_another_)); { SCOPED_TRACE("Runnig with two entries under foo.wild.example.org"); doCancelWildcardTest(); } } -TEST_F(MemoryZoneTest, loadBadWildcard) { +TEST_F(InMemoryZoneFinderTest, loadBadWildcard) { // We reject loading the zone if it contains a wildcard name for // NS or DNAME. 
- EXPECT_THROW(zone_.add(rr_nswild_), MemoryZone::AddError); - EXPECT_THROW(zone_.add(rr_dnamewild_), MemoryZone::AddError); + EXPECT_THROW(zone_finder_.add(rr_nswild_), InMemoryZoneFinder::AddError); + EXPECT_THROW(zone_finder_.add(rr_dnamewild_), + InMemoryZoneFinder::AddError); } -TEST_F(MemoryZoneTest, swap) { - // build one zone with some data - MemoryZone zone1(class_, origin_); - EXPECT_EQ(result::SUCCESS, zone1.add(rr_ns_)); - EXPECT_EQ(result::SUCCESS, zone1.add(rr_ns_aaaa_)); +TEST_F(InMemoryZoneFinderTest, swap) { + // build one zone finder with some data + InMemoryZoneFinder finder1(class_, origin_); + EXPECT_EQ(result::SUCCESS, finder1.add(rr_ns_)); + EXPECT_EQ(result::SUCCESS, finder1.add(rr_ns_aaaa_)); - // build another zone of a different RR class with some other data + // build another zone finder of a different RR class with some other data const Name other_origin("version.bind"); ASSERT_NE(origin_, other_origin); // make sure these two are different - MemoryZone zone2(RRClass::CH(), other_origin); + InMemoryZoneFinder finder2(RRClass::CH(), other_origin); EXPECT_EQ(result::SUCCESS, - zone2.add(RRsetPtr(new RRset(Name("version.bind"), + finder2.add(RRsetPtr(new RRset(Name("version.bind"), RRClass::CH(), RRType::TXT(), RRTTL(0))))); - zone1.swap(zone2); - EXPECT_EQ(other_origin, zone1.getOrigin()); - EXPECT_EQ(origin_, zone2.getOrigin()); - EXPECT_EQ(RRClass::CH(), zone1.getClass()); - EXPECT_EQ(RRClass::IN(), zone2.getClass()); + finder1.swap(finder2); + EXPECT_EQ(other_origin, finder1.getOrigin()); + EXPECT_EQ(origin_, finder2.getOrigin()); + EXPECT_EQ(RRClass::CH(), finder1.getClass()); + EXPECT_EQ(RRClass::IN(), finder2.getClass()); // make sure the zone data is swapped, too - findTest(origin_, RRType::NS(), Zone::NXDOMAIN, false, ConstRRsetPtr(), - NULL, &zone1); - findTest(other_origin, RRType::TXT(), Zone::SUCCESS, false, - ConstRRsetPtr(), NULL, &zone1); - findTest(origin_, RRType::NS(), Zone::SUCCESS, false, ConstRRsetPtr(), - NULL, &zone2); - findTest(other_origin, RRType::TXT(), Zone::NXDOMAIN, false, - ConstRRsetPtr(), NULL, &zone2); + findTest(origin_, RRType::NS(), ZoneFinder::NXDOMAIN, false, + ConstRRsetPtr(), NULL, &finder1); + findTest(other_origin, RRType::TXT(), ZoneFinder::SUCCESS, false, + ConstRRsetPtr(), NULL, &finder1); + findTest(origin_, RRType::NS(), ZoneFinder::SUCCESS, false, + ConstRRsetPtr(), NULL, &finder2); + findTest(other_origin, RRType::TXT(), ZoneFinder::NXDOMAIN, false, + ConstRRsetPtr(), NULL, &finder2); } -TEST_F(MemoryZoneTest, getFileName) { +TEST_F(InMemoryZoneFinderTest, getFileName) { // for an empty zone the file name should also be empty. - EXPECT_TRUE(zone_.getFileName().empty()); + EXPECT_TRUE(zone_finder_.getFileName().empty()); // if loading a zone fails the file name shouldn't be set. - EXPECT_THROW(zone_.load(TEST_DATA_DIR "/root.zone"), MasterLoadError); - EXPECT_TRUE(zone_.getFileName().empty()); + EXPECT_THROW(zone_finder_.load(TEST_DATA_DIR "/root.zone"), + MasterLoadError); + EXPECT_TRUE(zone_finder_.getFileName().empty()); // after a successful load, the specified file name should be set - MemoryZone rootzone(class_, Name(".")); + InMemoryZoneFinder rootzone(class_, Name(".")); EXPECT_NO_THROW(rootzone.load(TEST_DATA_DIR "/root.zone")); EXPECT_EQ(TEST_DATA_DIR "/root.zone", rootzone.getFileName()); // overriding load, which will fail @@ -1028,9 +1106,8 @@ TEST_F(MemoryZoneTest, getFileName) { EXPECT_EQ(TEST_DATA_DIR "/root.zone", rootzone.getFileName()); // After swap, file names should also be swapped. 
- zone_.swap(rootzone); - EXPECT_EQ(TEST_DATA_DIR "/root.zone", zone_.getFileName()); + zone_finder_.swap(rootzone); + EXPECT_EQ(TEST_DATA_DIR "/root.zone", zone_finder_.getFileName()); EXPECT_TRUE(rootzone.getFileName().empty()); } - } diff --git a/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc b/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc new file mode 100644 index 0000000000..3974977553 --- /dev/null +++ b/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc @@ -0,0 +1,773 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include + +#include + +#include + +#include + +#include +#include +#include +#include + +using namespace std; +using namespace isc::datasrc; +using boost::shared_ptr; +using isc::data::ConstElementPtr; +using isc::data::Element; +using isc::dns::RRClass; +using isc::dns::Name; + +namespace { +// Some test data +std::string SQLITE_DBFILE_EXAMPLE = TEST_DATA_DIR "/test.sqlite3"; +std::string SQLITE_DBFILE_EXAMPLE2 = TEST_DATA_DIR "/example2.com.sqlite3"; +std::string SQLITE_DBNAME_EXAMPLE2 = "sqlite3_example2.com.sqlite3"; +std::string SQLITE_DBFILE_EXAMPLE_ROOT = TEST_DATA_DIR "/test-root.sqlite3"; +std::string SQLITE_DBNAME_EXAMPLE_ROOT = "sqlite3_test-root.sqlite3"; +std::string SQLITE_DBFILE_BROKENDB = TEST_DATA_DIR "/brokendb.sqlite3"; +std::string SQLITE_DBFILE_MEMORY = ":memory:"; +std::string SQLITE_DBFILE_EXAMPLE_ORG = TEST_DATA_DIR "/example.org.sqlite3"; + +// The following file must be non existent and must be non"creatable"; +// the sqlite3 library will try to create a new DB file if it doesn't exist, +// so to test a failure case the create operation should also fail. +// The "nodir", a non existent directory, is inserted for this purpose. 
+std::string SQLITE_DBFILE_NOTEXIST = TEST_DATA_DIR "/nodir/notexist"; + +// new db file, we don't need this to be a std::string, and given the +// raw calls we use it in a const char* is more convenient +const char* SQLITE_NEW_DBFILE = TEST_DATA_BUILDDIR "/newdb.sqlite3"; + +// Opening works (the content is tested in different tests) +TEST(SQLite3Open, common) { + EXPECT_NO_THROW(SQLite3Accessor accessor(SQLITE_DBFILE_EXAMPLE, + RRClass::IN())); +} + +// The file can't be opened +TEST(SQLite3Open, notExist) { + EXPECT_THROW(SQLite3Accessor accessor(SQLITE_DBFILE_NOTEXIST, + RRClass::IN()), SQLite3Error); +} + +// It rejects broken DB +TEST(SQLite3Open, brokenDB) { + EXPECT_THROW(SQLite3Accessor accessor(SQLITE_DBFILE_BROKENDB, + RRClass::IN()), SQLite3Error); +} + +// Test we can create the schema on the fly +TEST(SQLite3Open, memoryDB) { + EXPECT_NO_THROW(SQLite3Accessor accessor(SQLITE_DBFILE_MEMORY, + RRClass::IN())); +} + +// Test fixture for querying the db +class SQLite3AccessorTest : public ::testing::Test { +public: + SQLite3AccessorTest() { + initAccessor(SQLITE_DBFILE_EXAMPLE, RRClass::IN()); + } + // So it can be re-created with different data + void initAccessor(const std::string& filename, const RRClass& rrclass) { + accessor.reset(new SQLite3Accessor(filename, rrclass)); + } + // The tested accessor + boost::shared_ptr accessor; +}; + +// This zone exists in the data, so it should be found +TEST_F(SQLite3AccessorTest, getZone) { + std::pair result(accessor->getZone("example.com.")); + EXPECT_TRUE(result.first); + EXPECT_EQ(1, result.second); +} + +// But it should find only the zone, nothing below it +TEST_F(SQLite3AccessorTest, subZone) { + EXPECT_FALSE(accessor->getZone("sub.example.com.").first); +} + +// This zone is not there at all +TEST_F(SQLite3AccessorTest, noZone) { + EXPECT_FALSE(accessor->getZone("example.org.").first); +} + +// This zone is there, but in different class +TEST_F(SQLite3AccessorTest, noClass) { + initAccessor(SQLITE_DBFILE_EXAMPLE, RRClass::CH()); + EXPECT_FALSE(accessor->getZone("example.com.").first); +} + +// This tests the iterator context +TEST_F(SQLite3AccessorTest, iterator) { + // Our test zone is conveniently small, but not empty + initAccessor(SQLITE_DBFILE_EXAMPLE_ORG, RRClass::IN()); + + const std::pair zone_info(accessor->getZone("example.org.")); + ASSERT_TRUE(zone_info.first); + + // Get the iterator context + DatabaseAccessor::IteratorContextPtr + context(accessor->getAllRecords(zone_info.second)); + ASSERT_NE(DatabaseAccessor::IteratorContextPtr(), context); + + std::string data[DatabaseAccessor::COLUMN_COUNT]; + // Get and check the first and only record + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("DNAME", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("dname.example.info.", data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("dname.example.org.", data[DatabaseAccessor::NAME_COLUMN]); + + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("DNAME", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("dname2.example.info.", data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("dname2.foo.example.org.", data[DatabaseAccessor::NAME_COLUMN]); + + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("MX", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("10 mail.example.org.", data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("example.org.", 
data[DatabaseAccessor::NAME_COLUMN]); + + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("NS", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("ns1.example.org.", data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("example.org.", data[DatabaseAccessor::NAME_COLUMN]); + + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("NS", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("ns2.example.org.", data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("example.org.", data[DatabaseAccessor::NAME_COLUMN]); + + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("NS", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("ns3.example.org.", data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("example.org.", data[DatabaseAccessor::NAME_COLUMN]); + + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("SOA", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("ns1.example.org. admin.example.org. " + "1234 3600 1800 2419200 7200", + data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("example.org.", data[DatabaseAccessor::NAME_COLUMN]); + + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("A", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("192.0.2.10", data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("mail.example.org.", data[DatabaseAccessor::NAME_COLUMN]); + + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("A", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("192.0.2.101", data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("ns.sub.example.org.", data[DatabaseAccessor::NAME_COLUMN]); + + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("NS", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("ns.sub.example.org.", data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("sub.example.org.", data[DatabaseAccessor::NAME_COLUMN]); + + EXPECT_TRUE(context->getNext(data)); + EXPECT_EQ("A", data[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ("3600", data[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ("192.0.2.1", data[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ("www.example.org.", data[DatabaseAccessor::NAME_COLUMN]); + + // Check there's no other + EXPECT_FALSE(context->getNext(data)); + + // And make sure calling it again won't cause problems. 
+ EXPECT_FALSE(context->getNext(data)); +} + +TEST(SQLite3Open, getDBNameExample2) { + SQLite3Accessor accessor(SQLITE_DBFILE_EXAMPLE2, RRClass::IN()); + EXPECT_EQ(SQLITE_DBNAME_EXAMPLE2, accessor.getDBName()); +} + +TEST(SQLite3Open, getDBNameExampleROOT) { + SQLite3Accessor accessor(SQLITE_DBFILE_EXAMPLE_ROOT, RRClass::IN()); + EXPECT_EQ(SQLITE_DBNAME_EXAMPLE_ROOT, accessor.getDBName()); +} + +// Simple function to cound the number of records for +// any name +void +checkRecordRow(const std::string columns[], + const std::string& field0, + const std::string& field1, + const std::string& field2, + const std::string& field3, + const std::string& field4) +{ + EXPECT_EQ(field0, columns[DatabaseAccessor::TYPE_COLUMN]); + EXPECT_EQ(field1, columns[DatabaseAccessor::TTL_COLUMN]); + EXPECT_EQ(field2, columns[DatabaseAccessor::SIGTYPE_COLUMN]); + EXPECT_EQ(field3, columns[DatabaseAccessor::RDATA_COLUMN]); + EXPECT_EQ(field4, columns[DatabaseAccessor::NAME_COLUMN]); +} + +TEST_F(SQLite3AccessorTest, getRecords) { + const std::pair zone_info(accessor->getZone("example.com.")); + ASSERT_TRUE(zone_info.first); + + const int zone_id = zone_info.second; + ASSERT_EQ(1, zone_id); + + std::string columns[DatabaseAccessor::COLUMN_COUNT]; + + DatabaseAccessor::IteratorContextPtr + context(accessor->getRecords("foo.bar", 1)); + ASSERT_NE(DatabaseAccessor::IteratorContextPtr(), + context); + EXPECT_FALSE(context->getNext(columns)); + checkRecordRow(columns, "", "", "", "", ""); + + // now try some real searches + context = accessor->getRecords("foo.example.com.", zone_id); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "CNAME", "3600", "", + "cnametest.example.org.", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "RRSIG", "3600", "CNAME", + "CNAME 5 3 3600 20100322084538 20100220084538 33495 " + "example.com. FAKEFAKEFAKEFAKE", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "NSEC", "7200", "", + "mail.example.com. CNAME RRSIG NSEC", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "RRSIG", "7200", "NSEC", + "NSEC 5 3 7200 20100322084538 20100220084538 33495 " + "example.com. FAKEFAKEFAKEFAKE", ""); + EXPECT_FALSE(context->getNext(columns)); + + // with no more records, the array should not have been modified + checkRecordRow(columns, "RRSIG", "7200", "NSEC", + "NSEC 5 3 7200 20100322084538 20100220084538 33495 " + "example.com. FAKEFAKEFAKEFAKE", ""); + + context = accessor->getRecords("example.com.", zone_id); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "SOA", "3600", "", + "master.example.com. admin.example.com. " + "1234 3600 1800 2419200 7200", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "RRSIG", "3600", "SOA", + "SOA 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "NS", "1200", "", "dns01.example.com.", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "NS", "3600", "", "dns02.example.com.", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "NS", "1800", "", "dns03.example.com.", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "RRSIG", "3600", "NS", + "NS 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. 
FAKEFAKEFAKEFAKE", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "MX", "3600", "", "10 mail.example.com.", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "MX", "3600", "", + "20 mail.subzone.example.com.", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "RRSIG", "3600", "MX", + "MX 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "NSEC", "7200", "", + "cname-ext.example.com. NS SOA MX RRSIG NSEC DNSKEY", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "RRSIG", "7200", "NSEC", + "NSEC 5 2 7200 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "DNSKEY", "3600", "", + "256 3 5 AwEAAcOUBllYc1hf7ND9uDy+Yz1BF3sI0m4q NGV7W" + "cTD0WEiuV7IjXgHE36fCmS9QsUxSSOV o1I/FMxI2PJVqTYHkX" + "FBS7AzLGsQYMU7UjBZ SotBJ6Imt5pXMu+lEDNy8TOUzG3xm7g" + "0qcbW YF6qCEfvZoBtAqi5Rk7Mlrqs8agxYyMx", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "DNSKEY", "3600", "", + "257 3 5 AwEAAe5WFbxdCPq2jZrZhlMj7oJdff3W7syJ tbvzg" + "62tRx0gkoCDoBI9DPjlOQG0UAbj+xUV 4HQZJStJaZ+fHU5AwV" + "NT+bBZdtV+NujSikhd THb4FYLg2b3Cx9NyJvAVukHp/91HnWu" + "G4T36 CzAFrfPwsHIrBz9BsaIQ21VRkcmj7DswfI/i DGd8j6b" + "qiODyNZYQ+ZrLmF0KIJ2yPN3iO6Zq 23TaOrVTjB7d1a/h31OD" + "fiHAxFHrkY3t3D5J R9Nsl/7fdRmSznwtcSDgLXBoFEYmw6p86" + "Acv RyoYNcL1SXjaKVLG5jyU3UR+LcGZT5t/0xGf oIK/aKwEN" + "rsjcKZZj660b1M=", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "RRSIG", "3600", "DNSKEY", + "DNSKEY 5 2 3600 20100322084538 20100220084538 " + "4456 example.com. FAKEFAKEFAKEFAKE", ""); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "RRSIG", "3600", "DNSKEY", + "DNSKEY 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE", ""); + EXPECT_FALSE(context->getNext(columns)); + // getnextrecord returning false should mean array is not altered + checkRecordRow(columns, "RRSIG", "3600", "DNSKEY", + "DNSKEY 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE", ""); + + // check that another getNext does not cause problems + EXPECT_FALSE(context->getNext(columns)); + + // Try searching for subdomain + // There's foo.bar.example.com in the data + context = accessor->getRecords("bar.example.com.", zone_id, true); + ASSERT_TRUE(context->getNext(columns)); + checkRecordRow(columns, "A", "3600", "", "192.0.2.1", ""); + EXPECT_FALSE(context->getNext(columns)); + // But we shouldn't match mix.example.com here + context = accessor->getRecords("ix.example.com.", zone_id, true); + EXPECT_FALSE(context->getNext(columns)); +} + +TEST_F(SQLite3AccessorTest, findPrevious) { + EXPECT_EQ("dns01.example.com.", + accessor->findPreviousName(1, "com.example.dns02.")); + // A name that doesn't exist + EXPECT_EQ("dns01.example.com.", + accessor->findPreviousName(1, "com.example.dns01x.")); + // Largest name + EXPECT_EQ("www.example.com.", + accessor->findPreviousName(1, "com.example.wwww")); + // Out of zone after the last name + EXPECT_EQ("www.example.com.", + accessor->findPreviousName(1, "org.example.")); + // Case insensitive? 
+ EXPECT_EQ("dns01.example.com.", + accessor->findPreviousName(1, "com.exaMple.DNS02.")); + // A name that doesn't exist + EXPECT_EQ("dns01.example.com.", + accessor->findPreviousName(1, "com.exaMple.DNS01X.")); + // The DB contains foo.bar.example.com., which would be in between + // these two names. However, that one does not have an NSEC record, + // which is how this database recognizes glue data, so it should + // be skipped. + EXPECT_EQ("example.com.", + accessor->findPreviousName(1, "com.example.cname-ext.")); + // Throw when we are before the origin + EXPECT_THROW(accessor->findPreviousName(1, "com.example."), + isc::NotImplemented); + EXPECT_THROW(accessor->findPreviousName(1, "a.example."), + isc::NotImplemented); +} + +TEST_F(SQLite3AccessorTest, findPreviousNoData) { + // This one doesn't hold any NSEC records, so it shouldn't work + // The underlying DB/data don't support DNSSEC, so it's not implemented + // (does it make sense? Or different exception here?) + EXPECT_THROW(accessor->findPreviousName(3, "com.example.sql2.www."), + isc::NotImplemented); +} + +// Test fixture for creating a db that automatically deletes it before start, +// and when done +class SQLite3Create : public ::testing::Test { +public: + SQLite3Create() { + remove(SQLITE_NEW_DBFILE); + } + + ~SQLite3Create() { + remove(SQLITE_NEW_DBFILE); + } +}; + +bool isReadable(const char* filename) { + return (std::ifstream(filename).is_open()); +} + +TEST_F(SQLite3Create, creationtest) { + ASSERT_FALSE(isReadable(SQLITE_NEW_DBFILE)); + // Should simply be created + SQLite3Accessor accessor(SQLITE_NEW_DBFILE, RRClass::IN()); + ASSERT_TRUE(isReadable(SQLITE_NEW_DBFILE)); +} + +TEST_F(SQLite3Create, emptytest) { + ASSERT_FALSE(isReadable(SQLITE_NEW_DBFILE)); + + // open one manualle + sqlite3* db; + ASSERT_EQ(SQLITE_OK, sqlite3_open(SQLITE_NEW_DBFILE, &db)); + + // empty, but not locked, so creating it now should work + SQLite3Accessor accessor2(SQLITE_NEW_DBFILE, RRClass::IN()); + + sqlite3_close(db); + + // should work now that we closed it + SQLite3Accessor accessor3(SQLITE_NEW_DBFILE, RRClass::IN()); +} + +TEST_F(SQLite3Create, lockedtest) { + ASSERT_FALSE(isReadable(SQLITE_NEW_DBFILE)); + + // open one manually + sqlite3* db; + ASSERT_EQ(SQLITE_OK, sqlite3_open(SQLITE_NEW_DBFILE, &db)); + sqlite3_exec(db, "BEGIN EXCLUSIVE TRANSACTION", NULL, NULL, NULL); + + // should not be able to open it + EXPECT_THROW(SQLite3Accessor accessor2(SQLITE_NEW_DBFILE, RRClass::IN()), + SQLite3Error); + + sqlite3_exec(db, "ROLLBACK TRANSACTION", NULL, NULL, NULL); + + // should work now that we closed it + SQLite3Accessor accessor3(SQLITE_NEW_DBFILE, RRClass::IN()); +} + +TEST_F(SQLite3AccessorTest, clone) { + shared_ptr cloned = accessor->clone(); + EXPECT_EQ(accessor->getDBName(), cloned->getDBName()); + + // The cloned accessor should have a separate connection and search + // context, so it should be able to perform search in concurrent with + // the original accessor. 
+ string columns1[DatabaseAccessor::COLUMN_COUNT]; + string columns2[DatabaseAccessor::COLUMN_COUNT]; + + const std::pair zone_info1( + accessor->getZone("example.com.")); + DatabaseAccessor::IteratorContextPtr iterator1 = + accessor->getRecords("foo.example.com.", zone_info1.second); + const std::pair zone_info2( + accessor->getZone("example.com.")); + DatabaseAccessor::IteratorContextPtr iterator2 = + cloned->getRecords("foo.example.com.", zone_info2.second); + + ASSERT_TRUE(iterator1->getNext(columns1)); + checkRecordRow(columns1, "CNAME", "3600", "", "cnametest.example.org.", + ""); + + ASSERT_TRUE(iterator2->getNext(columns2)); + checkRecordRow(columns2, "CNAME", "3600", "", "cnametest.example.org.", + ""); +} + +// +// Commonly used data for update tests +// +const char* const common_expected_data[] = { + // Test record already stored in the tested sqlite3 DB file. + "foo.bar.example.com.", "com.example.bar.foo.", "3600", "A", "", + "192.0.2.1" +}; +const char* const new_data[] = { + // Newly added data commonly used by some of the tests below + "newdata.example.com.", "com.example.newdata.", "3600", "A", "", + "192.0.2.1" +}; +const char* const deleted_data[] = { + // Existing data to be removed commonly used by some of the tests below + "foo.bar.example.com.", "A", "192.0.2.1" +}; + +class SQLite3Update : public SQLite3AccessorTest { +protected: + SQLite3Update() { + // Note: if "installing" the test file fails some of the subsequent + // tests would fail. + const char *install_cmd = INSTALL_PROG " " TEST_DATA_DIR + "/test.sqlite3 " TEST_DATA_BUILDDIR + "/test.sqlite3.copied"; + if (system(install_cmd) != 0) { + // any exception will do, this is failure in test setup, but nice + // to show the command that fails, and shouldn't be caught + isc_throw(isc::Exception, + "Error setting up; command failed: " << install_cmd); + }; + initAccessor(TEST_DATA_BUILDDIR "/test.sqlite3.copied", RRClass::IN()); + zone_id = accessor->getZone("example.com.").second; + another_accessor.reset(new SQLite3Accessor( + TEST_DATA_BUILDDIR "/test.sqlite3.copied", + RRClass::IN())); + expected_stored.push_back(common_expected_data); + } + + int zone_id; + std::string get_columns[DatabaseAccessor::COLUMN_COUNT]; + std::string add_columns[DatabaseAccessor::ADD_COLUMN_COUNT]; + std::string del_params[DatabaseAccessor::DEL_PARAM_COUNT]; + + vector expected_stored; // placeholder for checkRecords + vector empty_stored; // indicate no corresponding data + + // Another accessor, emulating one running on a different process/thread + shared_ptr another_accessor; + DatabaseAccessor::IteratorContextPtr iterator; +}; + +void +checkRecords(SQLite3Accessor& accessor, int zone_id, const std::string& name, + vector expected_rows) +{ + DatabaseAccessor::IteratorContextPtr iterator = + accessor.getRecords(name, zone_id); + std::string columns[DatabaseAccessor::COLUMN_COUNT]; + vector::const_iterator it = expected_rows.begin(); + while (iterator->getNext(columns)) { + ASSERT_TRUE(it != expected_rows.end()); + checkRecordRow(columns, (*it)[3], (*it)[2], (*it)[4], (*it)[5], ""); + ++it; + } + EXPECT_TRUE(it == expected_rows.end()); +} + +TEST_F(SQLite3Update, emptyUpdate) { + // If we do nothing between start and commit, the zone content + // should be intact. 
+ + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); + zone_id = accessor->startUpdateZone("example.com.", false).second; + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); + accessor->commitUpdateZone(); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); +} + +TEST_F(SQLite3Update, flushZone) { + // With 'replace' being true startUpdateZone() will flush the existing + // zone content. + + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); + zone_id = accessor->startUpdateZone("example.com.", true).second; + checkRecords(*accessor, zone_id, "foo.bar.example.com.", empty_stored); + accessor->commitUpdateZone(); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", empty_stored); +} + +TEST_F(SQLite3Update, readWhileUpdate) { + zone_id = accessor->startUpdateZone("example.com.", true).second; + checkRecords(*accessor, zone_id, "foo.bar.example.com.", empty_stored); + + // Until commit is done, the other accessor should see the old data + checkRecords(*another_accessor, zone_id, "foo.bar.example.com.", + expected_stored); + + // Once the changes are committed, the other accessor will see the new + // data. + accessor->commitUpdateZone(); + checkRecords(*another_accessor, zone_id, "foo.bar.example.com.", + empty_stored); +} + +TEST_F(SQLite3Update, rollback) { + zone_id = accessor->startUpdateZone("example.com.", true).second; + checkRecords(*accessor, zone_id, "foo.bar.example.com.", empty_stored); + + // Rollback will revert the change made by startUpdateZone(, true). + accessor->rollbackUpdateZone(); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); +} + +TEST_F(SQLite3Update, rollbackFailure) { + // This test emulates a rare scenario of making rollback attempt fail. + // The iterator is paused in the middle of getting records, which prevents + // the rollback operation at the end of the test. + + string columns[DatabaseAccessor::COLUMN_COUNT]; + iterator = accessor->getRecords("example.com.", zone_id); + EXPECT_TRUE(iterator->getNext(columns)); + + accessor->startUpdateZone("example.com.", true); + EXPECT_THROW(accessor->rollbackUpdateZone(), DataSourceError); +} + +TEST_F(SQLite3Update, commitConflict) { + // Start reading the DB by another accessor. We should stop at a single + // call to getNextRecord() to keep holding the lock. + iterator = another_accessor->getRecords("foo.example.com.", zone_id); + EXPECT_TRUE(iterator->getNext(get_columns)); + + // Due to getNextRecord() above, the other accessor holds a DB lock, + // which will prevent commit. + zone_id = accessor->startUpdateZone("example.com.", true).second; + checkRecords(*accessor, zone_id, "foo.bar.example.com.", empty_stored); + EXPECT_THROW(accessor->commitUpdateZone(), DataSourceError); + accessor->rollbackUpdateZone(); // rollback should still succeed + + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); +} + +TEST_F(SQLite3Update, updateConflict) { + // Similar to the previous case, but this is a conflict with another + // update attempt. Note that these two accessors modify disjoint sets + // of data; sqlite3 only has a coarse-grained lock so we cannot allow + // these updates to run concurrently. 
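// For orientation, the update interface exercised by these tests is,
// roughly, used as follows. This is an illustrative sketch only (it is
// not taken from the patch above); "acc" and "cols" are made-up names
// and the record data mirrors new_data from the fixture:
//
//   SQLite3Accessor acc(TEST_DATA_BUILDDIR "/test.sqlite3.copied",
//                       RRClass::IN());
//   acc.startUpdateZone("example.com.", false);   // false: keep old data
//   std::string cols[DatabaseAccessor::ADD_COLUMN_COUNT] = {
//       "newdata.example.com.", "com.example.newdata.", "3600", "A", "",
//       "192.0.2.1" };
//   acc.addRecordToZone(cols);
//   acc.commitUpdateZone();        // or rollbackUpdateZone() on failure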
+ EXPECT_TRUE(another_accessor->startUpdateZone("sql1.example.com.", + true).first); + EXPECT_THROW(accessor->startUpdateZone("example.com.", true), + DataSourceError); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); + + // Once we rollback the other attempt of change, we should be able to + // start and commit the transaction using the main accessor. + another_accessor->rollbackUpdateZone(); + accessor->startUpdateZone("example.com.", true); + accessor->commitUpdateZone(); +} + +TEST_F(SQLite3Update, duplicateUpdate) { + accessor->startUpdateZone("example.com.", false); + EXPECT_THROW(accessor->startUpdateZone("example.com.", false), + DataSourceError); +} + +TEST_F(SQLite3Update, commitWithoutTransaction) { + EXPECT_THROW(accessor->commitUpdateZone(), DataSourceError); +} + +TEST_F(SQLite3Update, rollbackWithoutTransaction) { + EXPECT_THROW(accessor->rollbackUpdateZone(), DataSourceError); +} + +TEST_F(SQLite3Update, addRecord) { + // Before update, there should be no record for this name + checkRecords(*accessor, zone_id, "newdata.example.com.", empty_stored); + + zone_id = accessor->startUpdateZone("example.com.", false).second; + copy(new_data, new_data + DatabaseAccessor::ADD_COLUMN_COUNT, + add_columns); + accessor->addRecordToZone(add_columns); + + expected_stored.clear(); + expected_stored.push_back(new_data); + checkRecords(*accessor, zone_id, "newdata.example.com.", expected_stored); + + // Commit the change, and confirm the new data is still there. + accessor->commitUpdateZone(); + checkRecords(*accessor, zone_id, "newdata.example.com.", expected_stored); +} + +TEST_F(SQLite3Update, addThenRollback) { + zone_id = accessor->startUpdateZone("example.com.", false).second; + copy(new_data, new_data + DatabaseAccessor::ADD_COLUMN_COUNT, + add_columns); + accessor->addRecordToZone(add_columns); + + expected_stored.clear(); + expected_stored.push_back(new_data); + checkRecords(*accessor, zone_id, "newdata.example.com.", expected_stored); + + accessor->rollbackUpdateZone(); + checkRecords(*accessor, zone_id, "newdata.example.com.", empty_stored); +} + +TEST_F(SQLite3Update, duplicateAdd) { + const char* const dup_data[] = { + "foo.bar.example.com.", "com.example.bar.foo.", "3600", "A", "", + "192.0.2.1" + }; + expected_stored.clear(); + expected_stored.push_back(dup_data); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); + + // Adding exactly the same data. As this backend is "dumb", another + // row of the same content will be inserted. + copy(dup_data, dup_data + DatabaseAccessor::ADD_COLUMN_COUNT, + add_columns); + zone_id = accessor->startUpdateZone("example.com.", false).second; + accessor->addRecordToZone(add_columns); + expected_stored.push_back(dup_data); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); +} + +TEST_F(SQLite3Update, invalidAdd) { + // An attempt of add before an explicit start of transaction + EXPECT_THROW(accessor->addRecordToZone(add_columns), DataSourceError); +} + +TEST_F(SQLite3Update, deleteRecord) { + zone_id = accessor->startUpdateZone("example.com.", false).second; + + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); + + copy(deleted_data, deleted_data + DatabaseAccessor::DEL_PARAM_COUNT, + del_params); + accessor->deleteRecordInZone(del_params); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", empty_stored); + + // Commit the change, and confirm the deleted data still isn't there. 
+ accessor->commitUpdateZone(); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", empty_stored); +} + +TEST_F(SQLite3Update, deleteThenRollback) { + zone_id = accessor->startUpdateZone("example.com.", false).second; + + copy(deleted_data, deleted_data + DatabaseAccessor::DEL_PARAM_COUNT, + del_params); + accessor->deleteRecordInZone(del_params); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", empty_stored); + + // Rollback the change, and confirm the data still exists. + accessor->rollbackUpdateZone(); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); +} + +TEST_F(SQLite3Update, deleteNonexistent) { + zone_id = accessor->startUpdateZone("example.com.", false).second; + copy(deleted_data, deleted_data + DatabaseAccessor::DEL_PARAM_COUNT, + del_params); + + // Replace the name with a non existent one, then try to delete it. + // nothing should happen. + del_params[DatabaseAccessor::DEL_NAME] = "no-such-name.example.com."; + checkRecords(*accessor, zone_id, "no-such-name.example.com.", + empty_stored); + accessor->deleteRecordInZone(del_params); + checkRecords(*accessor, zone_id, "no-such-name.example.com.", + empty_stored); + + // Name exists but the RR type is different. Delete attempt shouldn't + // delete only by name. + copy(deleted_data, deleted_data + DatabaseAccessor::DEL_PARAM_COUNT, + del_params); + del_params[DatabaseAccessor::DEL_TYPE] = "AAAA"; + accessor->deleteRecordInZone(del_params); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); + + // Similar to the previous case, but RDATA is different. + copy(deleted_data, deleted_data + DatabaseAccessor::DEL_PARAM_COUNT, + del_params); + del_params[DatabaseAccessor::DEL_RDATA] = "192.0.2.2"; + accessor->deleteRecordInZone(del_params); + checkRecords(*accessor, zone_id, "foo.bar.example.com.", expected_stored); +} + +TEST_F(SQLite3Update, invalidDelete) { + // An attempt of delete before an explicit start of transaction + EXPECT_THROW(accessor->deleteRecordInZone(del_params), DataSourceError); +} +} // end anonymous namespace diff --git a/src/lib/datasrc/tests/static_unittest.cc b/src/lib/datasrc/tests/static_unittest.cc index a11e889f1e..4c9fe42edb 100644 --- a/src/lib/datasrc/tests/static_unittest.cc +++ b/src/lib/datasrc/tests/static_unittest.cc @@ -53,6 +53,7 @@ protected: // NOTE: in addition, the order of the following items matter. 
authors_data.push_back("Chen Zhengzhang"); + authors_data.push_back("Dmitriy Volodin"); authors_data.push_back("Evan Hunt"); authors_data.push_back("Haidong Wang"); authors_data.push_back("Han Feng"); diff --git a/src/lib/datasrc/tests/testdata/Makefile.am b/src/lib/datasrc/tests/testdata/Makefile.am new file mode 100644 index 0000000000..64ae9559ae --- /dev/null +++ b/src/lib/datasrc/tests/testdata/Makefile.am @@ -0,0 +1,6 @@ +CLEANFILES = *.copied +BUILT_SOURCES = rwtest.sqlite3.copied + +# We use install-sh with the -m option to make sure it's writable +rwtest.sqlite3.copied: $(srcdir)/rwtest.sqlite3 + $(top_srcdir)/install-sh -m 644 $(srcdir)/rwtest.sqlite3 $@ diff --git a/src/lib/datasrc/tests/testdata/rwtest.sqlite3 b/src/lib/datasrc/tests/testdata/rwtest.sqlite3 new file mode 100644 index 0000000000..ce95a1d7fe Binary files /dev/null and b/src/lib/datasrc/tests/testdata/rwtest.sqlite3 differ diff --git a/src/lib/datasrc/tests/zonetable_unittest.cc b/src/lib/datasrc/tests/zonetable_unittest.cc index a117176ad2..fa74c0eb8c 100644 --- a/src/lib/datasrc/tests/zonetable_unittest.cc +++ b/src/lib/datasrc/tests/zonetable_unittest.cc @@ -18,7 +18,7 @@ #include #include -// We use MemoryZone to put something into the table +// We use InMemoryZone to put something into the table #include #include @@ -28,31 +28,32 @@ using namespace isc::datasrc; namespace { TEST(ZoneTest, init) { - MemoryZone zone(RRClass::IN(), Name("example.com")); + InMemoryZoneFinder zone(RRClass::IN(), Name("example.com")); EXPECT_EQ(Name("example.com"), zone.getOrigin()); EXPECT_EQ(RRClass::IN(), zone.getClass()); - MemoryZone ch_zone(RRClass::CH(), Name("example")); + InMemoryZoneFinder ch_zone(RRClass::CH(), Name("example")); EXPECT_EQ(Name("example"), ch_zone.getOrigin()); EXPECT_EQ(RRClass::CH(), ch_zone.getClass()); } TEST(ZoneTest, find) { - MemoryZone zone(RRClass::IN(), Name("example.com")); - EXPECT_EQ(Zone::NXDOMAIN, + InMemoryZoneFinder zone(RRClass::IN(), Name("example.com")); + EXPECT_EQ(ZoneFinder::NXDOMAIN, zone.find(Name("www.example.com"), RRType::A()).code); } class ZoneTableTest : public ::testing::Test { protected: - ZoneTableTest() : zone1(new MemoryZone(RRClass::IN(), - Name("example.com"))), - zone2(new MemoryZone(RRClass::IN(), - Name("example.net"))), - zone3(new MemoryZone(RRClass::IN(), Name("example"))) + ZoneTableTest() : zone1(new InMemoryZoneFinder(RRClass::IN(), + Name("example.com"))), + zone2(new InMemoryZoneFinder(RRClass::IN(), + Name("example.net"))), + zone3(new InMemoryZoneFinder(RRClass::IN(), + Name("example"))) {} ZoneTable zone_table; - ZonePtr zone1, zone2, zone3; + ZoneFinderPtr zone1, zone2, zone3; }; TEST_F(ZoneTableTest, addZone) { @@ -60,7 +61,8 @@ TEST_F(ZoneTableTest, addZone) { EXPECT_EQ(result::EXIST, zone_table.addZone(zone1)); // names are compared in a case insensitive manner. EXPECT_EQ(result::EXIST, zone_table.addZone( - ZonePtr(new MemoryZone(RRClass::IN(), Name("EXAMPLE.COM"))))); + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::IN(), + Name("EXAMPLE.COM"))))); EXPECT_EQ(result::SUCCESS, zone_table.addZone(zone2)); EXPECT_EQ(result::SUCCESS, zone_table.addZone(zone3)); @@ -68,11 +70,11 @@ TEST_F(ZoneTableTest, addZone) { // Zone table is indexed only by name. Duplicate origin name with // different zone class isn't allowed. 
EXPECT_EQ(result::EXIST, zone_table.addZone( - ZonePtr(new MemoryZone(RRClass::CH(), - Name("example.com"))))); + ZoneFinderPtr(new InMemoryZoneFinder(RRClass::CH(), + Name("example.com"))))); /// Bogus zone (NULL) - EXPECT_THROW(zone_table.addZone(ZonePtr()), isc::InvalidParameter); + EXPECT_THROW(zone_table.addZone(ZoneFinderPtr()), isc::InvalidParameter); } TEST_F(ZoneTableTest, DISABLED_removeZone) { @@ -95,7 +97,7 @@ TEST_F(ZoneTableTest, findZone) { EXPECT_EQ(result::NOTFOUND, zone_table.findZone(Name("example.org")).code); - EXPECT_EQ(ConstZonePtr(), + EXPECT_EQ(ConstZoneFinderPtr(), zone_table.findZone(Name("example.org")).zone); // there's no exact match. the result should be the longest match, @@ -107,7 +109,7 @@ TEST_F(ZoneTableTest, findZone) { // make sure the partial match is indeed the longest match by adding // a zone with a shorter origin and query again. - ZonePtr zone_com(new MemoryZone(RRClass::IN(), Name("com"))); + ZoneFinderPtr zone_com(new InMemoryZoneFinder(RRClass::IN(), Name("com"))); EXPECT_EQ(result::SUCCESS, zone_table.addZone(zone_com)); EXPECT_EQ(Name("example.com"), zone_table.findZone(Name("www.example.com")).zone->getOrigin()); diff --git a/src/lib/datasrc/zone.h b/src/lib/datasrc/zone.h index 1252c94f8b..c83b14b779 100644 --- a/src/lib/datasrc/zone.h +++ b/src/lib/datasrc/zone.h @@ -15,59 +15,89 @@ #ifndef __ZONE_H #define __ZONE_H 1 -#include +#include #include +#include + namespace isc { namespace datasrc { -/// \brief The base class for a single authoritative zone +/// \brief The base class to search a zone for RRsets /// -/// The \c Zone class is an abstract base class for representing -/// a DNS zone as part of data source. +/// The \c ZoneFinder class is an abstract base class for representing +/// an object that performs DNS lookups in a specific zone accessible via +/// a data source. In general, different types of data sources (in-memory, +/// database-based, etc) define their own derived classes of \c ZoneFinder, +/// implementing ways to retrieve the required data through the common +/// interfaces declared in the base class. Each concrete \c ZoneFinder +/// object is therefore (conceptually) associated with a specific zone +/// of one specific data source instance. /// -/// At the moment this is provided mainly for making the \c ZoneTable class -/// and the authoritative query logic testable, and only provides a minimal -/// set of features. -/// This is why this class is defined in the same header file, but it may -/// have to move to a separate header file when we understand what is -/// necessary for this class for actual operation. +/// The origin name and the RR class of the associated zone are available +/// via the \c getOrigin() and \c getClass() methods, respectively. /// -/// The idea is to provide a specific derived zone class for each data -/// source, beginning with in memory one. At that point the derived classes -/// will have more specific features. For example, they will maintain -/// information about the location of a zone file, whether it's loaded in -/// memory, etc. +/// The most important method of this class is \c find(), which performs +/// the lookup for a given domain and type. See the description of the +/// method for details. 
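+///
+/// A hypothetical usage sketch (the concrete finder type and how zone data
+/// is loaded into it are assumptions for illustration, not part of this
+/// interface): a caller would typically obtain a concrete finder, such as
+/// the in-memory one, and branch on the result code of \c find():
+/// \code
+/// InMemoryZoneFinder finder(RRClass::IN(), Name("example.com"));
+/// // ... zone data is assumed to have been loaded into the finder ...
+/// ZoneFinder::FindResult result =
+///     finder.find(Name("www.example.com"), RRType::A());
+/// if (result.code == ZoneFinder::SUCCESS) {
+///     // an exact match was found for the name and type
+/// } else if (result.code == ZoneFinder::DELEGATION) {
+///     // the search encountered a zone cut
+/// }
+/// \endcode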
/// -/// It's not yet clear how the derived zone classes work with various other -/// data sources when we integrate these components, but one possibility is -/// something like this: -/// - If the underlying database such as some variant of SQL doesn't have an -/// explicit representation of zones (as part of public interface), we can -/// probably use a "default" zone class that simply encapsulates the -/// corresponding data source and calls a common "find" like method. -/// - Some data source may want to specialize it by inheritance as an -/// optimization. For example, in the current schema design of the sqlite3 -/// data source, its (derived) zone class would contain the information of -/// the "zone ID". -/// -/// Note: Unlike some other abstract base classes we don't name the -/// class beginning with "Abstract". This is because we want to have -/// commonly used definitions such as \c Result and \c ZonePtr, and we want -/// to make them look more intuitive. -class Zone { +/// \note It's not clear whether we should request that a zone finder form a +/// "transaction", that is, whether to ensure the finder is not susceptible +/// to changes made by someone else than the creator of the finder. If we +/// don't request that, for example, two different lookup results for the +/// same name and type can be different if other threads or programs make +/// updates to the zone between the lookups. We should revisit this point +/// as we gain more experiences. +class ZoneFinder { public: /// Result codes of the \c find() method. /// /// Note: the codes are tentative. We may need more, or we may find /// some of them unnecessary as we implement more details. + /// + /// Some are synonyms of others in terms of RCODE returned to user. + /// But they help the logic to decide if it should ask for a NSEC + /// that covers something or not (for example, in case of NXRRSET, + /// the directly returned NSEC is sufficient, but with wildcard one, + /// we need to add one proving there's no exact match and this is + /// actually the best wildcard we have). Data sources that don't + /// support DNSSEC don't need to distinguish them. + /// + /// In case of NXRRSET related results, the returned NSEC record + /// belongs to the domain which would provide the result if it + /// contained the correct type (in case of NXRRSET, it is the queried + /// domain, in case of WILDCARD_NXRRSET, it is the wildcard domain + /// that matched the query name). In case of an empty nonterminal, + /// an NSEC is provided for the interval where the empty nonterminal + /// lives. The end of the interval is the subdomain causing existence + /// of the empty nonterminal (if there's sub.x.example.com, and no record + /// in x.example.com, then x.example.com exists implicitly - is the empty + /// nonterminal and sub.x.example.com is the subdomain causing it). + /// + /// Examples: if zone "example.com" has the following record: + /// \code + /// a.b.example.com. NSEC c.example.com. + /// \endcode + /// a call to \c find() for "b.example.com." will result in NXRRSET, + /// and if the FIND_DNSSEC option is set this NSEC will be returned. + /// Likewise, if zone "example.org" has the following record, + /// \code + /// x.*.example.org. NSEC a.example.org. + /// \endcode + /// a call to \c find() for "y.example.org" will result in + /// WILDCARD_NXRRSET (*.example.org is an empty nonterminal wildcard node), + /// and if the FIND_DNSSEC option is set this NSEC will be returned. 
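+    ///
+    /// A hypothetical sketch of how a DNSSEC-aware caller might branch on
+    /// these codes (qname, qtype and finder are assumed to exist; this is
+    /// illustrative only, not part of the interface):
+    /// \code
+    /// ZoneFinder::FindResult result =
+    ///     finder.find(qname, qtype, NULL, ZoneFinder::FIND_DNSSEC);
+    /// if (result.code == ZoneFinder::NXRRSET) {
+    ///     // the NSEC returned with the result is sufficient by itself
+    /// } else if (result.code == ZoneFinder::WILDCARD_NXRRSET) {
+    ///     // an additional NSEC proving there is no exact match is needed
+    /// }
+    /// \endcode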
+ /// + /// In case of NXDOMAIN, the returned NSEC covers the queried domain. enum Result { SUCCESS, ///< An exact match is found. DELEGATION, ///< The search encounters a zone cut. NXDOMAIN, ///< There is no domain name that matches the search name NXRRSET, ///< There is a matching name but no RRset of the search type CNAME, ///< The search encounters and returns a CNAME RR - DNAME ///< The search encounters and returns a DNAME RR + DNAME, ///< The search encounters and returns a DNAME RR + WILDCARD, ///< Succes by wildcard match, for DNSSEC + WILDCARD_NXRRSET ///< NXRRSET on wildcard, for DNSSEC }; /// A helper structure to represent the search result of \c find(). @@ -107,7 +137,11 @@ public: /// performed on these values to express compound options. enum FindOptions { FIND_DEFAULT = 0, ///< The default options - FIND_GLUE_OK = 1 ///< Allow search under a zone cut + FIND_GLUE_OK = 1, ///< Allow search under a zone cut + FIND_DNSSEC = 2 ///< Require DNSSEC data in the answer + ///< (RRSIG, NSEC, etc.). The implementation + ///< is allowed to include it even if it is + ///< not set. }; /// @@ -119,10 +153,10 @@ protected: /// /// This is intentionally defined as \c protected as this base class should /// never be instantiated (except as part of a derived class). - Zone() {} + ZoneFinder() {} public: /// The destructor. - virtual ~Zone() {} + virtual ~ZoneFinder() {} //@} /// @@ -131,14 +165,14 @@ public: /// These methods should never throw an exception. //@{ /// Return the origin name of the zone. - virtual const isc::dns::Name& getOrigin() const = 0; + virtual isc::dns::Name getOrigin() const = 0; /// Return the RR class of the zone. - virtual const isc::dns::RRClass& getClass() const = 0; + virtual isc::dns::RRClass getClass() const = 0; //@} /// - /// \name Search Method + /// \name Search Methods /// //@{ /// Search the zone for a given pair of domain name and RR type. @@ -170,8 +204,8 @@ public: /// We should revisit the interface before we heavily rely on it. /// /// The \c options parameter specifies customized behavior of the search. - /// Their semantics is as follows: - /// - \c GLUE_OK Allow search under a zone cut. By default the search + /// Their semantics is as follows (they are or bit-field): + /// - \c FIND_GLUE_OK Allow search under a zone cut. By default the search /// will stop once it encounters a zone cut. If this option is specified /// it remembers information about the highest zone cut and continues /// the search until it finds an exact match for the given name or it @@ -179,6 +213,9 @@ public: /// RRsets for that name are searched just like the normal case; /// otherwise, if the search has encountered a zone cut, \c DELEGATION /// with the information of the highest zone cut will be returned. + /// - \c FIND_DNSSEC Request that DNSSEC data (like NSEC, RRSIGs) are + /// returned with the answer. It is allowed for the data source to + /// include them even when not requested. /// /// A derived version of this method may involve internal resource /// allocation, especially for constructing the resulting RRset, and may @@ -197,19 +234,274 @@ public: const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, const FindOptions options - = FIND_DEFAULT) const = 0; + = FIND_DEFAULT) = 0; + + /// \brief Get previous name in the zone + /// + /// Gets the previous name in the DNSSEC order. This can be used + /// to find the correct NSEC records for proving nonexistence + /// of domains. 
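+    ///
+    /// A hypothetical sketch (the finder and a signed zone are assumed;
+    /// this is illustrative only): to prove nonexistence of
+    /// "nx.example.com", a caller could look up the covering NSEC via the
+    /// name returned by this method:
+    /// \code
+    /// const Name prev = finder.findPreviousName(Name("nx.example.com"));
+    /// // "prev" owns the NSEC covering the nonexistent name
+    /// finder.find(prev, RRType::NSEC(), NULL, ZoneFinder::FIND_DNSSEC);
+    /// \endcode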
+ /// + /// The concrete implementation might throw anything it thinks appropriate, + /// however it is recommended to stick to the ones listed here. The user + /// of this method should be able to handle any exceptions. + /// + /// This method does not include under-zone-cut data (glue data). + /// + /// \param query The name for which one we look for a previous one. The + /// queried name doesn't have to exist in the zone. + /// \return The preceding name + /// + /// \throw NotImplemented in case the data source backend doesn't support + /// DNSSEC or there is no previous in the zone (NSEC records might be + /// missing in the DB, the queried name is less or equal to the apex). + /// \throw DataSourceError for low-level or internal datasource errors + /// (like broken connection to database, wrong data living there). + /// \throw std::bad_alloc For allocation errors. + virtual isc::dns::Name findPreviousName(const isc::dns::Name& query) + const = 0; //@} }; -/// \brief A pointer-like type pointing to a \c Zone object. -typedef boost::shared_ptr ZonePtr; - -/// \brief A pointer-like type pointing to a \c Zone object. -typedef boost::shared_ptr ConstZonePtr; - -} +/// \brief Operator to combine FindOptions +/// +/// We would need to manually static-cast the options if we put or +/// between them, which is undesired with bit-flag options. Therefore +/// we hide the cast here, which is the simplest solution and it still +/// provides reasonable level of type safety. +inline ZoneFinder::FindOptions operator |(ZoneFinder::FindOptions a, + ZoneFinder::FindOptions b) +{ + return (static_cast(static_cast(a) | + static_cast(b))); } +/// \brief A pointer-like type pointing to a \c ZoneFinder object. +typedef boost::shared_ptr ZoneFinderPtr; + +/// \brief A pointer-like type pointing to a \c ZoneFinder object. +typedef boost::shared_ptr ConstZoneFinderPtr; + +/// The base class to make updates to a single zone. +/// +/// On construction, each derived class object will start a "transaction" +/// for making updates to a specific zone (this means a constructor of +/// a derived class would normally take parameters to identify the zone +/// to be updated). The underlying realization of a "transaction" will differ +/// for different derived classes; if it uses a general purpose database +/// as a backend, it will involve performing some form of "begin transaction" +/// statement for the database. +/// +/// Updates (adding or deleting RRs) are made via \c addRRset() and +/// \c deleteRRset() methods. Until the \c commit() method is called the +/// changes are local to the updater object. For example, they won't be +/// visible via a \c ZoneFinder object except the one returned by the +/// updater's own \c getFinder() method. The \c commit() completes the +/// transaction and makes the changes visible to others. +/// +/// This class does not provide an explicit "rollback" interface. If +/// something wrong or unexpected happens during the updates and the +/// caller wants to cancel the intermediate updates, the caller should +/// simply destruct the updater object without calling \c commit(). +/// The destructor is supposed to perform the "rollback" operation, +/// depending on the internal details of the derived class. +/// +/// \note This initial implementation provides a quite simple interface of +/// adding and deleting RRs (see the description of the related methods). +/// It may be revisited as we gain more experiences. +class ZoneUpdater { +protected: + /// The default constructor. 
+ /// + /// This is intentionally defined as protected to ensure that this base + /// class is never instantiated directly. + ZoneUpdater() {} + +public: + /// The destructor + /// + /// Each derived class implementation must ensure that if \c commit() + /// has not been performed by the time of the call to it, then it + /// "rollbacks" the updates made via the updater so far. + virtual ~ZoneUpdater() {} + + /// Return a finder for the zone being updated. + /// + /// The returned finder provides the functionalities of \c ZoneFinder + /// for the zone as updates are made via the updater. That is, before + /// making any update, the finder will be able to find all RRsets that + /// exist in the zone at the time the updater is created. If RRsets + /// are added or deleted via \c addRRset() or \c deleteRRset(), + /// this finder will find the added ones or miss the deleted ones + /// respectively. + /// + /// The finder returned by this method is effective only while the updates + /// are performed, i.e., from the construction of the corresponding + /// updater until \c commit() is performed or the updater is destructed + /// without commit. The result of a subsequent call to this method (or + /// the use of the result) after that is undefined. + /// + /// \return A reference to a \c ZoneFinder for the updated zone + virtual ZoneFinder& getFinder() = 0; + + /// Add an RRset to a zone via the updater + /// + /// This may be revisited in a future version, but right now the intended + /// behavior of this method is simple: It "naively" adds the specified + /// RRset to the zone specified on creation of the updater. + /// It performs minimum level of validation on the specified RRset: + /// - Whether the RR class is identical to that for the zone to be updated + /// - Whether the RRset is not empty, i.e., it has at least one RDATA + /// - Whether the RRset is not associated with an RRSIG, i.e., + /// whether \c getRRsig() on the RRset returns a NULL pointer. + /// + /// and otherwise does not check any oddity. For example, it doesn't + /// check whether the owner name of the specified RRset is a subdomain + /// of the zone's origin; it doesn't care whether or not there is already + /// an RRset of the same name and RR type in the zone, and if there is, + /// whether any of the existing RRs have duplicate RDATA with the added + /// ones. If these conditions matter the calling application must examine + /// the existing data beforehand using the \c ZoneFinder returned by + /// \c getFinder(). + /// + /// The validation requirement on the associated RRSIG is temporary. + /// If we find it more reasonable and useful to allow adding a pair of + /// RRset and its RRSIG RRset as we gain experiences with the interface, + /// we may remove this restriction. Until then we explicitly check it + /// to prevent accidental misuse. + /// + /// Conceptually, on successful call to this method, the zone will have + /// the specified RRset, and if there is already an RRset of the same + /// name and RR type, these two sets will be "merged". "Merged" means + /// that a subsequent call to \c ZoneFinder::find() for the name and type + /// will result in success and the returned RRset will contain all + /// previously existing and newly added RDATAs with the TTL being the + /// minimum of the two RRsets. The underlying representation of the + /// "merged" RRsets may vary depending on the characteristic of the + /// underlying data source. 
For example, if it uses a general purpose + /// database that stores each RR of the same RRset separately, it may + /// simply be a larger sets of RRs based on both the existing and added + /// RRsets; the TTLs of the RRs may be different within the database, and + /// there may even be duplicate RRs in different database rows. As long + /// as the RRset returned via \c ZoneFinder::find() conforms to the + /// concept of "merge", the actual internal representation is up to the + /// implementation. + /// + /// This method must not be called once commit() is performed. If it + /// calls after \c commit() the implementation must throw a + /// \c DataSourceError exception. + /// + /// \todo As noted above we may have to revisit the design details as we + /// gain experiences: + /// + /// - we may want to check (and maybe reject) if there is already a + /// duplicate RR (that has the same RDATA). + /// - we may want to check (and maybe reject) if there is already an + /// RRset of the same name and RR type with different TTL + /// - we may even want to check if there is already any RRset of the + /// same name and RR type. + /// - we may want to add an "options" parameter that can control the + /// above points + /// - we may want to have this method return a value containing the + /// information on whether there's a duplicate, etc. + /// + /// \exception DataSourceError Called after \c commit(), RRset is invalid + /// (see above), internal data source error + /// \exception std::bad_alloc Resource allocation failure + /// + /// \param rrset The RRset to be added + virtual void addRRset(const isc::dns::RRset& rrset) = 0; + + /// Delete an RRset from a zone via the updater + /// + /// Like \c addRRset(), the detailed semantics and behavior of this method + /// may have to be revisited in a future version. The following are + /// based on the initial implementation decisions. + /// + /// On successful completion of this method, it will remove from the zone + /// the RRs of the specified owner name and RR type that match one of + /// the RDATAs of the specified RRset. There are several points to be + /// noted: + /// - Existing RRs that don't match any of the specified RDATAs will + /// remain in the zone. + /// - Any RRs of the specified RRset that doesn't exist in the zone will + /// simply be ignored; the implementation of this method is not supposed + /// to check that condition. + /// - The TTL of the RRset is ignored; matching is only performed by + /// the owner name, RR type and RDATA + /// + /// Ignoring the TTL may not look sensible, but it's based on the + /// observation that it will result in more intuitive result, especially + /// when the underlying data source is a general purpose database. + /// See also \c DatabaseAccessor::deleteRecordInZone() on this point. + /// It also matches the dynamic update protocol (RFC2136), where TTLs + /// are ignored when deleting RRs. + /// + /// \note Since the TTL is ignored, this method could take the RRset + /// to be deleted as a tuple of name, RR type, and a list of RDATAs. + /// But in practice, it's quite likely that the caller has the RRset + /// in the form of the \c RRset object (e.g., extracted from a dynamic + /// update request message), so this interface would rather be more + /// convenient. If it turns out not to be true we can change or extend + /// the method signature. 
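+    ///
+    /// A hypothetical sketch (how the updater is obtained is not shown and
+    /// the names used are assumptions; this is illustrative only): deleting
+    /// a single A record by building a one-RR RRset. The TTL given here is
+    /// ignored for matching.
+    /// \code
+    /// RRset rrset(Name("www.example.com"), RRClass::IN(), RRType::A(),
+    ///             RRTTL(0));
+    /// rrset.addRdata(rdata::in::A("192.0.2.1"));
+    /// updater->deleteRRset(rrset);
+    /// \endcode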
+ /// + /// This method performs minimum level of validation on the specified + /// RRset: + /// - Whether the RR class is identical to that for the zone to be updated + /// - Whether the RRset is not empty, i.e., it has at least one RDATA + /// - Whether the RRset is not associated with an RRSIG, i.e., + /// whether \c getRRsig() on the RRset returns a NULL pointer. + /// + /// This method must not be called once commit() is performed. If it + /// calls after \c commit() the implementation must throw a + /// \c DataSourceError exception. + /// + /// \todo As noted above we may have to revisit the design details as we + /// gain experiences: + /// + /// - we may want to check (and maybe reject) if some or all of the RRs + /// for the specified RRset don't exist in the zone + /// - we may want to allow an option to "delete everything" for specified + /// name and/or specified name + RR type. + /// - as mentioned above, we may want to include the TTL in matching the + /// deleted RRs + /// - we may want to add an "options" parameter that can control the + /// above points + /// - we may want to have this method return a value containing the + /// information on whether there's any RRs that are specified but don't + /// exit, the number of actually deleted RRs, etc. + /// + /// \exception DataSourceError Called after \c commit(), RRset is invalid + /// (see above), internal data source error + /// \exception std::bad_alloc Resource allocation failure + /// + /// \param rrset The RRset to be deleted + virtual void deleteRRset(const isc::dns::RRset& rrset) = 0; + + /// Commit the updates made in the updater to the zone + /// + /// This method completes the "transaction" started at the creation + /// of the updater. After successful completion of this method, the + /// updates will be visible outside the scope of the updater. + /// The actual internal behavior will defer for different derived classes. + /// For a derived class with a general purpose database as a backend, + /// for example, this method would perform a "commit" statement for the + /// database. + /// + /// This operation can only be performed at most once. A duplicate call + /// must result in a DatasourceError exception. + /// + /// \exception DataSourceError Duplicate call of the method, + /// internal data source error + virtual void commit() = 0; +}; + +/// \brief A pointer-like type pointing to a \c ZoneUpdater object. 
+typedef boost::shared_ptr ZoneUpdaterPtr; + +} // end of datasrc +} // end of isc + #endif // __ZONE_H // Local Variables: diff --git a/src/lib/datasrc/zonetable.cc b/src/lib/datasrc/zonetable.cc index bc09286563..644861cc2c 100644 --- a/src/lib/datasrc/zonetable.cc +++ b/src/lib/datasrc/zonetable.cc @@ -28,8 +28,8 @@ namespace datasrc { /// \short Private data and implementation of ZoneTable struct ZoneTable::ZoneTableImpl { // Type aliases to make it shorter - typedef RBTree ZoneTree; - typedef RBNode ZoneNode; + typedef RBTree ZoneTree; + typedef RBNode ZoneNode; // The actual storage ZoneTree zones_; @@ -40,7 +40,7 @@ struct ZoneTable::ZoneTableImpl { */ // Implementation of ZoneTable::addZone - result::Result addZone(ZonePtr zone) { + result::Result addZone(ZoneFinderPtr zone) { // Sanity check if (!zone) { isc_throw(InvalidParameter, @@ -85,12 +85,12 @@ struct ZoneTable::ZoneTableImpl { break; // We have no data there, so translate the pointer to NULL as well case ZoneTree::NOTFOUND: - return (FindResult(result::NOTFOUND, ZonePtr())); + return (FindResult(result::NOTFOUND, ZoneFinderPtr())); // Can Not Happen default: assert(0); // Because of warning - return (FindResult(result::NOTFOUND, ZonePtr())); + return (FindResult(result::NOTFOUND, ZoneFinderPtr())); } // Can Not Happen (remember, NOTFOUND is handled) @@ -108,7 +108,7 @@ ZoneTable::~ZoneTable() { } result::Result -ZoneTable::addZone(ZonePtr zone) { +ZoneTable::addZone(ZoneFinderPtr zone) { return (impl_->addZone(zone)); } diff --git a/src/lib/datasrc/zonetable.h b/src/lib/datasrc/zonetable.h index 5b873d1a07..5a3448045d 100644 --- a/src/lib/datasrc/zonetable.h +++ b/src/lib/datasrc/zonetable.h @@ -41,11 +41,11 @@ namespace datasrc { class ZoneTable { public: struct FindResult { - FindResult(result::Result param_code, const ZonePtr param_zone) : + FindResult(result::Result param_code, const ZoneFinderPtr param_zone) : code(param_code), zone(param_zone) {} const result::Result code; - const ZonePtr zone; + const ZoneFinderPtr zone; }; /// /// \name Constructors and Destructor. @@ -83,7 +83,7 @@ public: /// added to the zone table. /// \return \c result::EXIST The zone table already contains /// zone of the same origin. - result::Result addZone(ZonePtr zone); + result::Result addZone(ZoneFinderPtr zone); /// Remove a \c Zone of the given origin name from the \c ZoneTable. 
/// diff --git a/src/lib/dns/Makefile.am b/src/lib/dns/Makefile.am index 887ac09fee..0d2bffd59a 100644 --- a/src/lib/dns/Makefile.am +++ b/src/lib/dns/Makefile.am @@ -23,14 +23,22 @@ EXTRA_DIST += rdata/generic/cname_5.cc EXTRA_DIST += rdata/generic/cname_5.h EXTRA_DIST += rdata/generic/detail/nsec_bitmap.cc EXTRA_DIST += rdata/generic/detail/nsec_bitmap.h +EXTRA_DIST += rdata/generic/detail/txt_like.h +EXTRA_DIST += rdata/generic/detail/ds_like.h +EXTRA_DIST += rdata/generic/dlv_32769.cc +EXTRA_DIST += rdata/generic/dlv_32769.h EXTRA_DIST += rdata/generic/dname_39.cc EXTRA_DIST += rdata/generic/dname_39.h EXTRA_DIST += rdata/generic/dnskey_48.cc EXTRA_DIST += rdata/generic/dnskey_48.h EXTRA_DIST += rdata/generic/ds_43.cc EXTRA_DIST += rdata/generic/ds_43.h +EXTRA_DIST += rdata/generic/hinfo_13.cc +EXTRA_DIST += rdata/generic/hinfo_13.h EXTRA_DIST += rdata/generic/mx_15.cc EXTRA_DIST += rdata/generic/mx_15.h +EXTRA_DIST += rdata/generic/naptr_35.cc +EXTRA_DIST += rdata/generic/naptr_35.h EXTRA_DIST += rdata/generic/ns_2.cc EXTRA_DIST += rdata/generic/ns_2.h EXTRA_DIST += rdata/generic/nsec3_50.cc @@ -49,14 +57,24 @@ EXTRA_DIST += rdata/generic/rrsig_46.cc EXTRA_DIST += rdata/generic/rrsig_46.h EXTRA_DIST += rdata/generic/soa_6.cc EXTRA_DIST += rdata/generic/soa_6.h +EXTRA_DIST += rdata/generic/spf_99.cc +EXTRA_DIST += rdata/generic/spf_99.h EXTRA_DIST += rdata/generic/txt_16.cc EXTRA_DIST += rdata/generic/txt_16.h +EXTRA_DIST += rdata/generic/minfo_14.cc +EXTRA_DIST += rdata/generic/minfo_14.h +EXTRA_DIST += rdata/generic/afsdb_18.cc +EXTRA_DIST += rdata/generic/afsdb_18.h EXTRA_DIST += rdata/hs_4/a_1.cc EXTRA_DIST += rdata/hs_4/a_1.h EXTRA_DIST += rdata/in_1/a_1.cc EXTRA_DIST += rdata/in_1/a_1.h EXTRA_DIST += rdata/in_1/aaaa_28.cc EXTRA_DIST += rdata/in_1/aaaa_28.h +EXTRA_DIST += rdata/in_1/dhcid_49.cc +EXTRA_DIST += rdata/in_1/dhcid_49.h +EXTRA_DIST += rdata/in_1/srv_33.cc +EXTRA_DIST += rdata/in_1/srv_33.h #EXTRA_DIST += rdata/template.cc #EXTRA_DIST += rdata/template.h @@ -88,8 +106,11 @@ libdns___la_SOURCES += tsig.h tsig.cc libdns___la_SOURCES += tsigerror.h tsigerror.cc libdns___la_SOURCES += tsigkey.h tsigkey.cc libdns___la_SOURCES += tsigrecord.h tsigrecord.cc +libdns___la_SOURCES += character_string.h character_string.cc libdns___la_SOURCES += rdata/generic/detail/nsec_bitmap.h libdns___la_SOURCES += rdata/generic/detail/nsec_bitmap.cc +libdns___la_SOURCES += rdata/generic/detail/txt_like.h +libdns___la_SOURCES += rdata/generic/detail/ds_like.h libdns___la_CPPFLAGS = $(AM_CPPFLAGS) # Most applications of libdns++ will only implicitly rely on libcryptolink, diff --git a/src/lib/dns/benchmarks/Makefile.am b/src/lib/dns/benchmarks/Makefile.am index 864538518e..0d7856ff72 100644 --- a/src/lib/dns/benchmarks/Makefile.am +++ b/src/lib/dns/benchmarks/Makefile.am @@ -13,5 +13,6 @@ noinst_PROGRAMS = rdatarender_bench rdatarender_bench_SOURCES = rdatarender_bench.cc rdatarender_bench_LDADD = $(top_builddir)/src/lib/dns/libdns++.la +rdatarender_bench_LDADD += $(top_builddir)/src/lib/util/libutil.la rdatarender_bench_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la rdatarender_bench_LDADD += $(SQLITE_LIBS) diff --git a/src/lib/dns/character_string.cc b/src/lib/dns/character_string.cc new file mode 100644 index 0000000000..3a289acd49 --- /dev/null +++ b/src/lib/dns/character_string.cc @@ -0,0 +1,140 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. 
("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include "character_string.h" +#include "rdata.h" + +using namespace std; +using namespace isc::dns::rdata; + +namespace isc { +namespace dns { + +namespace { +bool isDigit(char c) { + return (('0' <= c) && (c <= '9')); +} +} + +std::string +characterstr::getNextCharacterString(const std::string& input_str, + std::string::const_iterator& input_iterator) +{ + string result; + + // If the input string only contains white-spaces, it is an invalid + // + if (input_iterator >= input_str.end()) { + isc_throw(InvalidRdataText, "Invalid text format, \ + field is missing."); + } + + // Whether the is separated with double quotes (") + bool quotes_separated = (*input_iterator == '"'); + // Whether the quotes are pared if the string is quotes separated + bool quotes_paired = false; + + if (quotes_separated) { + ++input_iterator; + } + + while(input_iterator < input_str.end()){ + // Escaped characters processing + if (*input_iterator == '\\') { + if (input_iterator + 1 == input_str.end()) { + isc_throw(InvalidRdataText, " ended \ + prematurely."); + } else { + if (isDigit(*(input_iterator + 1))) { + // \DDD where each D is a digit. 
It its the octet + // corresponding to the decimal number described by DDD + if (input_iterator + 3 >= input_str.end()) { + isc_throw(InvalidRdataText, " ended \ + prematurely."); + } else { + int n = 0; + ++input_iterator; + for (int i = 0; i < 3; ++i) { + if (isDigit(*input_iterator)) { + n = n*10 + (*input_iterator - '0'); + ++input_iterator; + } else { + isc_throw(InvalidRdataText, "Illegal decimal \ + escaping series"); + } + } + if (n > 255) { + isc_throw(InvalidRdataText, "Illegal octet \ + number"); + } + result.push_back(n); + continue; + } + } else { + ++input_iterator; + result.push_back(*input_iterator); + ++input_iterator; + continue; + } + } + } + + if (quotes_separated) { + // If the is seperated with quotes symbol and + // another quotes symbol is encountered, it is the end of the + // + if (*input_iterator == '"') { + quotes_paired = true; + ++input_iterator; + // Reach the end of character string + break; + } + } else if (*input_iterator == ' ') { + // If the is not seperated with quotes symbol, + // it is seperated with char + break; + } + + result.push_back(*input_iterator); + + ++input_iterator; + } + + if (result.size() > MAX_CHARSTRING_LEN) { + isc_throw(CharStringTooLong, " is too long"); + } + + if (quotes_separated && !quotes_paired) { + isc_throw(InvalidRdataText, "The quotes are not paired"); + } + + return (result); +} + +std::string +characterstr::getNextCharacterString(util::InputBuffer& buffer, size_t len) { + uint8_t str_len = buffer.readUint8(); + + size_t pos = buffer.getPosition(); + if (len - pos < str_len) { + isc_throw(InvalidRdataLength, "Invalid string length"); + } + + uint8_t buf[MAX_CHARSTRING_LEN]; + buffer.readData(buf, str_len); + return (string(buf, buf + str_len)); +} + +} // end of namespace dns +} // end of namespace isc diff --git a/src/lib/dns/character_string.h b/src/lib/dns/character_string.h new file mode 100644 index 0000000000..7961274826 --- /dev/null +++ b/src/lib/dns/character_string.h @@ -0,0 +1,57 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __CHARACTER_STRING_H +#define __CHARACTER_STRING_H + +#include +#include +#include + +namespace isc { +namespace dns { + +// \brief Some utility functions to extract from string +// or InputBuffer +// +// is expressed in one or two ways: as a contiguous set +// of characters without interior spaces, or as a string beginning with a " +// and ending with a ". Inside a " delimited string any character can +// occur, except for a " itself, which must be quoted using \ (back slash). +// Ref. 
RFC1035 + + +namespace characterstr { + /// Get a from a string + /// + /// \param input_str The input string + /// \param input_iterator The iterator from which to start extracting, + /// the iterator will be updated to new position after the function + /// is returned + /// \return A std::string that contains the extracted + std::string getNextCharacterString(const std::string& input_str, + std::string::const_iterator& input_iterator); + + /// Get a from a input buffer + /// + /// \param buffer The input buffer + /// \param len The input buffer total length + /// \return A std::string that contains the extracted + std::string getNextCharacterString(util::InputBuffer& buffer, size_t len); + +} // namespace characterstr +} // namespace dns +} // namespace isc + +#endif // __CHARACTER_STRING_H diff --git a/src/lib/dns/gen-rdatacode.py.in b/src/lib/dns/gen-rdatacode.py.in index b3c8da23ab..f3cd5df81a 100755 --- a/src/lib/dns/gen-rdatacode.py.in +++ b/src/lib/dns/gen-rdatacode.py.in @@ -133,7 +133,15 @@ def import_definitions(classcode2txt, typecode2txt, typeandclass): if classdir_mtime < getmtime('@srcdir@/rdata'): classdir_mtime = getmtime('@srcdir@/rdata') - for dir in list(os.listdir('@srcdir@/rdata')): + # Sort directories before iterating through them so that the directory + # list is processed in the same order on all systems. The resulting + # files should compile regardless of the order in which the components + # are included but... Having a fixed order for the directories should + # eliminate system-dependent problems. (Note that the drectory names + # in BIND 10 are ASCII, so the order should be locale-independent.) + dirlist = os.listdir('@srcdir@/rdata') + dirlist.sort() + for dir in dirlist: classdir = '@srcdir@/rdata' + os.sep + dir m = re_typecode.match(dir) if os.path.isdir(classdir) and (m != None or dir == 'generic'): @@ -145,7 +153,12 @@ def import_definitions(classcode2txt, typecode2txt, typeandclass): class_code = m.group(2) if not class_code in classcode2txt: classcode2txt[class_code] = class_txt - for file in list(os.listdir(classdir)): + + # Same considerations as directories regarding sorted order + # also apply to files. + filelist = os.listdir(classdir) + filelist.sort() + for file in filelist: file = classdir + os.sep + file m = re_typecode.match(os.path.split(file)[1]) if m != None: diff --git a/src/lib/dns/message.cc b/src/lib/dns/message.cc index bf7ccd52be..b3e9229ae8 100644 --- a/src/lib/dns/message.cc +++ b/src/lib/dns/message.cc @@ -124,10 +124,12 @@ public: void setOpcode(const Opcode& opcode); void setRcode(const Rcode& rcode); int parseQuestion(InputBuffer& buffer); - int parseSection(const Message::Section section, InputBuffer& buffer); + int parseSection(const Message::Section section, InputBuffer& buffer, + Message::ParseOptions options); void addRR(Message::Section section, const Name& name, const RRClass& rrclass, const RRType& rrtype, - const RRTTL& ttl, ConstRdataPtr rdata); + const RRTTL& ttl, ConstRdataPtr rdata, + Message::ParseOptions options); void addEDNS(Message::Section section, const Name& name, const RRClass& rrclass, const RRType& rrtype, const RRTTL& ttl, const Rdata& rdata); @@ -239,7 +241,28 @@ MessageImpl::toWire(AbstractMessageRenderer& renderer, TSIGContext* tsig_ctx) { "Message rendering attempted without Opcode set"); } + // Reserve the space for TSIG (if needed) so that we can handle truncation + // case correctly later when that happens. 
orig_xxx variables remember + // some configured parameters of renderer in case they are needed in + // truncation processing below. + const size_t tsig_len = (tsig_ctx != NULL) ? tsig_ctx->getTSIGLength() : 0; + const size_t orig_msg_len_limit = renderer.getLengthLimit(); + const AbstractMessageRenderer::CompressMode orig_compress_mode = + renderer.getCompressMode(); + if (tsig_len > 0) { + if (tsig_len > orig_msg_len_limit) { + isc_throw(InvalidParameter, "Failed to render DNS message: " + "too small limit for a TSIG (" << + orig_msg_len_limit << ")"); + } + renderer.setLengthLimit(orig_msg_len_limit - tsig_len); + } + // reserve room for the header + if (renderer.getLengthLimit() < HEADERLEN) { + isc_throw(InvalidParameter, "Failed to render DNS message: " + "too small limit for a Header"); + } renderer.skip(HEADERLEN); uint16_t qdcount = @@ -284,6 +307,22 @@ MessageImpl::toWire(AbstractMessageRenderer& renderer, TSIGContext* tsig_ctx) { } } + // If we're adding a TSIG to a truncated message, clear all RRsets + // from the message except for the question before adding the TSIG. + // If even (some of) the question doesn't fit, don't include it. + if (tsig_ctx != NULL && renderer.isTruncated()) { + renderer.clear(); + renderer.setLengthLimit(orig_msg_len_limit - tsig_len); + renderer.setCompressMode(orig_compress_mode); + renderer.skip(HEADERLEN); + qdcount = for_each(questions_.begin(), questions_.end(), + RenderSection(renderer, + false)).getTotalCount(); + ancount = 0; + nscount = 0; + arcount = 0; + } + // Adjust the counter buffer. // XXX: these may not be equal to the number of corresponding entries // in rrsets_[] or questions_ if truncation occurred or an EDNS OPT RR @@ -315,10 +354,16 @@ MessageImpl::toWire(AbstractMessageRenderer& renderer, TSIGContext* tsig_ctx) { renderer.writeUint16At(arcount, header_pos); // Add TSIG, if necessary, at the end of the message. - // TODO: truncate case consideration if (tsig_ctx != NULL) { - tsig_ctx->sign(qid_, renderer.getData(), - renderer.getLength())->toWire(renderer); + // Release the reserved space in the renderer. + renderer.setLengthLimit(orig_msg_len_limit); + + const int tsig_count = + tsig_ctx->sign(qid_, renderer.getData(), + renderer.getLength())->toWire(renderer); + if (tsig_count != 1) { + isc_throw(Unexpected, "Failed to render a TSIG RR"); + } // update the ARCOUNT for the TSIG RR. Note that for a sane DNS // message arcount should never overflow to 0. @@ -571,7 +616,7 @@ Message::parseHeader(InputBuffer& buffer) { } void -Message::fromWire(InputBuffer& buffer) { +Message::fromWire(InputBuffer& buffer, ParseOptions options) { if (impl_->mode_ != Message::PARSE) { isc_throw(InvalidMessageOperation, "Message parse attempted in non parse mode"); @@ -583,11 +628,11 @@ Message::fromWire(InputBuffer& buffer) { impl_->counts_[SECTION_QUESTION] = impl_->parseQuestion(buffer); impl_->counts_[SECTION_ANSWER] = - impl_->parseSection(SECTION_ANSWER, buffer); + impl_->parseSection(SECTION_ANSWER, buffer, options); impl_->counts_[SECTION_AUTHORITY] = - impl_->parseSection(SECTION_AUTHORITY, buffer); + impl_->parseSection(SECTION_AUTHORITY, buffer, options); impl_->counts_[SECTION_ADDITIONAL] = - impl_->parseSection(SECTION_ADDITIONAL, buffer); + impl_->parseSection(SECTION_ADDITIONAL, buffer, options); } int @@ -663,7 +708,7 @@ struct MatchRR : public unary_function { // is hardcoded here. 
int MessageImpl::parseSection(const Message::Section section, - InputBuffer& buffer) + InputBuffer& buffer, Message::ParseOptions options) { assert(section < MessageImpl::NUM_SECTIONS); @@ -695,7 +740,7 @@ MessageImpl::parseSection(const Message::Section section, addTSIG(section, count, buffer, start_position, name, rrclass, ttl, *rdata); } else { - addRR(section, name, rrclass, rrtype, ttl, rdata); + addRR(section, name, rrclass, rrtype, ttl, rdata, options); ++added; } } @@ -706,19 +751,22 @@ MessageImpl::parseSection(const Message::Section section, void MessageImpl::addRR(Message::Section section, const Name& name, const RRClass& rrclass, const RRType& rrtype, - const RRTTL& ttl, ConstRdataPtr rdata) + const RRTTL& ttl, ConstRdataPtr rdata, + Message::ParseOptions options) { - vector::iterator it = - find_if(rrsets_[section].begin(), rrsets_[section].end(), - MatchRR(name, rrtype, rrclass)); - if (it != rrsets_[section].end()) { - (*it)->setTTL(min((*it)->getTTL(), ttl)); - (*it)->addRdata(rdata); - } else { - RRsetPtr rrset(new RRset(name, rrclass, rrtype, ttl)); - rrset->addRdata(rdata); - rrsets_[section].push_back(rrset); + if ((options & Message::PRESERVE_ORDER) == 0) { + vector::iterator it = + find_if(rrsets_[section].begin(), rrsets_[section].end(), + MatchRR(name, rrtype, rrclass)); + if (it != rrsets_[section].end()) { + (*it)->setTTL(min((*it)->getTTL(), ttl)); + (*it)->addRdata(rdata); + return; + } } + RRsetPtr rrset(new RRset(name, rrclass, rrtype, ttl)); + rrset->addRdata(rdata); + rrsets_[section].push_back(rrset); } void diff --git a/src/lib/dns/message.h b/src/lib/dns/message.h index fcc53e92a0..f286c6791f 100644 --- a/src/lib/dns/message.h +++ b/src/lib/dns/message.h @@ -565,16 +565,74 @@ public: /// \c tsig_ctx will be updated based on the fact it was used for signing /// and with the latest MAC. /// + /// \exception InvalidMessageOperation The message is not in the Render + /// mode, or either Rcode or Opcode is not set. + /// \exception InvalidParameter The allowable limit of \c renderer is too + /// small for a TSIG or the Header section. Note that this shouldn't + /// happen with parameters as defined in the standard protocols, + /// so it's more likely a program bug. + /// \exception Unexpected Rendering the TSIG RR fails. The implementation + /// internally makes sure this doesn't happen, so if that ever occurs + /// it should mean a bug either in the TSIG context or in the renderer + /// implementation. + /// /// \param renderer See the other version /// \param tsig_ctx A TSIG context that is to be used for signing the /// message void toWire(AbstractMessageRenderer& renderer, TSIGContext& tsig_ctx); + /// Parse options. + /// + /// describe PRESERVE_ORDER: note doesn't affect EDNS or TSIG. + /// + /// The option values are used as a parameter for \c fromWire(). + /// These are values of a bitmask type. Bitwise operations can be + /// performed on these values to express compound options. + enum ParseOptions { + PARSE_DEFAULT = 0, ///< The default options + PRESERVE_ORDER = 1 ///< Preserve RR order and don't combine them + }; + /// \brief Parse the header section of the \c Message. void parseHeader(isc::util::InputBuffer& buffer); - /// \brief Parse the \c Message. - void fromWire(isc::util::InputBuffer& buffer); + /// \brief (Re)build a \c Message object from wire-format data. + /// + /// This method parses the given wire format data to build a + /// complete Message object. 
On success, the values of the header section + /// fields can be accessible via corresponding get methods, and the + /// question and following sections can be accessible via the + /// corresponding iterators. If the message contains an EDNS or TSIG, + /// they can be accessible via \c getEDNS() and \c getTSIGRecord(), + /// respectively. + /// + /// This \c Message must be in the \c PARSE mode. + /// + /// This method performs strict validation on the given message based + /// on the DNS protocol specifications. If the given message data is + /// invalid, this method throws an exception (see the exception list). + /// + /// By default, this method combines RRs of the same name, RR type and + /// RR class in a section into a single RRset, even if they are interleaved + /// with a different type of RR (though it would be a rare case in + /// practice). If the \c PRESERVE_ORDER option is specified, it handles + /// each RR separately, in the appearing order, and converts it to a + /// separate RRset (so this RRset should contain exactly one Rdata). + /// This mode will be necessary when the higher level protocol is + /// ordering conscious. For example, in AXFR and IXFR, the position of + /// the SOA RRs are crucial. + /// + /// \exception InvalidMessageOperation \c Message is in the RENDER mode + /// \exception DNSMessageFORMERR The given message data is syntactically + /// \exception MessageTooShort The given data is shorter than a valid + /// header section + /// \exception std::bad_alloc Memory allocation failure + /// \exception Others \c Name, \c Rdata, and \c EDNS classes can also throw + /// + /// \param buffer A input buffer object that stores the wire data + /// \param options Parse options + void fromWire(isc::util::InputBuffer& buffer, ParseOptions options + = PARSE_DEFAULT); /// /// \name Protocol constants @@ -618,6 +676,6 @@ std::ostream& operator<<(std::ostream& os, const Message& message); } #endif // __MESSAGE_H -// Local Variables: +// Local Variables: // mode: c++ -// End: +// End: diff --git a/src/lib/dns/python/Makefile.am b/src/lib/dns/python/Makefile.am index 6c4ef54782..3b89358ad0 100644 --- a/src/lib/dns/python/Makefile.am +++ b/src/lib/dns/python/Makefile.am @@ -4,40 +4,47 @@ AM_CPPFLAGS = -I$(top_srcdir)/src/lib -I$(top_builddir)/src/lib AM_CPPFLAGS += $(BOOST_INCLUDES) AM_CXXFLAGS = $(B10_CXXFLAGS) -pyexec_LTLIBRARIES = pydnspp.la -pydnspp_la_SOURCES = pydnspp.cc pydnspp_common.cc pydnspp_towire.h -pydnspp_la_SOURCES += name_python.cc name_python.h -pydnspp_la_SOURCES += messagerenderer_python.cc messagerenderer_python.h -pydnspp_la_SOURCES += rcode_python.cc rcode_python.h -pydnspp_la_SOURCES += tsigkey_python.cc tsigkey_python.h -pydnspp_la_SOURCES += tsigerror_python.cc tsigerror_python.h -pydnspp_la_SOURCES += tsig_rdata_python.cc tsig_rdata_python.h -pydnspp_la_SOURCES += tsigrecord_python.cc tsigrecord_python.h -pydnspp_la_SOURCES += tsig_python.cc tsig_python.h +lib_LTLIBRARIES = libpydnspp.la +libpydnspp_la_SOURCES = pydnspp_common.cc pydnspp_common.h pydnspp_towire.h +libpydnspp_la_SOURCES += name_python.cc name_python.h +libpydnspp_la_SOURCES += rrset_python.cc rrset_python.h +libpydnspp_la_SOURCES += rrclass_python.cc rrclass_python.h +libpydnspp_la_SOURCES += rrtype_python.cc rrtype_python.h +libpydnspp_la_SOURCES += rrttl_python.cc rrttl_python.h +libpydnspp_la_SOURCES += rdata_python.cc rdata_python.h +libpydnspp_la_SOURCES += messagerenderer_python.cc messagerenderer_python.h +libpydnspp_la_SOURCES += rcode_python.cc rcode_python.h 
+libpydnspp_la_SOURCES += opcode_python.cc opcode_python.h +libpydnspp_la_SOURCES += question_python.cc question_python.h +libpydnspp_la_SOURCES += tsigkey_python.cc tsigkey_python.h +libpydnspp_la_SOURCES += tsigerror_python.cc tsigerror_python.h +libpydnspp_la_SOURCES += tsig_rdata_python.cc tsig_rdata_python.h +libpydnspp_la_SOURCES += tsigrecord_python.cc tsigrecord_python.h +libpydnspp_la_SOURCES += tsig_python.cc tsig_python.h +libpydnspp_la_SOURCES += edns_python.cc edns_python.h +libpydnspp_la_SOURCES += message_python.cc message_python.h +libpydnspp_la_CPPFLAGS = $(AM_CPPFLAGS) $(PYTHON_INCLUDES) +libpydnspp_la_CXXFLAGS = $(AM_CXXFLAGS) $(PYTHON_CXXFLAGS) +libpydnspp_la_LDFLAGS = $(PYTHON_LDFLAGS) + + + +pyexec_LTLIBRARIES = pydnspp.la +pydnspp_la_SOURCES = pydnspp.cc pydnspp_la_CPPFLAGS = $(AM_CPPFLAGS) $(PYTHON_INCLUDES) # Note: PYTHON_CXXFLAGS may have some -Wno... workaround, which must be # placed after -Wextra defined in AM_CXXFLAGS pydnspp_la_CXXFLAGS = $(AM_CXXFLAGS) $(PYTHON_CXXFLAGS) pydnspp_la_LDFLAGS = $(PYTHON_LDFLAGS) -# directly included from source files, so these don't have their own -# rules -EXTRA_DIST = pydnspp_common.h -EXTRA_DIST += edns_python.cc -EXTRA_DIST += message_python.cc -EXTRA_DIST += rrclass_python.cc -EXTRA_DIST += opcode_python.cc -EXTRA_DIST += rrset_python.cc -EXTRA_DIST += question_python.cc -EXTRA_DIST += rrttl_python.cc -EXTRA_DIST += rdata_python.cc -EXTRA_DIST += rrtype_python.cc -EXTRA_DIST += tsigerror_python_inc.cc +EXTRA_DIST = tsigerror_python_inc.cc +EXTRA_DIST += message_python_inc.cc # Python prefers .so, while some OSes (specifically MacOS) use a different # suffix for dynamic objects. -module is necessary to work this around. pydnspp_la_LDFLAGS += -module pydnspp_la_LIBADD = $(top_builddir)/src/lib/dns/libdns++.la pydnspp_la_LIBADD += $(top_builddir)/src/lib/exceptions/libexceptions.la +pydnspp_la_LIBADD += libpydnspp.la pydnspp_la_LIBADD += $(PYTHON_LIB) diff --git a/src/lib/dns/python/edns_python.cc b/src/lib/dns/python/edns_python.cc index 83c3bfa3b6..8f0f1a4213 100644 --- a/src/lib/dns/python/edns_python.cc +++ b/src/lib/dns/python/edns_python.cc @@ -12,38 +12,38 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. 
+#include + #include #include +#include +#include +#include + +#include "edns_python.h" +#include "name_python.h" +#include "rrclass_python.h" +#include "rrtype_python.h" +#include "rrttl_python.h" +#include "rdata_python.h" +#include "messagerenderer_python.h" +#include "pydnspp_common.h" using namespace isc::dns; -using namespace isc::util; using namespace isc::dns::rdata; - -// -// Definition of the classes -// - -// For each class, we need a struct, a helper functions (init, destroy, -// and static wrappers around the methods we export), a list of methods, -// and a type description +using namespace isc::dns::python; +using namespace isc::util; +using namespace isc::util::python; namespace { -// -// EDNS -// - -// The s_* Class simply covers one instantiation of the object class s_EDNS : public PyObject { public: - EDNS* edns; + EDNS* cppobj; }; -// -// We declare the functions here, the definitions are below -// the type definition of the object, since both can use the other -// +typedef CPPPyObjectContainer EDNSContainer; // General creation and destruction int EDNS_init(s_EDNS* self, PyObject* args); @@ -103,6 +103,212 @@ PyMethodDef EDNS_methods[] = { { NULL, NULL, 0, NULL } }; +EDNS* +createFromRR(const Name& name, const RRClass& rrclass, const RRType& rrtype, + const RRTTL& rrttl, const Rdata& rdata, uint8_t& extended_rcode) +{ + try { + return (createEDNSFromRR(name, rrclass, rrtype, rrttl, rdata, + extended_rcode)); + } catch (const isc::InvalidParameter& ex) { + PyErr_SetString(po_InvalidParameter, ex.what()); + } catch (const DNSMessageFORMERR& ex) { + PyErr_SetString(po_DNSMessageFORMERR, ex.what()); + } catch (const DNSMessageBADVERS& ex) { + PyErr_SetString(po_DNSMessageBADVERS, ex.what()); + } catch (...) { + PyErr_SetString(po_IscException, "Unexpected exception"); + } + + return (NULL); +} +int +EDNS_init(s_EDNS* self, PyObject* args) { + uint8_t version = EDNS::SUPPORTED_VERSION; + const PyObject* name; + const PyObject* rrclass; + const PyObject* rrtype; + const PyObject* rrttl; + const PyObject* rdata; + + if (PyArg_ParseTuple(args, "|b", &version)) { + try { + self->cppobj = new EDNS(version); + } catch (const isc::InvalidParameter& ex) { + PyErr_SetString(po_InvalidParameter, ex.what()); + return (-1); + } catch (...) { + PyErr_SetString(po_IscException, "Unexpected exception"); + return (-1); + } + return (0); + } else if (PyArg_ParseTuple(args, "O!O!O!O!O!", &name_type, &name, + &rrclass_type, &rrclass, &rrtype_type, &rrtype, + &rrttl_type, &rrttl, &rdata_type, &rdata)) { + // We use createFromRR() even if we don't need to know extended_rcode + // in this context so that we can share the try-catch logic with + // EDNS_createFromRR() (see below). + uint8_t extended_rcode; + self->cppobj = createFromRR(PyName_ToName(name), + PyRRClass_ToRRClass(rrclass), + PyRRType_ToRRType(rrtype), + PyRRTTL_ToRRTTL(rrttl), + PyRdata_ToRdata(rdata), extended_rcode); + return (self->cppobj != NULL ? 
0 : -1); + } + + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, "Invalid arguments to EDNS constructor"); + + return (-1); +} + +void +EDNS_destroy(s_EDNS* const self) { + delete self->cppobj; + self->cppobj = NULL; + Py_TYPE(self)->tp_free(self); +} + +PyObject* +EDNS_toText(const s_EDNS* const self) { + // Py_BuildValue makes python objects from native data + return (Py_BuildValue("s", self->cppobj->toText().c_str())); +} + +PyObject* +EDNS_str(PyObject* self) { + // Simply call the to_text method we already defined + return (PyObject_CallMethod(self, + const_cast("to_text"), + const_cast(""))); +} + +PyObject* +EDNS_toWire(const s_EDNS* const self, PyObject* args) { + PyObject* bytes; + uint8_t extended_rcode; + PyObject* renderer; + + if (PyArg_ParseTuple(args, "Ob", &bytes, &extended_rcode) && + PySequence_Check(bytes)) { + PyObject* bytes_o = bytes; + + OutputBuffer buffer(0); + self->cppobj->toWire(buffer, extended_rcode); + PyObject* rd_bytes = PyBytes_FromStringAndSize( + static_cast(buffer.getData()), buffer.getLength()); + PyObject* result = PySequence_InPlaceConcat(bytes_o, rd_bytes); + // We need to release the object we temporarily created here + // to prevent memory leak + Py_DECREF(rd_bytes); + return (result); + } else if (PyArg_ParseTuple(args, "O!b", &messagerenderer_type, + &renderer, &extended_rcode)) { + const unsigned int n = self->cppobj->toWire( + PyMessageRenderer_ToMessageRenderer(renderer), extended_rcode); + + return (Py_BuildValue("I", n)); + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, "Incorrect arguments for EDNS.to_wire()"); + return (NULL); +} + +PyObject* +EDNS_getVersion(const s_EDNS* const self) { + return (Py_BuildValue("B", self->cppobj->getVersion())); +} + +PyObject* +EDNS_getDNSSECAwareness(const s_EDNS* const self) { + if (self->cppobj->getDNSSECAwareness()) { + Py_RETURN_TRUE; + } else { + Py_RETURN_FALSE; + } +} + +PyObject* +EDNS_setDNSSECAwareness(s_EDNS* self, PyObject* args) { + const PyObject *b; + if (!PyArg_ParseTuple(args, "O!", &PyBool_Type, &b)) { + return (NULL); + } + self->cppobj->setDNSSECAwareness(b == Py_True); + Py_RETURN_NONE; +} + +PyObject* +EDNS_getUDPSize(const s_EDNS* const self) { + return (Py_BuildValue("I", self->cppobj->getUDPSize())); +} + +PyObject* +EDNS_setUDPSize(s_EDNS* self, PyObject* args) { + long size; + if (!PyArg_ParseTuple(args, "l", &size)) { + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "No valid type in set_udp_size argument"); + return (NULL); + } + if (size < 0 || size > 0xffff) { + PyErr_SetString(PyExc_ValueError, + "UDP size is not an unsigned 16-bit integer"); + return (NULL); + } + self->cppobj->setUDPSize(size); + Py_RETURN_NONE; +} + +PyObject* +EDNS_createFromRR(const s_EDNS* null_self, PyObject* args) { + const PyObject* name; + const PyObject* rrclass; + const PyObject* rrtype; + const PyObject* rrttl; + const PyObject* rdata; + s_EDNS* edns_obj = NULL; + + assert(null_self == NULL); + + if (PyArg_ParseTuple(args, "O!O!O!O!O!", &name_type, &name, + &rrclass_type, &rrclass, &rrtype_type, &rrtype, + &rrttl_type, &rrttl, &rdata_type, &rdata)) { + uint8_t extended_rcode; + edns_obj = PyObject_New(s_EDNS, &edns_type); + if (edns_obj == NULL) { + return (NULL); + } + + edns_obj->cppobj = createFromRR(PyName_ToName(name), + PyRRClass_ToRRClass(rrclass), + PyRRType_ToRRType(rrtype), + PyRRTTL_ToRRTTL(rrttl), + PyRdata_ToRdata(rdata), + extended_rcode); + if (edns_obj->cppobj != NULL) { + PyObject* extrcode_obj = Py_BuildValue("B", extended_rcode); + return (Py_BuildValue("OO", 
edns_obj, extrcode_obj)); + } + + Py_DECREF(edns_obj); + return (NULL); + } + + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "Incorrect arguments for EDNS.create_from_rr()"); + return (NULL); +} + +} // end of anonymous namespace + +namespace isc { +namespace dns { +namespace python { + // This defines the complete type for reflection in python and // parsing of PyObject* to s_EDNS // Most of the functions are not actually implemented and NULL here. @@ -120,7 +326,7 @@ PyTypeObject edns_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call EDNS_str, // tp_str NULL, // tp_getattro @@ -157,219 +363,31 @@ PyTypeObject edns_type = { 0 // tp_version_tag }; -EDNS* -createFromRR(const Name& name, const RRClass& rrclass, const RRType& rrtype, - const RRTTL& rrttl, const Rdata& rdata, uint8_t& extended_rcode) -{ - try { - return (createEDNSFromRR(name, rrclass, rrtype, rrttl, rdata, - extended_rcode)); - } catch (const isc::InvalidParameter& ex) { - PyErr_SetString(po_InvalidParameter, ex.what()); - } catch (const DNSMessageFORMERR& ex) { - PyErr_SetString(po_DNSMessageFORMERR, ex.what()); - } catch (const DNSMessageBADVERS& ex) { - PyErr_SetString(po_DNSMessageBADVERS, ex.what()); - } catch (...) { - PyErr_SetString(po_IscException, "Unexpected exception"); - } - - return (NULL); -} -int -EDNS_init(s_EDNS* self, PyObject* args) { - uint8_t version = EDNS::SUPPORTED_VERSION; - const s_Name* name; - const s_RRClass* rrclass; - const s_RRType* rrtype; - const s_RRTTL* rrttl; - const s_Rdata* rdata; - - if (PyArg_ParseTuple(args, "|b", &version)) { - try { - self->edns = new EDNS(version); - } catch (const isc::InvalidParameter& ex) { - PyErr_SetString(po_InvalidParameter, ex.what()); - return (-1); - } catch (...) { - PyErr_SetString(po_IscException, "Unexpected exception"); - return (-1); - } - return (0); - } else if (PyArg_ParseTuple(args, "O!O!O!O!O!", &name_type, &name, - &rrclass_type, &rrclass, &rrtype_type, &rrtype, - &rrttl_type, &rrttl, &rdata_type, &rdata)) { - // We use createFromRR() even if we don't need to know extended_rcode - // in this context so that we can share the try-catch logic with - // EDNS_createFromRR() (see below). - uint8_t extended_rcode; - self->edns = createFromRR(*name->cppobj, *rrclass->rrclass, - *rrtype->rrtype, *rrttl->rrttl, - *rdata->rdata, extended_rcode); - return (self->edns != NULL ? 
0 : -1); - } - - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, "Invalid arguments to EDNS constructor"); - - return (-1); -} - -void -EDNS_destroy(s_EDNS* const self) { - delete self->edns; - self->edns = NULL; - Py_TYPE(self)->tp_free(self); -} - PyObject* -EDNS_toText(const s_EDNS* const self) { - // Py_BuildValue makes python objects from native data - return (Py_BuildValue("s", self->edns->toText().c_str())); +createEDNSObject(const EDNS& source) { + EDNSContainer container(PyObject_New(s_EDNS, &edns_type)); + container.set(new EDNS(source)); + return (container.release()); } -PyObject* -EDNS_str(PyObject* const self) { - // Simply call the to_text method we already defined - return (PyObject_CallMethod(self, - const_cast("to_text"), - const_cast(""))); -} - -PyObject* -EDNS_toWire(const s_EDNS* const self, PyObject* args) { - PyObject* bytes; - uint8_t extended_rcode; - s_MessageRenderer* renderer; - - if (PyArg_ParseTuple(args, "Ob", &bytes, &extended_rcode) && - PySequence_Check(bytes)) { - PyObject* bytes_o = bytes; - - OutputBuffer buffer(0); - self->edns->toWire(buffer, extended_rcode); - PyObject* rd_bytes = PyBytes_FromStringAndSize( - static_cast(buffer.getData()), buffer.getLength()); - PyObject* result = PySequence_InPlaceConcat(bytes_o, rd_bytes); - // We need to release the object we temporarily created here - // to prevent memory leak - Py_DECREF(rd_bytes); - return (result); - } else if (PyArg_ParseTuple(args, "O!b", &messagerenderer_type, - &renderer, &extended_rcode)) { - const unsigned int n = self->edns->toWire(*renderer->messagerenderer, - extended_rcode); - - return (Py_BuildValue("I", n)); - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, "Incorrect arguments for EDNS.to_wire()"); - return (NULL); -} - -PyObject* -EDNS_getVersion(const s_EDNS* const self) { - return (Py_BuildValue("B", self->edns->getVersion())); -} - -PyObject* -EDNS_getDNSSECAwareness(const s_EDNS* const self) { - if (self->edns->getDNSSECAwareness()) { - Py_RETURN_TRUE; - } else { - Py_RETURN_FALSE; - } -} - -PyObject* -EDNS_setDNSSECAwareness(s_EDNS* self, PyObject* args) { - const PyObject *b; - if (!PyArg_ParseTuple(args, "O!", &PyBool_Type, &b)) { - return (NULL); - } - self->edns->setDNSSECAwareness(b == Py_True); - Py_RETURN_NONE; -} - -PyObject* -EDNS_getUDPSize(const s_EDNS* const self) { - return (Py_BuildValue("I", self->edns->getUDPSize())); -} - -PyObject* -EDNS_setUDPSize(s_EDNS* self, PyObject* args) { - long size; - if (!PyArg_ParseTuple(args, "l", &size)) { - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "No valid type in set_udp_size argument"); - return (NULL); - } - if (size < 0 || size > 0xffff) { - PyErr_SetString(PyExc_ValueError, - "UDP size is not an unsigned 16-bit integer"); - return (NULL); - } - self->edns->setUDPSize(size); - Py_RETURN_NONE; -} - -PyObject* -EDNS_createFromRR(const s_EDNS* null_self, PyObject* args) { - const s_Name* name; - const s_RRClass* rrclass; - const s_RRType* rrtype; - const s_RRTTL* rrttl; - const s_Rdata* rdata; - s_EDNS* edns_obj = NULL; - - assert(null_self == NULL); - - if (PyArg_ParseTuple(args, "O!O!O!O!O!", &name_type, &name, - &rrclass_type, &rrclass, &rrtype_type, &rrtype, - &rrttl_type, &rrttl, &rdata_type, &rdata)) { - uint8_t extended_rcode; - edns_obj = PyObject_New(s_EDNS, &edns_type); - if (edns_obj == NULL) { - return (NULL); - } - - edns_obj->edns = createFromRR(*name->cppobj, *rrclass->rrclass, - *rrtype->rrtype, *rrttl->rrttl, - *rdata->rdata, extended_rcode); - if (edns_obj->edns != NULL) { - PyObject* 
extrcode_obj = Py_BuildValue("B", extended_rcode); - return (Py_BuildValue("OO", edns_obj, extrcode_obj)); - } - - Py_DECREF(edns_obj); - return (NULL); - } - - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "Incorrect arguments for EDNS.create_from_rr()"); - return (NULL); -} - -} // end of anonymous namespace -// end of EDNS - -// Module Initialization, all statics are initialized here bool -initModulePart_EDNS(PyObject* mod) { - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! (leaving - // this out results in segmentation faults) - if (PyType_Ready(&edns_type) < 0) { - return (false); +PyEDNS_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - Py_INCREF(&edns_type); - void* p = &edns_type; - PyModule_AddObject(mod, "EDNS", static_cast(p)); - - addClassVariable(edns_type, "SUPPORTED_VERSION", - Py_BuildValue("B", EDNS::SUPPORTED_VERSION)); - - return (true); + return (PyObject_TypeCheck(obj, &edns_type)); } + +const EDNS& +PyEDNS_ToEDNS(const PyObject* edns_obj) { + if (edns_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in EDNS PyObject conversion"); + } + const s_EDNS* edns = static_cast(edns_obj); + return (*edns->cppobj); +} + +} // end namespace python +} // end namespace dns +} // end namespace isc diff --git a/src/lib/dns/python/edns_python.h b/src/lib/dns/python/edns_python.h new file mode 100644 index 0000000000..30d92abe22 --- /dev/null +++ b/src/lib/dns/python/edns_python.h @@ -0,0 +1,64 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_EDNS_H +#define __PYTHON_EDNS_H 1 + +#include + +namespace isc { +namespace dns { +class EDNS; + +namespace python { + +extern PyTypeObject edns_type; + +/// This is a simple shortcut to create a python EDNS object (in the +/// form of a pointer to PyObject) with minimal exception safety. +/// On success, it returns a valid pointer to PyObject with a reference +/// counter of 1; if something goes wrong it throws an exception (it never +/// returns a NULL pointer). +/// This function is expected to be called within a try block +/// followed by necessary setup for python exception. +PyObject* createEDNSObject(const EDNS& source); + +/// \brief Checks if the given python object is a EDNS object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type EDNS, false otherwise +bool PyEDNS_Check(PyObject* obj); + +/// \brief Returns a reference to the EDNS object contained within the given +/// Python object. 
+/// +/// \note The given object MUST be of type EDNS; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyEDNS_Check() +/// +/// \note This is not a copy; if the EDNS is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param edns_obj The edns object to convert +const EDNS& PyEDNS_ToEDNS(const PyObject* edns_obj); + +} // namespace python +} // namespace dns +} // namespace isc +#endif // __PYTHON_EDNS_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/python/message_python.cc b/src/lib/dns/python/message_python.cc index 2842588b07..23494019c6 100644 --- a/src/lib/dns/python/message_python.cc +++ b/src/lib/dns/python/message_python.cc @@ -12,49 +12,42 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. +#define PY_SSIZE_T_CLEAN +#include + #include #include #include #include +#include +#include +#include "name_python.h" +#include "question_python.h" +#include "edns_python.h" +#include "rcode_python.h" +#include "opcode_python.h" +#include "rrset_python.h" +#include "message_python.h" +#include "messagerenderer_python.h" +#include "tsig_python.h" +#include "tsigrecord_python.h" +#include "pydnspp_common.h" + +using namespace std; using namespace isc::dns; +using namespace isc::dns::python; using namespace isc::util; +// Import pydoc text +#include "message_python_inc.cc" + namespace { -// -// Declaration of the custom exceptions -// Initialization and addition of these go in the initModulePart -// function at the end of this file -// -PyObject* po_MessageTooShort; -PyObject* po_InvalidMessageSection; -PyObject* po_InvalidMessageOperation; -PyObject* po_InvalidMessageUDPSize; - -// -// Definition of the classes -// - -// For each class, we need a struct, a helper functions (init, destroy, -// and static wrappers around the methods we export), a list of methods, -// and a type description - -// -// Message -// - -// The s_* Class simply coverst one instantiation of the object class s_Message : public PyObject { public: - Message* message; + isc::dns::Message* cppobj; }; -// -// We declare the functions here, the definitions are below -// the type definition of the object, since both can use the other -// - -// General creation and destruction int Message_init(s_Message* self, PyObject* args); void Message_destroy(s_Message* self); @@ -85,7 +78,7 @@ PyObject* Message_makeResponse(s_Message* self); PyObject* Message_toText(s_Message* self); PyObject* Message_str(PyObject* self); PyObject* Message_toWire(s_Message* self, PyObject* args); -PyObject* Message_fromWire(s_Message* self, PyObject* args); +PyObject* Message_fromWire(PyObject* pyself, PyObject* args); // This list contains the actual set of functions we have in // python. Each entry has @@ -167,17 +160,554 @@ PyMethodDef Message_methods[] = { "If the given message is not in RENDER mode, an " "InvalidMessageOperation is raised.\n" }, - { "from_wire", reinterpret_cast(Message_fromWire), METH_VARARGS, - "Parses the given wire format to a Message object.\n" - "The first argument is a Message to parse the data into.\n" - "The second argument must implement the buffer interface.\n" - "If the given message is not in PARSE mode, an " - "InvalidMessageOperation is raised.\n" - "Raises MessageTooShort, DNSMessageFORMERR or DNSMessageBADVERS " - " if there is a problem parsing the message." 
}, + { "from_wire", Message_fromWire, METH_VARARGS, Message_fromWire_doc }, { NULL, NULL, 0, NULL } }; +int +Message_init(s_Message* self, PyObject* args) { + int i; + + if (PyArg_ParseTuple(args, "i", &i)) { + PyErr_Clear(); + if (i == Message::PARSE) { + self->cppobj = new Message(Message::PARSE); + return (0); + } else if (i == Message::RENDER) { + self->cppobj = new Message(Message::RENDER); + return (0); + } else { + PyErr_SetString(PyExc_TypeError, "Message mode must be Message.PARSE or Message.RENDER"); + return (-1); + } + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "no valid type in constructor argument"); + return (-1); +} + +void +Message_destroy(s_Message* self) { + delete self->cppobj; + self->cppobj = NULL; + Py_TYPE(self)->tp_free(self); +} + +PyObject* +Message_getHeaderFlag(s_Message* self, PyObject* args) { + unsigned int messageflag; + if (!PyArg_ParseTuple(args, "I", &messageflag)) { + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "no valid type in get_header_flag argument"); + return (NULL); + } + + if (self->cppobj->getHeaderFlag( + static_cast(messageflag))) { + Py_RETURN_TRUE; + } else { + Py_RETURN_FALSE; + } +} + +PyObject* +Message_setHeaderFlag(s_Message* self, PyObject* args) { + long messageflag; + PyObject *on = Py_True; + + if (!PyArg_ParseTuple(args, "l|O!", &messageflag, &PyBool_Type, &on)) { + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "no valid type in set_header_flag argument"); + return (NULL); + } + if (messageflag < 0 || messageflag > 0xffff) { + PyErr_SetString(PyExc_ValueError, "Message header flag out of range"); + return (NULL); + } + + try { + self->cppobj->setHeaderFlag( + static_cast(messageflag), on == Py_True); + Py_RETURN_NONE; + } catch (const InvalidMessageOperation& imo) { + PyErr_Clear(); + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } catch (const isc::InvalidParameter& ip) { + PyErr_Clear(); + PyErr_SetString(po_InvalidParameter, ip.what()); + return (NULL); + } +} + +PyObject* +Message_getQid(s_Message* self) { + return (Py_BuildValue("I", self->cppobj->getQid())); +} + +PyObject* +Message_setQid(s_Message* self, PyObject* args) { + long id; + if (!PyArg_ParseTuple(args, "l", &id)) { + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "no valid type in set_qid argument"); + return (NULL); + } + if (id < 0 || id > 0xffff) { + PyErr_SetString(PyExc_ValueError, + "Message id out of range"); + return (NULL); + } + + try { + self->cppobj->setQid(id); + Py_RETURN_NONE; + } catch (const InvalidMessageOperation& imo) { + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } +} + +PyObject* +Message_getRcode(s_Message* self) { + try { + return (createRcodeObject(self->cppobj->getRcode())); + } catch (const InvalidMessageOperation& imo) { + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } catch (...) 
{ + PyErr_SetString(po_IscException, "Unexpected exception"); + return (NULL); + } +} + +PyObject* +Message_setRcode(s_Message* self, PyObject* args) { + PyObject* rcode; + if (!PyArg_ParseTuple(args, "O!", &rcode_type, &rcode)) { + return (NULL); + } + try { + self->cppobj->setRcode(PyRcode_ToRcode(rcode)); + Py_RETURN_NONE; + } catch (const InvalidMessageOperation& imo) { + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } +} + +PyObject* +Message_getOpcode(s_Message* self) { + try { + return (createOpcodeObject(self->cppobj->getOpcode())); + } catch (const InvalidMessageOperation& imo) { + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } catch (const exception& ex) { + const string ex_what = + "Failed to get message opcode: " + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + return (NULL); + } catch (...) { + PyErr_SetString(po_IscException, + "Unexpected exception getting opcode from message"); + return (NULL); + } +} + +PyObject* +Message_setOpcode(s_Message* self, PyObject* args) { + PyObject* opcode; + if (!PyArg_ParseTuple(args, "O!", &opcode_type, &opcode)) { + return (NULL); + } + try { + self->cppobj->setOpcode(PyOpcode_ToOpcode(opcode)); + Py_RETURN_NONE; + } catch (const InvalidMessageOperation& imo) { + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } +} + +PyObject* +Message_getEDNS(s_Message* self) { + ConstEDNSPtr src = self->cppobj->getEDNS(); + if (!src) { + Py_RETURN_NONE; + } + try { + return (createEDNSObject(*src)); + } catch (const exception& ex) { + const string ex_what = + "Failed to get EDNS from message: " + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) { + PyErr_SetString(PyExc_SystemError, + "Unexpected failure getting EDNS from message"); + } + return (NULL); +} + +PyObject* +Message_setEDNS(s_Message* self, PyObject* args) { + PyObject* edns; + if (!PyArg_ParseTuple(args, "O!", &edns_type, &edns)) { + return (NULL); + } + try { + self->cppobj->setEDNS(EDNSPtr(new EDNS(PyEDNS_ToEDNS(edns)))); + Py_RETURN_NONE; + } catch (const InvalidMessageOperation& imo) { + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } +} + +PyObject* +Message_getTSIGRecord(s_Message* self) { + try { + const TSIGRecord* tsig_record = self->cppobj->getTSIGRecord(); + + if (tsig_record == NULL) { + Py_RETURN_NONE; + } + return (createTSIGRecordObject(*tsig_record)); + } catch (const InvalidMessageOperation& ex) { + PyErr_SetString(po_InvalidMessageOperation, ex.what()); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure in getting TSIGRecord from message: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) { + PyErr_SetString(PyExc_SystemError, "Unexpected failure in " + "getting TSIGRecord from message"); + } + return (NULL); +} + +PyObject* +Message_getRRCount(s_Message* self, PyObject* args) { + unsigned int section; + if (!PyArg_ParseTuple(args, "I", §ion)) { + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "no valid type in get_rr_count argument"); + return (NULL); + } + try { + return (Py_BuildValue("I", self->cppobj->getRRCount( + static_cast(section)))); + } catch (const isc::OutOfRange& ex) { + PyErr_SetString(PyExc_OverflowError, ex.what()); + return (NULL); + } +} + +// TODO use direct iterators for these? (or simply lists for now?) 
+PyObject* +Message_getQuestion(s_Message* self) { + QuestionIterator qi, qi_end; + try { + qi = self->cppobj->beginQuestion(); + qi_end = self->cppobj->endQuestion(); + } catch (const InvalidMessageSection& ex) { + PyErr_SetString(po_InvalidMessageSection, ex.what()); + return (NULL); + } catch (...) { + PyErr_SetString(po_IscException, + "Unexpected exception in getting section iterators"); + return (NULL); + } + + PyObject* list = PyList_New(0); + if (list == NULL) { + return (NULL); + } + + try { + for (; qi != qi_end; ++qi) { + if (PyList_Append(list, createQuestionObject(**qi)) == -1) { + Py_DECREF(list); + return (NULL); + } + } + return (list); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure getting Question section: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) { + PyErr_SetString(PyExc_SystemError, + "Unexpected failure getting Question section"); + } + Py_DECREF(list); + return (NULL); +} + +PyObject* +Message_getSection(s_Message* self, PyObject* args) { + unsigned int section; + if (!PyArg_ParseTuple(args, "I", §ion)) { + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "no valid type in get_section argument"); + return (NULL); + } + RRsetIterator rrsi, rrsi_end; + try { + rrsi = self->cppobj->beginSection( + static_cast(section)); + rrsi_end = self->cppobj->endSection( + static_cast(section)); + } catch (const isc::OutOfRange& ex) { + PyErr_SetString(PyExc_OverflowError, ex.what()); + return (NULL); + } catch (const InvalidMessageSection& ex) { + PyErr_SetString(po_InvalidMessageSection, ex.what()); + return (NULL); + } catch (...) { + PyErr_SetString(po_IscException, + "Unexpected exception in getting section iterators"); + return (NULL); + } + + PyObject* list = PyList_New(0); + if (list == NULL) { + return (NULL); + } + try { + for (; rrsi != rrsi_end; ++rrsi) { + if (PyList_Append(list, createRRsetObject(**rrsi)) == -1) { + Py_DECREF(list); + return (NULL); + } + } + return (list); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure creating Question object: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) 
{ + PyErr_SetString(PyExc_SystemError, + "Unexpected failure creating Question object"); + } + Py_DECREF(list); + return (NULL); +} + +//static PyObject* Message_beginQuestion(s_Message* self, PyObject* args); +//static PyObject* Message_endQuestion(s_Message* self, PyObject* args); +//static PyObject* Message_beginSection(s_Message* self, PyObject* args); +//static PyObject* Message_endSection(s_Message* self, PyObject* args); +//static PyObject* Message_addQuestion(s_Message* self, PyObject* args); +PyObject* +Message_addQuestion(s_Message* self, PyObject* args) { + PyObject* question; + + if (!PyArg_ParseTuple(args, "O!", &question_type, &question)) { + return (NULL); + } + + self->cppobj->addQuestion(PyQuestion_ToQuestion(question)); + + Py_RETURN_NONE; +} + +PyObject* +Message_addRRset(s_Message* self, PyObject* args) { + PyObject *sign = Py_False; + int section; + PyObject* rrset; + if (!PyArg_ParseTuple(args, "iO!|O!", §ion, &rrset_type, &rrset, + &PyBool_Type, &sign)) { + return (NULL); + } + + try { + self->cppobj->addRRset(static_cast(section), + PyRRset_ToRRsetPtr(rrset), sign == Py_True); + Py_RETURN_NONE; + } catch (const InvalidMessageOperation& imo) { + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } catch (const isc::OutOfRange& ex) { + PyErr_SetString(PyExc_OverflowError, ex.what()); + return (NULL); + } catch (...) { + PyErr_SetString(po_IscException, + "Unexpected exception in adding RRset"); + return (NULL); + } +} + +PyObject* +Message_clear(s_Message* self, PyObject* args) { + int i; + if (PyArg_ParseTuple(args, "i", &i)) { + PyErr_Clear(); + if (i == Message::PARSE) { + self->cppobj->clear(Message::PARSE); + Py_RETURN_NONE; + } else if (i == Message::RENDER) { + self->cppobj->clear(Message::RENDER); + Py_RETURN_NONE; + } else { + PyErr_SetString(PyExc_TypeError, + "Message mode must be Message.PARSE or Message.RENDER"); + return (NULL); + } + } else { + return (NULL); + } +} + +PyObject* +Message_makeResponse(s_Message* self) { + self->cppobj->makeResponse(); + Py_RETURN_NONE; +} + +PyObject* +Message_toText(s_Message* self) { + // Py_BuildValue makes python objects from native data + try { + return (Py_BuildValue("s", self->cppobj->toText().c_str())); + } catch (const InvalidMessageOperation& imo) { + PyErr_Clear(); + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } catch (...) { + PyErr_SetString(po_IscException, "Unexpected exception"); + return (NULL); + } +} + +PyObject* +Message_str(PyObject* self) { + // Simply call the to_text method we already defined + return (PyObject_CallMethod(self, + const_cast("to_text"), + const_cast(""))); +} + +PyObject* +Message_toWire(s_Message* self, PyObject* args) { + PyObject* mr; + PyObject* tsig_ctx = NULL; + + if (PyArg_ParseTuple(args, "O!|O!", &messagerenderer_type, &mr, + &tsigcontext_type, &tsig_ctx)) { + try { + if (tsig_ctx == NULL) { + self->cppobj->toWire(PyMessageRenderer_ToMessageRenderer(mr)); + } else { + self->cppobj->toWire(PyMessageRenderer_ToMessageRenderer(mr), + PyTSIGContext_ToTSIGContext(tsig_ctx)); + } + // If we return NULL it is seen as an error, so use this for + // None returns + Py_RETURN_NONE; + } catch (const InvalidMessageOperation& imo) { + PyErr_Clear(); + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } catch (const TSIGContextError& ex) { + // toWire() with a TSIG context can fail due to this if the + // python program has a bug. 
+ PyErr_SetString(po_TSIGContextError, ex.what()); + return (NULL); + } catch (const std::exception& ex) { + // Other exceptions should be rare (most likely an implementation + // bug) + PyErr_SetString(po_TSIGContextError, ex.what()); + return (NULL); + } catch (...) { + PyErr_SetString(PyExc_RuntimeError, + "Unexpected C++ exception in Message.to_wire"); + return (NULL); + } + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "toWire argument must be a MessageRenderer"); + return (NULL); +} + +PyObject* +Message_fromWire(PyObject* pyself, PyObject* args) { + s_Message* const self = static_cast(pyself); + const char* b; + Py_ssize_t len; + unsigned int options = Message::PARSE_DEFAULT; + + if (PyArg_ParseTuple(args, "y#", &b, &len) || + PyArg_ParseTuple(args, "y#I", &b, &len, &options)) { + // We need to clear the error in case the first call to ParseTuple + // fails. + PyErr_Clear(); + + InputBuffer inbuf(b, len); + try { + self->cppobj->fromWire( + inbuf, static_cast(options)); + Py_RETURN_NONE; + } catch (const InvalidMessageOperation& imo) { + PyErr_SetString(po_InvalidMessageOperation, imo.what()); + return (NULL); + } catch (const DNSMessageFORMERR& dmfe) { + PyErr_SetString(po_DNSMessageFORMERR, dmfe.what()); + return (NULL); + } catch (const DNSMessageBADVERS& dmfe) { + PyErr_SetString(po_DNSMessageBADVERS, dmfe.what()); + return (NULL); + } catch (const MessageTooShort& mts) { + PyErr_SetString(po_MessageTooShort, mts.what()); + return (NULL); + } catch (const InvalidBufferPosition& ex) { + PyErr_SetString(po_DNSMessageFORMERR, ex.what()); + return (NULL); + } catch (const exception& ex) { + const string ex_what = + "Error in Message.from_wire: " + string(ex.what()); + PyErr_SetString(PyExc_RuntimeError, ex_what.c_str()); + return (NULL); + } catch (...) { + PyErr_SetString(PyExc_RuntimeError, + "Unexpected exception in Message.from_wire"); + return (NULL); + } + } + + PyErr_SetString(PyExc_TypeError, + "from_wire() arguments must be a byte object and " + "(optional) parse options"); + return (NULL); +} + +} // end of unnamed namespace + +namespace isc { +namespace dns { +namespace python { + +// +// Declaration of the custom exceptions +// Initialization and addition of these go in the initModulePart +// function in pydnspp.cc +// +PyObject* po_MessageTooShort; +PyObject* po_InvalidMessageSection; +PyObject* po_InvalidMessageOperation; +PyObject* po_InvalidMessageUDPSize; + // This defines the complete type for reflection in python and // parsing of PyObject* to s_Message // Most of the functions are not actually implemented and NULL here. 
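With the rewritten Message_fromWire above, from_wire() now takes the raw wire data plus an optional parse-options value. A minimal sketch of driving it from Python follows; it is illustrative only. Message.PARSE, from_wire(), get_section() and SECTION_ANSWER appear in this file, while the exposure of PARSE_DEFAULT and PRESERVE_ORDER as Python class constants is assumed to happen in pydnspp.cc (it is not part of the hunks shown here).

    # Illustrative sketch, not part of the patch; constant exposure noted above is an assumption.
    from pydnspp import Message

    wire = open("query.bin", "rb").read()           # hypothetical file holding raw DNS wire data

    msg = Message(Message.PARSE)
    msg.from_wire(wire)                             # default options: RRs are merged into RRsets

    ordered = Message(Message.PARSE)
    ordered.from_wire(wire, Message.PRESERVE_ORDER) # one RR per RRset, original order preserved
    answers = ordered.get_section(Message.SECTION_ANSWER)
    print(len(answers))
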
@@ -195,7 +725,7 @@ PyTypeObject message_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call Message_str, // tp_str NULL, // tp_getattro @@ -231,578 +761,6 @@ PyTypeObject message_type = { 0 // tp_version_tag }; -int -Message_init(s_Message* self, PyObject* args) { - int i; - - if (PyArg_ParseTuple(args, "i", &i)) { - PyErr_Clear(); - if (i == Message::PARSE) { - self->message = new Message(Message::PARSE); - return (0); - } else if (i == Message::RENDER) { - self->message = new Message(Message::RENDER); - return (0); - } else { - PyErr_SetString(PyExc_TypeError, "Message mode must be Message.PARSE or Message.RENDER"); - return (-1); - } - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "no valid type in constructor argument"); - return (-1); -} - -void -Message_destroy(s_Message* self) { - delete self->message; - self->message = NULL; - Py_TYPE(self)->tp_free(self); -} - -PyObject* -Message_getHeaderFlag(s_Message* self, PyObject* args) { - unsigned int messageflag; - if (!PyArg_ParseTuple(args, "I", &messageflag)) { - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "no valid type in get_header_flag argument"); - return (NULL); - } - - if (self->message->getHeaderFlag( - static_cast(messageflag))) { - Py_RETURN_TRUE; - } else { - Py_RETURN_FALSE; - } -} - -PyObject* -Message_setHeaderFlag(s_Message* self, PyObject* args) { - long messageflag; - PyObject *on = Py_True; - - if (!PyArg_ParseTuple(args, "l|O!", &messageflag, &PyBool_Type, &on)) { - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "no valid type in set_header_flag argument"); - return (NULL); - } - if (messageflag < 0 || messageflag > 0xffff) { - PyErr_SetString(PyExc_ValueError, "Message header flag out of range"); - return (NULL); - } - - try { - self->message->setHeaderFlag( - static_cast(messageflag), on == Py_True); - Py_RETURN_NONE; - } catch (const InvalidMessageOperation& imo) { - PyErr_Clear(); - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - return (NULL); - } catch (const isc::InvalidParameter& ip) { - PyErr_Clear(); - PyErr_SetString(po_InvalidParameter, ip.what()); - return (NULL); - } -} - -PyObject* -Message_getQid(s_Message* self) { - return (Py_BuildValue("I", self->message->getQid())); -} - -PyObject* -Message_setQid(s_Message* self, PyObject* args) { - long id; - if (!PyArg_ParseTuple(args, "l", &id)) { - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "no valid type in set_qid argument"); - return (NULL); - } - if (id < 0 || id > 0xffff) { - PyErr_SetString(PyExc_ValueError, - "Message id out of range"); - return (NULL); - } - - try { - self->message->setQid(id); - Py_RETURN_NONE; - } catch (const InvalidMessageOperation& imo) { - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - return (NULL); - } -} - -PyObject* -Message_getRcode(s_Message* self) { - s_Rcode* rcode; - - rcode = static_cast(rcode_type.tp_alloc(&rcode_type, 0)); - if (rcode != NULL) { - rcode->cppobj = NULL; - try { - rcode->cppobj = new Rcode(self->message->getRcode()); - } catch (const InvalidMessageOperation& imo) { - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - } catch (...) 
{ - PyErr_SetString(po_IscException, "Unexpected exception"); - } - if (rcode->cppobj == NULL) { - Py_DECREF(rcode); - return (NULL); - } - } - - return (rcode); -} - -PyObject* -Message_setRcode(s_Message* self, PyObject* args) { - s_Rcode* rcode; - if (!PyArg_ParseTuple(args, "O!", &rcode_type, &rcode)) { - return (NULL); - } - try { - self->message->setRcode(*rcode->cppobj); - Py_RETURN_NONE; - } catch (const InvalidMessageOperation& imo) { - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - return (NULL); - } -} - -PyObject* -Message_getOpcode(s_Message* self) { - s_Opcode* opcode; - - opcode = static_cast(opcode_type.tp_alloc(&opcode_type, 0)); - if (opcode != NULL) { - opcode->opcode = NULL; - try { - opcode->opcode = new Opcode(self->message->getOpcode()); - } catch (const InvalidMessageOperation& imo) { - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - } catch (...) { - PyErr_SetString(po_IscException, "Unexpected exception"); - } - if (opcode->opcode == NULL) { - Py_DECREF(opcode); - return (NULL); - } - } - - return (opcode); -} - -PyObject* -Message_setOpcode(s_Message* self, PyObject* args) { - s_Opcode* opcode; - if (!PyArg_ParseTuple(args, "O!", &opcode_type, &opcode)) { - return (NULL); - } - try { - self->message->setOpcode(*opcode->opcode); - Py_RETURN_NONE; - } catch (const InvalidMessageOperation& imo) { - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - return (NULL); - } -} - -PyObject* -Message_getEDNS(s_Message* self) { - s_EDNS* edns; - EDNS* edns_body; - ConstEDNSPtr src = self->message->getEDNS(); - - if (!src) { - Py_RETURN_NONE; - } - if ((edns_body = new(nothrow) EDNS(*src)) == NULL) { - return (PyErr_NoMemory()); - } - edns = static_cast(opcode_type.tp_alloc(&edns_type, 0)); - if (edns != NULL) { - edns->edns = edns_body; - } - - return (edns); -} - -PyObject* -Message_setEDNS(s_Message* self, PyObject* args) { - s_EDNS* edns; - if (!PyArg_ParseTuple(args, "O!", &edns_type, &edns)) { - return (NULL); - } - try { - self->message->setEDNS(EDNSPtr(new EDNS(*edns->edns))); - Py_RETURN_NONE; - } catch (const InvalidMessageOperation& imo) { - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - return (NULL); - } -} - -PyObject* -Message_getTSIGRecord(s_Message* self) { - try { - const TSIGRecord* tsig_record = self->message->getTSIGRecord(); - - if (tsig_record == NULL) { - Py_RETURN_NONE; - } - return (createTSIGRecordObject(*tsig_record)); - } catch (const InvalidMessageOperation& ex) { - PyErr_SetString(po_InvalidMessageOperation, ex.what()); - } catch (const exception& ex) { - const string ex_what = - "Unexpected failure in getting TSIGRecord from message: " + - string(ex.what()); - PyErr_SetString(po_IscException, ex_what.c_str()); - } catch (...) { - PyErr_SetString(PyExc_SystemError, "Unexpected failure in " - "getting TSIGRecord from message"); - } - return (NULL); -} - -PyObject* -Message_getRRCount(s_Message* self, PyObject* args) { - unsigned int section; - if (!PyArg_ParseTuple(args, "I", §ion)) { - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "no valid type in get_rr_count argument"); - return (NULL); - } - try { - return (Py_BuildValue("I", self->message->getRRCount( - static_cast(section)))); - } catch (const isc::OutOfRange& ex) { - PyErr_SetString(PyExc_OverflowError, ex.what()); - return (NULL); - } -} - -// TODO use direct iterators for these? (or simply lists for now?) 
-PyObject* -Message_getQuestion(s_Message* self) { - QuestionIterator qi, qi_end; - try { - qi = self->message->beginQuestion(); - qi_end = self->message->endQuestion(); - } catch (const InvalidMessageSection& ex) { - PyErr_SetString(po_InvalidMessageSection, ex.what()); - return (NULL); - } catch (...) { - PyErr_SetString(po_IscException, - "Unexpected exception in getting section iterators"); - return (NULL); - } - - PyObject* list = PyList_New(0); - if (list == NULL) { - return (NULL); - } - - for (; qi != qi_end; ++qi) { - s_Question *question = static_cast( - question_type.tp_alloc(&question_type, 0)); - if (question == NULL) { - Py_DECREF(question); - Py_DECREF(list); - return (NULL); - } - question->question = *qi; - if (PyList_Append(list, question) == -1) { - Py_DECREF(question); - Py_DECREF(list); - return (NULL); - } - Py_DECREF(question); - } - return (list); -} - -PyObject* -Message_getSection(s_Message* self, PyObject* args) { - unsigned int section; - if (!PyArg_ParseTuple(args, "I", §ion)) { - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "no valid type in get_section argument"); - return (NULL); - } - RRsetIterator rrsi, rrsi_end; - try { - rrsi = self->message->beginSection( - static_cast(section)); - rrsi_end = self->message->endSection( - static_cast(section)); - } catch (const isc::OutOfRange& ex) { - PyErr_SetString(PyExc_OverflowError, ex.what()); - return (NULL); - } catch (const InvalidMessageSection& ex) { - PyErr_SetString(po_InvalidMessageSection, ex.what()); - return (NULL); - } catch (...) { - PyErr_SetString(po_IscException, - "Unexpected exception in getting section iterators"); - return (NULL); - } - - PyObject* list = PyList_New(0); - if (list == NULL) { - return (NULL); - } - for (; rrsi != rrsi_end; ++rrsi) { - s_RRset *rrset = static_cast( - rrset_type.tp_alloc(&rrset_type, 0)); - if (rrset == NULL) { - Py_DECREF(rrset); - Py_DECREF(list); - return (NULL); - } - rrset->rrset = *rrsi; - if (PyList_Append(list, rrset) == -1) { - Py_DECREF(rrset); - Py_DECREF(list); - return (NULL); - } - // PyList_Append increases refcount, so we remove ours since - // we don't need it anymore - Py_DECREF(rrset); - } - return (list); -} - -//static PyObject* Message_beginQuestion(s_Message* self, PyObject* args); -//static PyObject* Message_endQuestion(s_Message* self, PyObject* args); -//static PyObject* Message_beginSection(s_Message* self, PyObject* args); -//static PyObject* Message_endSection(s_Message* self, PyObject* args); -//static PyObject* Message_addQuestion(s_Message* self, PyObject* args); -PyObject* -Message_addQuestion(s_Message* self, PyObject* args) { - s_Question *question; - - if (!PyArg_ParseTuple(args, "O!", &question_type, &question)) { - return (NULL); - } - - self->message->addQuestion(question->question); - - Py_RETURN_NONE; -} - -PyObject* -Message_addRRset(s_Message* self, PyObject* args) { - PyObject *sign = Py_False; - int section; - s_RRset* rrset; - if (!PyArg_ParseTuple(args, "iO!|O!", §ion, &rrset_type, &rrset, - &PyBool_Type, &sign)) { - return (NULL); - } - - try { - self->message->addRRset(static_cast(section), - rrset->rrset, sign == Py_True); - Py_RETURN_NONE; - } catch (const InvalidMessageOperation& imo) { - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - return (NULL); - } catch (const isc::OutOfRange& ex) { - PyErr_SetString(PyExc_OverflowError, ex.what()); - return (NULL); - } catch (...) 
{ - PyErr_SetString(po_IscException, - "Unexpected exception in adding RRset"); - return (NULL); - } -} - -PyObject* -Message_clear(s_Message* self, PyObject* args) { - int i; - if (PyArg_ParseTuple(args, "i", &i)) { - PyErr_Clear(); - if (i == Message::PARSE) { - self->message->clear(Message::PARSE); - Py_RETURN_NONE; - } else if (i == Message::RENDER) { - self->message->clear(Message::RENDER); - Py_RETURN_NONE; - } else { - PyErr_SetString(PyExc_TypeError, - "Message mode must be Message.PARSE or Message.RENDER"); - return (NULL); - } - } else { - return (NULL); - } -} - -PyObject* -Message_makeResponse(s_Message* self) { - self->message->makeResponse(); - Py_RETURN_NONE; -} - -PyObject* -Message_toText(s_Message* self) { - // Py_BuildValue makes python objects from native data - try { - return (Py_BuildValue("s", self->message->toText().c_str())); - } catch (const InvalidMessageOperation& imo) { - PyErr_Clear(); - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - return (NULL); - } catch (...) { - PyErr_SetString(po_IscException, "Unexpected exception"); - return (NULL); - } -} - -PyObject* -Message_str(PyObject* self) { - // Simply call the to_text method we already defined - return (PyObject_CallMethod(self, - const_cast("to_text"), - const_cast(""))); -} - -PyObject* -Message_toWire(s_Message* self, PyObject* args) { - s_MessageRenderer* mr; - s_TSIGContext* tsig_ctx = NULL; - - if (PyArg_ParseTuple(args, "O!|O!", &messagerenderer_type, &mr, - &tsigcontext_type, &tsig_ctx)) { - try { - if (tsig_ctx == NULL) { - self->message->toWire(*mr->messagerenderer); - } else { - self->message->toWire(*mr->messagerenderer, *tsig_ctx->cppobj); - } - // If we return NULL it is seen as an error, so use this for - // None returns - Py_RETURN_NONE; - } catch (const InvalidMessageOperation& imo) { - PyErr_Clear(); - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - return (NULL); - } catch (const TSIGContextError& ex) { - // toWire() with a TSIG context can fail due to this if the - // python program has a bug. 
- PyErr_SetString(po_TSIGContextError, ex.what()); - return (NULL); - } - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "toWire argument must be a MessageRenderer"); - return (NULL); -} - -PyObject* -Message_fromWire(s_Message* self, PyObject* args) { - const char* b; - Py_ssize_t len; - if (!PyArg_ParseTuple(args, "y#", &b, &len)) { - return (NULL); - } - - InputBuffer inbuf(b, len); - try { - self->message->fromWire(inbuf); - Py_RETURN_NONE; - } catch (const InvalidMessageOperation& imo) { - PyErr_SetString(po_InvalidMessageOperation, imo.what()); - return (NULL); - } catch (const DNSMessageFORMERR& dmfe) { - PyErr_SetString(po_DNSMessageFORMERR, dmfe.what()); - return (NULL); - } catch (const DNSMessageBADVERS& dmfe) { - PyErr_SetString(po_DNSMessageBADVERS, dmfe.what()); - return (NULL); - } catch (const MessageTooShort& mts) { - PyErr_SetString(po_MessageTooShort, mts.what()); - return (NULL); - } -} - -// Module Initialization, all statics are initialized here -bool -initModulePart_Message(PyObject* mod) { - if (PyType_Ready(&message_type) < 0) { - return (false); - } - Py_INCREF(&message_type); - - // Class variables - // These are added to the tp_dict of the type object - // - addClassVariable(message_type, "PARSE", - Py_BuildValue("I", Message::PARSE)); - addClassVariable(message_type, "RENDER", - Py_BuildValue("I", Message::RENDER)); - - addClassVariable(message_type, "HEADERFLAG_QR", - Py_BuildValue("I", Message::HEADERFLAG_QR)); - addClassVariable(message_type, "HEADERFLAG_AA", - Py_BuildValue("I", Message::HEADERFLAG_AA)); - addClassVariable(message_type, "HEADERFLAG_TC", - Py_BuildValue("I", Message::HEADERFLAG_TC)); - addClassVariable(message_type, "HEADERFLAG_RD", - Py_BuildValue("I", Message::HEADERFLAG_RD)); - addClassVariable(message_type, "HEADERFLAG_RA", - Py_BuildValue("I", Message::HEADERFLAG_RA)); - addClassVariable(message_type, "HEADERFLAG_AD", - Py_BuildValue("I", Message::HEADERFLAG_AD)); - addClassVariable(message_type, "HEADERFLAG_CD", - Py_BuildValue("I", Message::HEADERFLAG_CD)); - - addClassVariable(message_type, "SECTION_QUESTION", - Py_BuildValue("I", Message::SECTION_QUESTION)); - addClassVariable(message_type, "SECTION_ANSWER", - Py_BuildValue("I", Message::SECTION_ANSWER)); - addClassVariable(message_type, "SECTION_AUTHORITY", - Py_BuildValue("I", Message::SECTION_AUTHORITY)); - addClassVariable(message_type, "SECTION_ADDITIONAL", - Py_BuildValue("I", Message::SECTION_ADDITIONAL)); - - addClassVariable(message_type, "DEFAULT_MAX_UDPSIZE", - Py_BuildValue("I", Message::DEFAULT_MAX_UDPSIZE)); - - /* Class-specific exceptions */ - po_MessageTooShort = PyErr_NewException("pydnspp.MessageTooShort", NULL, - NULL); - PyModule_AddObject(mod, "MessageTooShort", po_MessageTooShort); - po_InvalidMessageSection = - PyErr_NewException("pydnspp.InvalidMessageSection", NULL, NULL); - PyModule_AddObject(mod, "InvalidMessageSection", po_InvalidMessageSection); - po_InvalidMessageOperation = - PyErr_NewException("pydnspp.InvalidMessageOperation", NULL, NULL); - PyModule_AddObject(mod, "InvalidMessageOperation", - po_InvalidMessageOperation); - po_InvalidMessageUDPSize = - PyErr_NewException("pydnspp.InvalidMessageUDPSize", NULL, NULL); - PyModule_AddObject(mod, "InvalidMessageUDPSize", po_InvalidMessageUDPSize); - po_DNSMessageBADVERS = PyErr_NewException("pydnspp.DNSMessageBADVERS", - NULL, NULL); - PyModule_AddObject(mod, "DNSMessageBADVERS", po_DNSMessageBADVERS); - - PyModule_AddObject(mod, "Message", - reinterpret_cast(&message_type)); - - - return 
(true); -} -} // end of unnamed namespace +} // end python namespace +} // end dns namespace +} // end isc namespace diff --git a/src/lib/dns/python/message_python.h b/src/lib/dns/python/message_python.h new file mode 100644 index 0000000000..be238907db --- /dev/null +++ b/src/lib/dns/python/message_python.h @@ -0,0 +1,40 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_MESSAGE_H +#define __PYTHON_MESSAGE_H 1 + +#include + +namespace isc { +namespace dns { +class Message; + +namespace python { + +extern PyObject* po_MessageTooShort; +extern PyObject* po_InvalidMessageSection; +extern PyObject* po_InvalidMessageOperation; +extern PyObject* po_InvalidMessageUDPSize; + +extern PyTypeObject message_type; + +} // namespace python +} // namespace dns +} // namespace isc +#endif // __PYTHON_MESSAGE_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/python/message_python_inc.cc b/src/lib/dns/python/message_python_inc.cc new file mode 100644 index 0000000000..561c494436 --- /dev/null +++ b/src/lib/dns/python/message_python_inc.cc @@ -0,0 +1,41 @@ +namespace { +const char* const Message_fromWire_doc = "\ +from_wire(data, options=PARSE_DEFAULT)\n\ +\n\ +(Re)build a Message object from wire-format data.\n\ +\n\ +This method parses the given wire format data to build a complete\n\ +Message object. On success, the values of the header section fields\n\ +can be accessible via corresponding get methods, and the question and\n\ +following sections can be accessible via the corresponding iterators.\n\ +If the message contains an EDNS or TSIG, they can be accessible via\n\ +get_edns() and get_tsig_record(), respectively.\n\ +\n\ +This Message must be in the PARSE mode.\n\ +\n\ +This method performs strict validation on the given message based on\n\ +the DNS protocol specifications. If the given message data is invalid,\n\ +this method throws an exception (see the exception list).\n\ +\n\ +By default, this method combines RRs of the same name, RR type and RR\n\ +class in a section into a single RRset, even if they are interleaved\n\ +with a different type of RR (though it would be a rare case in\n\ +practice). If the PRESERVE_ORDER option is specified, it handles each\n\ +RR separately, in the appearing order, and converts it to a separate\n\ +RRset (so this RRset should contain exactly one Rdata). This mode will\n\ +be necessary when the higher level protocol is ordering conscious. 
For\n\ +example, in AXFR and IXFR, the position of the SOA RRs are crucial.\n\ +\n\ +Exceptions:\n\ + InvalidMessageOperation Message is in the RENDER mode\n\ + DNSMessageFORMERR The given message data is syntactically\n\ + MessageTooShort The given data is shorter than a valid header\n\ + section\n\ + Others Name, Rdata, and EDNS classes can also throw\n\ +\n\ +Parameters:\n\ + data A byte object of the wire data\n\ + options Parse options\n\ +\n\ +"; +} // unnamed namespace diff --git a/src/lib/dns/python/messagerenderer_python.cc b/src/lib/dns/python/messagerenderer_python.cc index e6f5d3e259..bb896228b2 100644 --- a/src/lib/dns/python/messagerenderer_python.cc +++ b/src/lib/dns/python/messagerenderer_python.cc @@ -17,6 +17,7 @@ #include #include +#include #include "pydnspp_common.h" #include "messagerenderer_python.h" @@ -24,15 +25,21 @@ using namespace isc::dns; using namespace isc::dns::python; using namespace isc::util; - -// MessageRenderer - -s_MessageRenderer::s_MessageRenderer() : outputbuffer(NULL), - messagerenderer(NULL) -{ -} +using namespace isc::util::python; namespace { +// The s_* Class simply covers one instantiation of the object. +// +// since we don't use *Buffer in the python version (but work with +// the already existing bytearray type where we use these custom buffers +// in C++, we need to keep track of one here. +class s_MessageRenderer : public PyObject { +public: + s_MessageRenderer(); + isc::util::OutputBuffer* outputbuffer; + MessageRenderer* cppobj; +}; + int MessageRenderer_init(s_MessageRenderer* self); void MessageRenderer_destroy(s_MessageRenderer* self); @@ -72,15 +79,15 @@ PyMethodDef MessageRenderer_methods[] = { int MessageRenderer_init(s_MessageRenderer* self) { self->outputbuffer = new OutputBuffer(4096); - self->messagerenderer = new MessageRenderer(*self->outputbuffer); + self->cppobj = new MessageRenderer(*self->outputbuffer); return (0); } void MessageRenderer_destroy(s_MessageRenderer* self) { - delete self->messagerenderer; + delete self->cppobj; delete self->outputbuffer; - self->messagerenderer = NULL; + self->cppobj = NULL; self->outputbuffer = NULL; Py_TYPE(self)->tp_free(self); } @@ -88,18 +95,18 @@ MessageRenderer_destroy(s_MessageRenderer* self) { PyObject* MessageRenderer_getData(s_MessageRenderer* self) { return (Py_BuildValue("y#", - self->messagerenderer->getData(), - self->messagerenderer->getLength())); + self->cppobj->getData(), + self->cppobj->getLength())); } PyObject* MessageRenderer_getLength(s_MessageRenderer* self) { - return (Py_BuildValue("I", self->messagerenderer->getLength())); + return (Py_BuildValue("I", self->cppobj->getLength())); } PyObject* MessageRenderer_isTruncated(s_MessageRenderer* self) { - if (self->messagerenderer->isTruncated()) { + if (self->cppobj->isTruncated()) { Py_RETURN_TRUE; } else { Py_RETURN_FALSE; @@ -108,17 +115,17 @@ MessageRenderer_isTruncated(s_MessageRenderer* self) { PyObject* MessageRenderer_getLengthLimit(s_MessageRenderer* self) { - return (Py_BuildValue("I", self->messagerenderer->getLengthLimit())); + return (Py_BuildValue("I", self->cppobj->getLengthLimit())); } PyObject* MessageRenderer_getCompressMode(s_MessageRenderer* self) { - return (Py_BuildValue("I", self->messagerenderer->getCompressMode())); + return (Py_BuildValue("I", self->cppobj->getCompressMode())); } PyObject* MessageRenderer_setTruncated(s_MessageRenderer* self) { - self->messagerenderer->setTruncated(); + self->cppobj->setTruncated(); Py_RETURN_NONE; } @@ -138,7 +145,7 @@ 
MessageRenderer_setLengthLimit(s_MessageRenderer* self, "MessageRenderer length limit out of range"); return (NULL); } - self->messagerenderer->setLengthLimit(lengthlimit); + self->cppobj->setLengthLimit(lengthlimit); Py_RETURN_NONE; } @@ -152,12 +159,12 @@ MessageRenderer_setCompressMode(s_MessageRenderer* self, } if (mode == MessageRenderer::CASE_INSENSITIVE) { - self->messagerenderer->setCompressMode(MessageRenderer::CASE_INSENSITIVE); + self->cppobj->setCompressMode(MessageRenderer::CASE_INSENSITIVE); // If we return NULL it is seen as an error, so use this for // None returns, it also applies to CASE_SENSITIVE. Py_RETURN_NONE; } else if (mode == MessageRenderer::CASE_SENSITIVE) { - self->messagerenderer->setCompressMode(MessageRenderer::CASE_SENSITIVE); + self->cppobj->setCompressMode(MessageRenderer::CASE_SENSITIVE); Py_RETURN_NONE; } else { PyErr_SetString(PyExc_TypeError, @@ -169,12 +176,11 @@ MessageRenderer_setCompressMode(s_MessageRenderer* self, PyObject* MessageRenderer_clear(s_MessageRenderer* self) { - self->messagerenderer->clear(); + self->cppobj->clear(); Py_RETURN_NONE; } } // end of unnamed namespace -// end of MessageRenderer namespace isc { namespace dns { namespace python { @@ -233,37 +239,29 @@ PyTypeObject messagerenderer_type = { 0 // tp_version_tag }; -// Module Initialization, all statics are initialized here +// If we need a createMessageRendererObject(), should we copy? can we? +// copy the existing buffer into a new one, then create a new renderer with +// that buffer? + bool -initModulePart_MessageRenderer(PyObject* mod) { - // Add the exceptions to the module - - // Add the enums to the module - - // Add the constants to the module - - // Add the classes to the module - // We initialize the static description object with PyType_Ready(), - // then add it to the module - - // NameComparisonResult - if (PyType_Ready(&messagerenderer_type) < 0) { - return (false); +PyMessageRenderer_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - Py_INCREF(&messagerenderer_type); - - // Class variables - // These are added to the tp_dict of the type object - addClassVariable(messagerenderer_type, "CASE_INSENSITIVE", - Py_BuildValue("I", MessageRenderer::CASE_INSENSITIVE)); - addClassVariable(messagerenderer_type, "CASE_SENSITIVE", - Py_BuildValue("I", MessageRenderer::CASE_SENSITIVE)); - - PyModule_AddObject(mod, "MessageRenderer", - reinterpret_cast(&messagerenderer_type)); - - return (true); + return (PyObject_TypeCheck(obj, &messagerenderer_type)); } + +MessageRenderer& +PyMessageRenderer_ToMessageRenderer(PyObject* messagerenderer_obj) { + if (messagerenderer_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in MessageRenderer PyObject conversion"); + } + s_MessageRenderer* messagerenderer = static_cast(messagerenderer_obj); + return (*messagerenderer->cppobj); +} + + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/python/messagerenderer_python.h b/src/lib/dns/python/messagerenderer_python.h index 3bb096ed6c..ea9a9402d9 100644 --- a/src/lib/dns/python/messagerenderer_python.h +++ b/src/lib/dns/python/messagerenderer_python.h @@ -17,30 +17,35 @@ #include +#include + namespace isc { -namespace util { -class OutputBuffer; -} namespace dns { class MessageRenderer; namespace python { -// The s_* Class simply covers one instantiation of the object. 
-// -// since we don't use *Buffer in the python version (but work with -// the already existing bytearray type where we use these custom buffers -// in C++, we need to keep track of one here. -class s_MessageRenderer : public PyObject { -public: - s_MessageRenderer(); - isc::util::OutputBuffer* outputbuffer; - MessageRenderer* messagerenderer; -}; - extern PyTypeObject messagerenderer_type; -bool initModulePart_MessageRenderer(PyObject* mod); +/// \brief Checks if the given python object is a MessageRenderer object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type MessageRenderer, false otherwise +bool PyMessageRenderer_Check(PyObject* obj); + +/// \brief Returns a reference to the MessageRenderer object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type MessageRenderer; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyMessageRenderer_Check() +/// +/// \note This is not a copy; if the MessageRenderer is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param messagerenderer_obj The messagerenderer object to convert +MessageRenderer& PyMessageRenderer_ToMessageRenderer(PyObject* messagerenderer_obj); } // namespace python } // namespace dns diff --git a/src/lib/dns/python/name_python.cc b/src/lib/dns/python/name_python.cc index d00c6f7c89..404344549b 100644 --- a/src/lib/dns/python/name_python.cc +++ b/src/lib/dns/python/name_python.cc @@ -25,20 +25,25 @@ #include "messagerenderer_python.h" #include "name_python.h" -// -// Definition of the classes -// - -// For each class, we need a struct, a helper functions (init, destroy, -// and static wrappers around the methods we export), a list of methods, -// and a type description using namespace isc::dns; using namespace isc::dns::python; using namespace isc::util; using namespace isc::util::python; namespace { -// NameComparisonResult +// The s_* Class simply covers one instantiation of the object. +class s_NameComparisonResult : public PyObject { +public: + s_NameComparisonResult() : cppobj(NULL) {} + NameComparisonResult* cppobj; +}; + +class s_Name : public PyObject { +public: + s_Name() : cppobj(NULL), position(0) {} + Name* cppobj; + size_t position; +}; int NameComparisonResult_init(s_NameComparisonResult*, PyObject*); void NameComparisonResult_destroy(s_NameComparisonResult* self); @@ -84,9 +89,7 @@ PyObject* NameComparisonResult_getRelation(s_NameComparisonResult* self) { return (Py_BuildValue("I", self->cppobj->getRelation())); } -// end of NameComparisonResult -// Name // Shortcut type which would be convenient for adding class variables safely. 
typedef CPPPyObjectContainer NameContainer; @@ -292,7 +295,7 @@ Name_str(PyObject* self) { PyObject* Name_toWire(s_Name* self, PyObject* args) { PyObject* bytes; - s_MessageRenderer* mr; + PyObject* mr; if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { PyObject* bytes_o = bytes; @@ -306,7 +309,7 @@ Name_toWire(s_Name* self, PyObject* args) { Py_DECREF(name_bytes); return (result); } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { - self->cppobj->toWire(*mr->messagerenderer); + self->cppobj->toWire(PyMessageRenderer_ToMessageRenderer(mr)); // If we return NULL it is seen as an error, so use this for // None returns Py_RETURN_NONE; @@ -495,7 +498,7 @@ Name_isWildCard(s_Name* self) { Py_RETURN_FALSE; } } -// end of Name + } // end of unnamed namespace namespace isc { @@ -634,94 +637,32 @@ PyTypeObject name_type = { 0 // tp_version_tag }; -// Module Initialization, all statics are initialized here -bool -initModulePart_Name(PyObject* mod) { - // Add the classes to the module - // We initialize the static description object with PyType_Ready(), - // then add it to the module - - // - // NameComparisonResult - // - if (PyType_Ready(&name_comparison_result_type) < 0) { - return (false); - } - Py_INCREF(&name_comparison_result_type); - - // Add the enums to the module - po_NameRelation = Py_BuildValue("{i:s,i:s,i:s,i:s}", - NameComparisonResult::SUPERDOMAIN, "SUPERDOMAIN", - NameComparisonResult::SUBDOMAIN, "SUBDOMAIN", - NameComparisonResult::EQUAL, "EQUAL", - NameComparisonResult::COMMONANCESTOR, "COMMONANCESTOR"); - addClassVariable(name_comparison_result_type, "NameRelation", po_NameRelation); - - PyModule_AddObject(mod, "NameComparisonResult", - reinterpret_cast(&name_comparison_result_type)); - - // - // Name - // - - if (PyType_Ready(&name_type) < 0) { - return (false); - } - Py_INCREF(&name_type); - - // Add the constants to the module - addClassVariable(name_type, "MAX_WIRE", Py_BuildValue("I", Name::MAX_WIRE)); - addClassVariable(name_type, "MAX_LABELS", Py_BuildValue("I", Name::MAX_LABELS)); - addClassVariable(name_type, "MAX_LABELLEN", Py_BuildValue("I", Name::MAX_LABELLEN)); - addClassVariable(name_type, "MAX_COMPRESS_POINTER", Py_BuildValue("I", Name::MAX_COMPRESS_POINTER)); - addClassVariable(name_type, "COMPRESS_POINTER_MARK8", Py_BuildValue("I", Name::COMPRESS_POINTER_MARK8)); - addClassVariable(name_type, "COMPRESS_POINTER_MARK16", Py_BuildValue("I", Name::COMPRESS_POINTER_MARK16)); - - s_Name* root_name = PyObject_New(s_Name, &name_type); - root_name->cppobj = new Name(Name::ROOT_NAME()); - PyObject* po_ROOT_NAME = root_name; - addClassVariable(name_type, "ROOT_NAME", po_ROOT_NAME); - - PyModule_AddObject(mod, "Name", - reinterpret_cast(&name_type)); - - - // Add the exceptions to the module - po_EmptyLabel = PyErr_NewException("pydnspp.EmptyLabel", NULL, NULL); - PyModule_AddObject(mod, "EmptyLabel", po_EmptyLabel); - - po_TooLongName = PyErr_NewException("pydnspp.TooLongName", NULL, NULL); - PyModule_AddObject(mod, "TooLongName", po_TooLongName); - - po_TooLongLabel = PyErr_NewException("pydnspp.TooLongLabel", NULL, NULL); - PyModule_AddObject(mod, "TooLongLabel", po_TooLongLabel); - - po_BadLabelType = PyErr_NewException("pydnspp.BadLabelType", NULL, NULL); - PyModule_AddObject(mod, "BadLabelType", po_BadLabelType); - - po_BadEscape = PyErr_NewException("pydnspp.BadEscape", NULL, NULL); - PyModule_AddObject(mod, "BadEscape", po_BadEscape); - - po_IncompleteName = PyErr_NewException("pydnspp.IncompleteName", NULL, NULL); - 
PyModule_AddObject(mod, "IncompleteName", po_IncompleteName); - - po_InvalidBufferPosition = PyErr_NewException("pydnspp.InvalidBufferPosition", NULL, NULL); - PyModule_AddObject(mod, "InvalidBufferPosition", po_InvalidBufferPosition); - - // This one could have gone into the message_python.cc file, but is - // already needed here. - po_DNSMessageFORMERR = PyErr_NewException("pydnspp.DNSMessageFORMERR", NULL, NULL); - PyModule_AddObject(mod, "DNSMessageFORMERR", po_DNSMessageFORMERR); - - return (true); -} - PyObject* createNameObject(const Name& source) { - NameContainer container = PyObject_New(s_Name, &name_type); + NameContainer container(PyObject_New(s_Name, &name_type)); container.set(new Name(source)); return (container.release()); } + +bool +PyName_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); + } + return (PyObject_TypeCheck(obj, &name_type)); +} + +const Name& +PyName_ToName(const PyObject* name_obj) { + if (name_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in Name PyObject conversion"); + } + const s_Name* name = static_cast(name_obj); + return (*name->cppobj); +} + + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/python/name_python.h b/src/lib/dns/python/name_python.h index f8e793d7c5..86d7fd08a0 100644 --- a/src/lib/dns/python/name_python.h +++ b/src/lib/dns/python/name_python.h @@ -17,20 +17,12 @@ #include -#include - namespace isc { namespace dns { -class NameComparisonResult; class Name; namespace python { -// -// Declaration of the custom exceptions -// Initialization and addition of these go in the module init at the -// end -// extern PyObject* po_EmptyLabel; extern PyObject* po_TooLongName; extern PyObject* po_TooLongLabel; @@ -47,25 +39,9 @@ extern PyObject* po_DNSMessageFORMERR; // extern PyObject* po_NameRelation; -// The s_* Class simply covers one instantiation of the object. -class s_NameComparisonResult : public PyObject { -public: - s_NameComparisonResult() : cppobj(NULL) {} - NameComparisonResult* cppobj; -}; - -class s_Name : public PyObject { -public: - s_Name() : cppobj(NULL), position(0) {} - Name* cppobj; - size_t position; -}; - extern PyTypeObject name_comparison_result_type; extern PyTypeObject name_type; -bool initModulePart_Name(PyObject* mod); - /// This is A simple shortcut to create a python Name object (in the /// form of a pointer to PyObject) with minimal exception safety. /// On success, it returns a valid pointer to PyObject with a reference @@ -74,6 +50,27 @@ bool initModulePart_Name(PyObject* mod); /// This function is expected to be called with in a try block /// followed by necessary setup for python exception. PyObject* createNameObject(const Name& source); + +/// \brief Checks if the given python object is a Name object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type Name, false otherwise +bool PyName_Check(PyObject* obj); + +/// \brief Returns a reference to the Name object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type Name; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyName_Check() +/// +/// \note This is not a copy; if the Name is needed when the PyObject +/// may be destroyed, the caller must copy it itself. 
+/// +/// \param name_obj The name object to convert +const Name& PyName_ToName(const PyObject* name_obj); + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/python/opcode_python.cc b/src/lib/dns/python/opcode_python.cc index 0e2a30b8a0..50436a9f70 100644 --- a/src/lib/dns/python/opcode_python.cc +++ b/src/lib/dns/python/opcode_python.cc @@ -12,32 +12,31 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. +#include + #include +#include + +#include "pydnspp_common.h" +#include "opcode_python.h" +#include "edns_python.h" using namespace isc::dns; - -// -// Declaration of the custom exceptions (None for this class) - -// -// Definition of the classes -// - -// For each class, we need a struct, a helper functions (init, destroy, -// and static wrappers around the methods we export), a list of methods, -// and a type description +using namespace isc::dns::python; +using namespace isc::util; +using namespace isc::util::python; namespace { -// -// Opcode -// + class s_Opcode : public PyObject { public: - s_Opcode() : opcode(NULL), static_code(false) {} - const Opcode* opcode; + s_Opcode() : cppobj(NULL), static_code(false) {} + const isc::dns::Opcode* cppobj; bool static_code; }; +typedef CPPPyObjectContainer OpcodeContainer; + int Opcode_init(s_Opcode* const self, PyObject* args); void Opcode_destroy(s_Opcode* const self); @@ -103,64 +102,13 @@ PyMethodDef Opcode_methods[] = { { NULL, NULL, 0, NULL } }; -PyTypeObject opcode_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "pydnspp.Opcode", - sizeof(s_Opcode), // tp_basicsize - 0, // tp_itemsize - (destructor)Opcode_destroy, // tp_dealloc - NULL, // tp_print - NULL, // tp_getattr - NULL, // tp_setattr - NULL, // tp_reserved - NULL, // tp_repr - NULL, // tp_as_number - NULL, // tp_as_sequence - NULL, // tp_as_mapping - NULL, // tp_hash - NULL, // tp_call - Opcode_str, // tp_str - NULL, // tp_getattro - NULL, // tp_setattro - NULL, // tp_as_buffer - Py_TPFLAGS_DEFAULT, // tp_flags - "The Opcode class objects represent standard OPCODEs " - "of the header section of DNS messages.", - NULL, // tp_traverse - NULL, // tp_clear - (richcmpfunc)Opcode_richcmp, // tp_richcompare - 0, // tp_weaklistoffset - NULL, // tp_iter - NULL, // tp_iternext - Opcode_methods, // tp_methods - NULL, // tp_members - NULL, // tp_getset - NULL, // tp_base - NULL, // tp_dict - NULL, // tp_descr_get - NULL, // tp_descr_set - 0, // tp_dictoffset - (initproc)Opcode_init, // tp_init - NULL, // tp_alloc - PyType_GenericNew, // tp_new - NULL, // tp_free - NULL, // tp_is_gc - NULL, // tp_bases - NULL, // tp_mro - NULL, // tp_cache - NULL, // tp_subclasses - NULL, // tp_weaklist - NULL, // tp_del - 0 // tp_version_tag -}; - int Opcode_init(s_Opcode* const self, PyObject* args) { uint8_t code = 0; if (PyArg_ParseTuple(args, "b", &code)) { try { - self->opcode = new Opcode(code); + self->cppobj = new Opcode(code); self->static_code = false; } catch (const isc::OutOfRange& ex) { PyErr_SetString(PyExc_OverflowError, ex.what()); @@ -181,22 +129,22 @@ Opcode_init(s_Opcode* const self, PyObject* args) { void Opcode_destroy(s_Opcode* const self) { // Depending on whether we created the rcode or are referring - // to a global static one, we do or do not delete self->opcode here + // to a global static one, we do or do not delete self->cppobj here if (!self->static_code) { - delete self->opcode; + delete self->cppobj; } - self->opcode = NULL; + self->cppobj = NULL; Py_TYPE(self)->tp_free(self); } 
PyObject* Opcode_getCode(const s_Opcode* const self) { - return (Py_BuildValue("I", self->opcode->getCode())); + return (Py_BuildValue("I", self->cppobj->getCode())); } PyObject* Opcode_toText(const s_Opcode* const self) { - return (Py_BuildValue("s", self->opcode->toText().c_str())); + return (Py_BuildValue("s", self->cppobj->toText().c_str())); } PyObject* @@ -211,7 +159,7 @@ PyObject* Opcode_createStatic(const Opcode& opcode) { s_Opcode* ret = PyObject_New(s_Opcode, &opcode_type); if (ret != NULL) { - ret->opcode = &opcode; + ret->cppobj = &opcode; ret->static_code = true; } return (ret); @@ -297,7 +245,7 @@ Opcode_RESERVED15(const s_Opcode*) { return (Opcode_createStatic(Opcode::RESERVED15())); } -PyObject* +PyObject* Opcode_richcmp(const s_Opcode* const self, const s_Opcode* const other, const int op) { @@ -318,10 +266,10 @@ Opcode_richcmp(const s_Opcode* const self, const s_Opcode* const other, PyErr_SetString(PyExc_TypeError, "Unorderable type; Opcode"); return (NULL); case Py_EQ: - c = (*self->opcode == *other->opcode); + c = (*self->cppobj == *other->cppobj); break; case Py_NE: - c = (*self->opcode != *other->opcode); + c = (*self->cppobj != *other->cppobj); break; case Py_GT: PyErr_SetString(PyExc_TypeError, "Unorderable type; Opcode"); @@ -336,55 +284,88 @@ Opcode_richcmp(const s_Opcode* const self, const s_Opcode* const other, Py_RETURN_FALSE; } -// Module Initialization, all statics are initialized here -bool -initModulePart_Opcode(PyObject* mod) { - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! (leaving - // this out results in segmentation faults) - if (PyType_Ready(&opcode_type) < 0) { - return (false); - } - Py_INCREF(&opcode_type); - void* p = &opcode_type; - if (PyModule_AddObject(mod, "Opcode", static_cast(p)) != 0) { - Py_DECREF(&opcode_type); - return (false); - } - - addClassVariable(opcode_type, "QUERY_CODE", - Py_BuildValue("h", Opcode::QUERY_CODE)); - addClassVariable(opcode_type, "IQUERY_CODE", - Py_BuildValue("h", Opcode::IQUERY_CODE)); - addClassVariable(opcode_type, "STATUS_CODE", - Py_BuildValue("h", Opcode::STATUS_CODE)); - addClassVariable(opcode_type, "RESERVED3_CODE", - Py_BuildValue("h", Opcode::RESERVED3_CODE)); - addClassVariable(opcode_type, "NOTIFY_CODE", - Py_BuildValue("h", Opcode::NOTIFY_CODE)); - addClassVariable(opcode_type, "UPDATE_CODE", - Py_BuildValue("h", Opcode::UPDATE_CODE)); - addClassVariable(opcode_type, "RESERVED6_CODE", - Py_BuildValue("h", Opcode::RESERVED6_CODE)); - addClassVariable(opcode_type, "RESERVED7_CODE", - Py_BuildValue("h", Opcode::RESERVED7_CODE)); - addClassVariable(opcode_type, "RESERVED8_CODE", - Py_BuildValue("h", Opcode::RESERVED8_CODE)); - addClassVariable(opcode_type, "RESERVED9_CODE", - Py_BuildValue("h", Opcode::RESERVED9_CODE)); - addClassVariable(opcode_type, "RESERVED10_CODE", - Py_BuildValue("h", Opcode::RESERVED10_CODE)); - addClassVariable(opcode_type, "RESERVED11_CODE", - Py_BuildValue("h", Opcode::RESERVED11_CODE)); - addClassVariable(opcode_type, "RESERVED12_CODE", - Py_BuildValue("h", Opcode::RESERVED12_CODE)); - addClassVariable(opcode_type, "RESERVED13_CODE", - Py_BuildValue("h", Opcode::RESERVED13_CODE)); - addClassVariable(opcode_type, "RESERVED14_CODE", - Py_BuildValue("h", Opcode::RESERVED14_CODE)); - addClassVariable(opcode_type, "RESERVED15_CODE", - Py_BuildValue("h", Opcode::RESERVED15_CODE)); - - return (true); -} } // end of unnamed namespace + +namespace isc { +namespace dns { +namespace python { + +PyTypeObject 
opcode_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "pydnspp.Opcode", + sizeof(s_Opcode), // tp_basicsize + 0, // tp_itemsize + (destructor)Opcode_destroy, // tp_dealloc + NULL, // tp_print + NULL, // tp_getattr + NULL, // tp_setattr + NULL, // tp_reserved + NULL, // tp_repr + NULL, // tp_as_number + NULL, // tp_as_sequence + NULL, // tp_as_mapping + NULL, // tp_hash + NULL, // tp_call + Opcode_str, // tp_str + NULL, // tp_getattro + NULL, // tp_setattro + NULL, // tp_as_buffer + Py_TPFLAGS_DEFAULT, // tp_flags + "The Opcode class objects represent standard OPCODEs " + "of the header section of DNS messages.", + NULL, // tp_traverse + NULL, // tp_clear + (richcmpfunc)Opcode_richcmp, // tp_richcompare + 0, // tp_weaklistoffset + NULL, // tp_iter + NULL, // tp_iternext + Opcode_methods, // tp_methods + NULL, // tp_members + NULL, // tp_getset + NULL, // tp_base + NULL, // tp_dict + NULL, // tp_descr_get + NULL, // tp_descr_set + 0, // tp_dictoffset + (initproc)Opcode_init, // tp_init + NULL, // tp_alloc + PyType_GenericNew, // tp_new + NULL, // tp_free + NULL, // tp_is_gc + NULL, // tp_bases + NULL, // tp_mro + NULL, // tp_cache + NULL, // tp_subclasses + NULL, // tp_weaklist + NULL, // tp_del + 0 // tp_version_tag +}; + +PyObject* +createOpcodeObject(const Opcode& source) { + OpcodeContainer container(PyObject_New(s_Opcode, &opcode_type)); + container.set(new Opcode(source)); + return (container.release()); +} + +bool +PyOpcode_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); + } + return (PyObject_TypeCheck(obj, &opcode_type)); +} + +const Opcode& +PyOpcode_ToOpcode(const PyObject* opcode_obj) { + if (opcode_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in Opcode PyObject conversion"); + } + const s_Opcode* opcode = static_cast(opcode_obj); + return (*opcode->cppobj); +} + +} // end python namespace +} // end dns namespace +} // end isc namespace diff --git a/src/lib/dns/python/opcode_python.h b/src/lib/dns/python/opcode_python.h new file mode 100644 index 0000000000..d0aec15e8b --- /dev/null +++ b/src/lib/dns/python/opcode_python.h @@ -0,0 +1,64 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_OPCODE_H +#define __PYTHON_OPCODE_H 1 + +#include + +namespace isc { +namespace dns { +class Opcode; + +namespace python { + +extern PyTypeObject opcode_type; + +/// This is a simple shortcut to create a python Opcode object (in the +/// form of a pointer to PyObject) with minimal exception safety. +/// On success, it returns a valid pointer to PyObject with a reference +/// counter of 1; if something goes wrong it throws an exception (it never +/// returns a NULL pointer). 
+/// This function is expected to be called within a try block +/// followed by necessary setup for python exception. +PyObject* createOpcodeObject(const Opcode& source); + +/// \brief Checks if the given python object is a Opcode object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type Opcode, false otherwise +bool PyOpcode_Check(PyObject* obj); + +/// \brief Returns a reference to the Opcode object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type Opcode; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyOpcode_Check() +/// +/// \note This is not a copy; if the Opcode is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param opcode_obj The opcode object to convert +const Opcode& PyOpcode_ToOpcode(const PyObject* opcode_obj); + +} // namespace python +} // namespace dns +} // namespace isc +#endif // __PYTHON_OPCODE_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/python/pydnspp.cc b/src/lib/dns/python/pydnspp.cc index 07abf7112e..0a7d8e5324 100644 --- a/src/lib/dns/python/pydnspp.cc +++ b/src/lib/dns/python/pydnspp.cc @@ -21,63 +21,707 @@ // name initModulePart_, and return true/false instead of // NULL/*mod // -// And of course care has to be taken that all identifiers be unique +// The big init function is split up into a separate initModulePart function +// for each class we add. #define PY_SSIZE_T_CLEAN #include #include -#include - -#include - -#include - -#include -#include -#include +#include +#include +#include +#include #include "pydnspp_common.h" + +#include "edns_python.h" +#include "message_python.h" #include "messagerenderer_python.h" #include "name_python.h" +#include "opcode_python.h" +#include "pydnspp_common.h" +#include "pydnspp_towire.h" +#include "question_python.h" #include "rcode_python.h" -#include "tsigkey_python.h" -#include "tsig_rdata_python.h" +#include "rdata_python.h" +#include "rrclass_python.h" +#include "rrset_python.h" +#include "rrttl_python.h" +#include "rrtype_python.h" #include "tsigerror_python.h" -#include "tsigrecord_python.h" +#include "tsigkey_python.h" #include "tsig_python.h" +#include "tsig_rdata_python.h" +#include "tsigrecord_python.h" -namespace isc { -namespace dns { -namespace python { -// For our 'general' isc::Exceptions -PyObject* po_IscException; -PyObject* po_InvalidParameter; - -// For our own isc::dns::Exception -PyObject* po_DNSMessageBADVERS; -} -} -} - -// order is important here! +using namespace isc::dns; using namespace isc::dns::python; +using namespace isc::util::python; -#include // needs Messagerenderer -#include // needs Messagerenderer -#include // needs Messagerenderer -#include // needs Type, Class -#include // needs Rdata, RRTTL -#include // needs RRClass, RRType, RRTTL, - // Name -#include -#include // needs Messagerenderer, Rcode -#include // needs RRset, Question - -// -// Definition of the module -// namespace { + +bool +initModulePart_EDNS(PyObject* mod) { + // We initialize the static description object with PyType_Ready(), + // then add it to the module. This is not just a check! 
(leaving + // this out results in segmentation faults) + // + // After the type has been initialized, we initialize any exceptions + // that are defined in the wrapper for this class, and add constants + // to the type, if any + + if (PyType_Ready(&edns_type) < 0) { + return (false); + } + Py_INCREF(&edns_type); + void* p = &edns_type; + PyModule_AddObject(mod, "EDNS", static_cast(p)); + + addClassVariable(edns_type, "SUPPORTED_VERSION", + Py_BuildValue("B", EDNS::SUPPORTED_VERSION)); + + return (true); +} + +bool +initModulePart_Message(PyObject* mod) { + if (PyType_Ready(&message_type) < 0) { + return (false); + } + void* p = &message_type; + if (PyModule_AddObject(mod, "Message", static_cast(p)) < 0) { + return (false); + } + Py_INCREF(&message_type); + + try { + // + // Constant class variables + // + + // Parse mode + installClassVariable(message_type, "PARSE", + Py_BuildValue("I", Message::PARSE)); + installClassVariable(message_type, "RENDER", + Py_BuildValue("I", Message::RENDER)); + + // Parse options + installClassVariable(message_type, "PARSE_DEFAULT", + Py_BuildValue("I", Message::PARSE_DEFAULT)); + installClassVariable(message_type, "PRESERVE_ORDER", + Py_BuildValue("I", Message::PRESERVE_ORDER)); + + // Header flags + installClassVariable(message_type, "HEADERFLAG_QR", + Py_BuildValue("I", Message::HEADERFLAG_QR)); + installClassVariable(message_type, "HEADERFLAG_AA", + Py_BuildValue("I", Message::HEADERFLAG_AA)); + installClassVariable(message_type, "HEADERFLAG_TC", + Py_BuildValue("I", Message::HEADERFLAG_TC)); + installClassVariable(message_type, "HEADERFLAG_RD", + Py_BuildValue("I", Message::HEADERFLAG_RD)); + installClassVariable(message_type, "HEADERFLAG_RA", + Py_BuildValue("I", Message::HEADERFLAG_RA)); + installClassVariable(message_type, "HEADERFLAG_AD", + Py_BuildValue("I", Message::HEADERFLAG_AD)); + installClassVariable(message_type, "HEADERFLAG_CD", + Py_BuildValue("I", Message::HEADERFLAG_CD)); + + // Sections + installClassVariable(message_type, "SECTION_QUESTION", + Py_BuildValue("I", Message::SECTION_QUESTION)); + installClassVariable(message_type, "SECTION_ANSWER", + Py_BuildValue("I", Message::SECTION_ANSWER)); + installClassVariable(message_type, "SECTION_AUTHORITY", + Py_BuildValue("I", Message::SECTION_AUTHORITY)); + installClassVariable(message_type, "SECTION_ADDITIONAL", + Py_BuildValue("I", Message::SECTION_ADDITIONAL)); + + // Protocol constant + installClassVariable(message_type, "DEFAULT_MAX_UDPSIZE", + Py_BuildValue("I", Message::DEFAULT_MAX_UDPSIZE)); + + /* Class-specific exceptions */ + po_MessageTooShort = + PyErr_NewException("pydnspp.MessageTooShort", NULL, NULL); + PyObjectContainer(po_MessageTooShort).installToModule( + mod, "MessageTooShort"); + po_InvalidMessageSection = + PyErr_NewException("pydnspp.InvalidMessageSection", NULL, NULL); + PyObjectContainer(po_InvalidMessageSection).installToModule( + mod, "InvalidMessageSection"); + po_InvalidMessageOperation = + PyErr_NewException("pydnspp.InvalidMessageOperation", NULL, NULL); + PyObjectContainer(po_InvalidMessageOperation).installToModule( + mod, "InvalidMessageOperation"); + po_InvalidMessageUDPSize = + PyErr_NewException("pydnspp.InvalidMessageUDPSize", NULL, NULL); + PyObjectContainer(po_InvalidMessageUDPSize).installToModule( + mod, "InvalidMessageUDPSize"); + po_DNSMessageBADVERS = + PyErr_NewException("pydnspp.DNSMessageBADVERS", NULL, NULL); + PyObjectContainer(po_DNSMessageBADVERS).installToModule( + mod, "DNSMessageBADVERS"); + } catch (const std::exception& ex) { + const 
std::string ex_what = + "Unexpected failure in Message initialization: " + + std::string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + return (false); + } catch (...) { + PyErr_SetString(PyExc_SystemError, + "Unexpected failure in Message initialization"); + return (false); + } + + return (true); +} + +bool +initModulePart_MessageRenderer(PyObject* mod) { + if (PyType_Ready(&messagerenderer_type) < 0) { + return (false); + } + Py_INCREF(&messagerenderer_type); + + addClassVariable(messagerenderer_type, "CASE_INSENSITIVE", + Py_BuildValue("I", MessageRenderer::CASE_INSENSITIVE)); + addClassVariable(messagerenderer_type, "CASE_SENSITIVE", + Py_BuildValue("I", MessageRenderer::CASE_SENSITIVE)); + + PyModule_AddObject(mod, "MessageRenderer", + reinterpret_cast(&messagerenderer_type)); + + return (true); +} + +bool +initModulePart_Name(PyObject* mod) { + + // + // NameComparisonResult + // + if (PyType_Ready(&name_comparison_result_type) < 0) { + return (false); + } + Py_INCREF(&name_comparison_result_type); + + // Add the enums to the module + po_NameRelation = Py_BuildValue("{i:s,i:s,i:s,i:s}", + NameComparisonResult::SUPERDOMAIN, "SUPERDOMAIN", + NameComparisonResult::SUBDOMAIN, "SUBDOMAIN", + NameComparisonResult::EQUAL, "EQUAL", + NameComparisonResult::COMMONANCESTOR, "COMMONANCESTOR"); + addClassVariable(name_comparison_result_type, "NameRelation", + po_NameRelation); + + PyModule_AddObject(mod, "NameComparisonResult", + reinterpret_cast(&name_comparison_result_type)); + + // + // Name + // + + if (PyType_Ready(&name_type) < 0) { + return (false); + } + Py_INCREF(&name_type); + + // Add the constants to the module + addClassVariable(name_type, "MAX_WIRE", + Py_BuildValue("I", Name::MAX_WIRE)); + addClassVariable(name_type, "MAX_LABELS", + Py_BuildValue("I", Name::MAX_LABELS)); + addClassVariable(name_type, "MAX_LABELLEN", + Py_BuildValue("I", Name::MAX_LABELLEN)); + addClassVariable(name_type, "MAX_COMPRESS_POINTER", + Py_BuildValue("I", Name::MAX_COMPRESS_POINTER)); + addClassVariable(name_type, "COMPRESS_POINTER_MARK8", + Py_BuildValue("I", Name::COMPRESS_POINTER_MARK8)); + addClassVariable(name_type, "COMPRESS_POINTER_MARK16", + Py_BuildValue("I", Name::COMPRESS_POINTER_MARK16)); + + addClassVariable(name_type, "ROOT_NAME", + createNameObject(Name::ROOT_NAME())); + + PyModule_AddObject(mod, "Name", + reinterpret_cast(&name_type)); + + + // Add the exceptions to the module + po_EmptyLabel = PyErr_NewException("pydnspp.EmptyLabel", NULL, NULL); + PyModule_AddObject(mod, "EmptyLabel", po_EmptyLabel); + + po_TooLongName = PyErr_NewException("pydnspp.TooLongName", NULL, NULL); + PyModule_AddObject(mod, "TooLongName", po_TooLongName); + + po_TooLongLabel = PyErr_NewException("pydnspp.TooLongLabel", NULL, NULL); + PyModule_AddObject(mod, "TooLongLabel", po_TooLongLabel); + + po_BadLabelType = PyErr_NewException("pydnspp.BadLabelType", NULL, NULL); + PyModule_AddObject(mod, "BadLabelType", po_BadLabelType); + + po_BadEscape = PyErr_NewException("pydnspp.BadEscape", NULL, NULL); + PyModule_AddObject(mod, "BadEscape", po_BadEscape); + + po_IncompleteName = PyErr_NewException("pydnspp.IncompleteName", NULL, NULL); + PyModule_AddObject(mod, "IncompleteName", po_IncompleteName); + + po_InvalidBufferPosition = + PyErr_NewException("pydnspp.InvalidBufferPosition", NULL, NULL); + PyModule_AddObject(mod, "InvalidBufferPosition", po_InvalidBufferPosition); + + // This one could have gone into the message_python.cc file, but is + // already needed here. 
+ po_DNSMessageFORMERR = PyErr_NewException("pydnspp.DNSMessageFORMERR", + NULL, NULL); + PyModule_AddObject(mod, "DNSMessageFORMERR", po_DNSMessageFORMERR); + + return (true); +} + +bool +initModulePart_Opcode(PyObject* mod) { + if (PyType_Ready(&opcode_type) < 0) { + return (false); + } + Py_INCREF(&opcode_type); + void* p = &opcode_type; + if (PyModule_AddObject(mod, "Opcode", static_cast(p)) != 0) { + Py_DECREF(&opcode_type); + return (false); + } + + addClassVariable(opcode_type, "QUERY_CODE", + Py_BuildValue("h", Opcode::QUERY_CODE)); + addClassVariable(opcode_type, "IQUERY_CODE", + Py_BuildValue("h", Opcode::IQUERY_CODE)); + addClassVariable(opcode_type, "STATUS_CODE", + Py_BuildValue("h", Opcode::STATUS_CODE)); + addClassVariable(opcode_type, "RESERVED3_CODE", + Py_BuildValue("h", Opcode::RESERVED3_CODE)); + addClassVariable(opcode_type, "NOTIFY_CODE", + Py_BuildValue("h", Opcode::NOTIFY_CODE)); + addClassVariable(opcode_type, "UPDATE_CODE", + Py_BuildValue("h", Opcode::UPDATE_CODE)); + addClassVariable(opcode_type, "RESERVED6_CODE", + Py_BuildValue("h", Opcode::RESERVED6_CODE)); + addClassVariable(opcode_type, "RESERVED7_CODE", + Py_BuildValue("h", Opcode::RESERVED7_CODE)); + addClassVariable(opcode_type, "RESERVED8_CODE", + Py_BuildValue("h", Opcode::RESERVED8_CODE)); + addClassVariable(opcode_type, "RESERVED9_CODE", + Py_BuildValue("h", Opcode::RESERVED9_CODE)); + addClassVariable(opcode_type, "RESERVED10_CODE", + Py_BuildValue("h", Opcode::RESERVED10_CODE)); + addClassVariable(opcode_type, "RESERVED11_CODE", + Py_BuildValue("h", Opcode::RESERVED11_CODE)); + addClassVariable(opcode_type, "RESERVED12_CODE", + Py_BuildValue("h", Opcode::RESERVED12_CODE)); + addClassVariable(opcode_type, "RESERVED13_CODE", + Py_BuildValue("h", Opcode::RESERVED13_CODE)); + addClassVariable(opcode_type, "RESERVED14_CODE", + Py_BuildValue("h", Opcode::RESERVED14_CODE)); + addClassVariable(opcode_type, "RESERVED15_CODE", + Py_BuildValue("h", Opcode::RESERVED15_CODE)); + + return (true); +} + +bool +initModulePart_Question(PyObject* mod) { + if (PyType_Ready(&question_type) < 0) { + return (false); + } + Py_INCREF(&question_type); + PyModule_AddObject(mod, "Question", + reinterpret_cast(&question_type)); + + return (true); +} + +bool +initModulePart_Rcode(PyObject* mod) { + if (PyType_Ready(&rcode_type) < 0) { + return (false); + } + Py_INCREF(&rcode_type); + void* p = &rcode_type; + if (PyModule_AddObject(mod, "Rcode", static_cast(p)) != 0) { + Py_DECREF(&rcode_type); + return (false); + } + + addClassVariable(rcode_type, "NOERROR_CODE", + Py_BuildValue("h", Rcode::NOERROR_CODE)); + addClassVariable(rcode_type, "FORMERR_CODE", + Py_BuildValue("h", Rcode::FORMERR_CODE)); + addClassVariable(rcode_type, "SERVFAIL_CODE", + Py_BuildValue("h", Rcode::SERVFAIL_CODE)); + addClassVariable(rcode_type, "NXDOMAIN_CODE", + Py_BuildValue("h", Rcode::NXDOMAIN_CODE)); + addClassVariable(rcode_type, "NOTIMP_CODE", + Py_BuildValue("h", Rcode::NOTIMP_CODE)); + addClassVariable(rcode_type, "REFUSED_CODE", + Py_BuildValue("h", Rcode::REFUSED_CODE)); + addClassVariable(rcode_type, "YXDOMAIN_CODE", + Py_BuildValue("h", Rcode::YXDOMAIN_CODE)); + addClassVariable(rcode_type, "YXRRSET_CODE", + Py_BuildValue("h", Rcode::YXRRSET_CODE)); + addClassVariable(rcode_type, "NXRRSET_CODE", + Py_BuildValue("h", Rcode::NXRRSET_CODE)); + addClassVariable(rcode_type, "NOTAUTH_CODE", + Py_BuildValue("h", Rcode::NOTAUTH_CODE)); + addClassVariable(rcode_type, "NOTZONE_CODE", + Py_BuildValue("h", Rcode::NOTZONE_CODE)); + 
addClassVariable(rcode_type, "RESERVED11_CODE", + Py_BuildValue("h", Rcode::RESERVED11_CODE)); + addClassVariable(rcode_type, "RESERVED12_CODE", + Py_BuildValue("h", Rcode::RESERVED12_CODE)); + addClassVariable(rcode_type, "RESERVED13_CODE", + Py_BuildValue("h", Rcode::RESERVED13_CODE)); + addClassVariable(rcode_type, "RESERVED14_CODE", + Py_BuildValue("h", Rcode::RESERVED14_CODE)); + addClassVariable(rcode_type, "RESERVED15_CODE", + Py_BuildValue("h", Rcode::RESERVED15_CODE)); + addClassVariable(rcode_type, "BADVERS_CODE", + Py_BuildValue("h", Rcode::BADVERS_CODE)); + + return (true); +} + +bool +initModulePart_Rdata(PyObject* mod) { + if (PyType_Ready(&rdata_type) < 0) { + return (false); + } + Py_INCREF(&rdata_type); + PyModule_AddObject(mod, "Rdata", + reinterpret_cast(&rdata_type)); + + // Add the exceptions to the class + po_InvalidRdataLength = PyErr_NewException("pydnspp.InvalidRdataLength", + NULL, NULL); + PyModule_AddObject(mod, "InvalidRdataLength", po_InvalidRdataLength); + + po_InvalidRdataText = PyErr_NewException("pydnspp.InvalidRdataText", + NULL, NULL); + PyModule_AddObject(mod, "InvalidRdataText", po_InvalidRdataText); + + po_CharStringTooLong = PyErr_NewException("pydnspp.CharStringTooLong", + NULL, NULL); + PyModule_AddObject(mod, "CharStringTooLong", po_CharStringTooLong); + + + return (true); +} + +bool +initModulePart_RRClass(PyObject* mod) { + po_InvalidRRClass = PyErr_NewException("pydnspp.InvalidRRClass", + NULL, NULL); + Py_INCREF(po_InvalidRRClass); + PyModule_AddObject(mod, "InvalidRRClass", po_InvalidRRClass); + po_IncompleteRRClass = PyErr_NewException("pydnspp.IncompleteRRClass", + NULL, NULL); + Py_INCREF(po_IncompleteRRClass); + PyModule_AddObject(mod, "IncompleteRRClass", po_IncompleteRRClass); + + if (PyType_Ready(&rrclass_type) < 0) { + return (false); + } + Py_INCREF(&rrclass_type); + PyModule_AddObject(mod, "RRClass", + reinterpret_cast(&rrclass_type)); + + return (true); +} + +bool +initModulePart_RRset(PyObject* mod) { + po_EmptyRRset = PyErr_NewException("pydnspp.EmptyRRset", NULL, NULL); + PyModule_AddObject(mod, "EmptyRRset", po_EmptyRRset); + + // NameComparisonResult + if (PyType_Ready(&rrset_type) < 0) { + return (false); + } + Py_INCREF(&rrset_type); + PyModule_AddObject(mod, "RRset", + reinterpret_cast(&rrset_type)); + + return (true); +} + +bool +initModulePart_RRTTL(PyObject* mod) { + po_InvalidRRTTL = PyErr_NewException("pydnspp.InvalidRRTTL", NULL, NULL); + PyModule_AddObject(mod, "InvalidRRTTL", po_InvalidRRTTL); + po_IncompleteRRTTL = PyErr_NewException("pydnspp.IncompleteRRTTL", + NULL, NULL); + PyModule_AddObject(mod, "IncompleteRRTTL", po_IncompleteRRTTL); + + if (PyType_Ready(&rrttl_type) < 0) { + return (false); + } + Py_INCREF(&rrttl_type); + PyModule_AddObject(mod, "RRTTL", + reinterpret_cast(&rrttl_type)); + + return (true); +} + +bool +initModulePart_RRType(PyObject* mod) { + // Add the exceptions to the module + po_InvalidRRType = PyErr_NewException("pydnspp.InvalidRRType", NULL, NULL); + PyModule_AddObject(mod, "InvalidRRType", po_InvalidRRType); + po_IncompleteRRType = PyErr_NewException("pydnspp.IncompleteRRType", + NULL, NULL); + PyModule_AddObject(mod, "IncompleteRRType", po_IncompleteRRType); + + if (PyType_Ready(&rrtype_type) < 0) { + return (false); + } + Py_INCREF(&rrtype_type); + PyModule_AddObject(mod, "RRType", + reinterpret_cast(&rrtype_type)); + + return (true); +} + +bool +initModulePart_TSIGError(PyObject* mod) { + if (PyType_Ready(&tsigerror_type) < 0) { + return (false); + } + void* p = &tsigerror_type; + 
if (PyModule_AddObject(mod, "TSIGError", static_cast(p)) < 0) { + return (false); + } + Py_INCREF(&tsigerror_type); + + try { + // Constant class variables + // Error codes (bare values) + installClassVariable(tsigerror_type, "BAD_SIG_CODE", + Py_BuildValue("H", TSIGError::BAD_SIG_CODE)); + installClassVariable(tsigerror_type, "BAD_KEY_CODE", + Py_BuildValue("H", TSIGError::BAD_KEY_CODE)); + installClassVariable(tsigerror_type, "BAD_TIME_CODE", + Py_BuildValue("H", TSIGError::BAD_TIME_CODE)); + + // Error codes (constant objects) + installClassVariable(tsigerror_type, "NOERROR", + createTSIGErrorObject(TSIGError::NOERROR())); + installClassVariable(tsigerror_type, "FORMERR", + createTSIGErrorObject(TSIGError::FORMERR())); + installClassVariable(tsigerror_type, "SERVFAIL", + createTSIGErrorObject(TSIGError::SERVFAIL())); + installClassVariable(tsigerror_type, "NXDOMAIN", + createTSIGErrorObject(TSIGError::NXDOMAIN())); + installClassVariable(tsigerror_type, "NOTIMP", + createTSIGErrorObject(TSIGError::NOTIMP())); + installClassVariable(tsigerror_type, "REFUSED", + createTSIGErrorObject(TSIGError::REFUSED())); + installClassVariable(tsigerror_type, "YXDOMAIN", + createTSIGErrorObject(TSIGError::YXDOMAIN())); + installClassVariable(tsigerror_type, "YXRRSET", + createTSIGErrorObject(TSIGError::YXRRSET())); + installClassVariable(tsigerror_type, "NXRRSET", + createTSIGErrorObject(TSIGError::NXRRSET())); + installClassVariable(tsigerror_type, "NOTAUTH", + createTSIGErrorObject(TSIGError::NOTAUTH())); + installClassVariable(tsigerror_type, "NOTZONE", + createTSIGErrorObject(TSIGError::NOTZONE())); + installClassVariable(tsigerror_type, "RESERVED11", + createTSIGErrorObject(TSIGError::RESERVED11())); + installClassVariable(tsigerror_type, "RESERVED12", + createTSIGErrorObject(TSIGError::RESERVED12())); + installClassVariable(tsigerror_type, "RESERVED13", + createTSIGErrorObject(TSIGError::RESERVED13())); + installClassVariable(tsigerror_type, "RESERVED14", + createTSIGErrorObject(TSIGError::RESERVED14())); + installClassVariable(tsigerror_type, "RESERVED15", + createTSIGErrorObject(TSIGError::RESERVED15())); + installClassVariable(tsigerror_type, "BAD_SIG", + createTSIGErrorObject(TSIGError::BAD_SIG())); + installClassVariable(tsigerror_type, "BAD_KEY", + createTSIGErrorObject(TSIGError::BAD_KEY())); + installClassVariable(tsigerror_type, "BAD_TIME", + createTSIGErrorObject(TSIGError::BAD_TIME())); + } catch (const std::exception& ex) { + const std::string ex_what = + "Unexpected failure in TSIGError initialization: " + + std::string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + return (false); + } catch (...) 
{ + PyErr_SetString(PyExc_SystemError, + "Unexpected failure in TSIGError initialization"); + return (false); + } + + return (true); +} + +bool +initModulePart_TSIGKey(PyObject* mod) { + if (PyType_Ready(&tsigkey_type) < 0) { + return (false); + } + void* p = &tsigkey_type; + if (PyModule_AddObject(mod, "TSIGKey", static_cast(p)) != 0) { + return (false); + } + Py_INCREF(&tsigkey_type); + + try { + // Constant class variables + installClassVariable(tsigkey_type, "HMACMD5_NAME", + createNameObject(TSIGKey::HMACMD5_NAME())); + installClassVariable(tsigkey_type, "HMACSHA1_NAME", + createNameObject(TSIGKey::HMACSHA1_NAME())); + installClassVariable(tsigkey_type, "HMACSHA256_NAME", + createNameObject(TSIGKey::HMACSHA256_NAME())); + installClassVariable(tsigkey_type, "HMACSHA224_NAME", + createNameObject(TSIGKey::HMACSHA224_NAME())); + installClassVariable(tsigkey_type, "HMACSHA384_NAME", + createNameObject(TSIGKey::HMACSHA384_NAME())); + installClassVariable(tsigkey_type, "HMACSHA512_NAME", + createNameObject(TSIGKey::HMACSHA512_NAME())); + } catch (const std::exception& ex) { + const std::string ex_what = + "Unexpected failure in TSIGKey initialization: " + + std::string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + return (false); + } catch (...) { + PyErr_SetString(PyExc_SystemError, + "Unexpected failure in TSIGKey initialization"); + return (false); + } + + return (true); +} + +bool +initModulePart_TSIGKeyRing(PyObject* mod) { + if (PyType_Ready(&tsigkeyring_type) < 0) { + return (false); + } + Py_INCREF(&tsigkeyring_type); + void* p = &tsigkeyring_type; + if (PyModule_AddObject(mod, "TSIGKeyRing", + static_cast(p)) != 0) { + Py_DECREF(&tsigkeyring_type); + return (false); + } + + addClassVariable(tsigkeyring_type, "SUCCESS", + Py_BuildValue("I", TSIGKeyRing::SUCCESS)); + addClassVariable(tsigkeyring_type, "EXIST", + Py_BuildValue("I", TSIGKeyRing::EXIST)); + addClassVariable(tsigkeyring_type, "NOTFOUND", + Py_BuildValue("I", TSIGKeyRing::NOTFOUND)); + + return (true); +} + +bool +initModulePart_TSIGContext(PyObject* mod) { + if (PyType_Ready(&tsigcontext_type) < 0) { + return (false); + } + void* p = &tsigcontext_type; + if (PyModule_AddObject(mod, "TSIGContext", + static_cast(p)) < 0) { + return (false); + } + Py_INCREF(&tsigcontext_type); + + try { + // Class specific exceptions + po_TSIGContextError = PyErr_NewException("pydnspp.TSIGContextError", + po_IscException, NULL); + PyObjectContainer(po_TSIGContextError).installToModule( + mod, "TSIGContextError"); + + // Constant class variables + installClassVariable(tsigcontext_type, "STATE_INIT", + Py_BuildValue("I", TSIGContext::INIT)); + installClassVariable(tsigcontext_type, "STATE_SENT_REQUEST", + Py_BuildValue("I", TSIGContext::SENT_REQUEST)); + installClassVariable(tsigcontext_type, "STATE_RECEIVED_REQUEST", + Py_BuildValue("I", TSIGContext::RECEIVED_REQUEST)); + installClassVariable(tsigcontext_type, "STATE_SENT_RESPONSE", + Py_BuildValue("I", TSIGContext::SENT_RESPONSE)); + installClassVariable(tsigcontext_type, "STATE_VERIFIED_RESPONSE", + Py_BuildValue("I", + TSIGContext::VERIFIED_RESPONSE)); + + installClassVariable(tsigcontext_type, "DEFAULT_FUDGE", + Py_BuildValue("H", TSIGContext::DEFAULT_FUDGE)); + } catch (const std::exception& ex) { + const std::string ex_what = + "Unexpected failure in TSIGContext initialization: " + + std::string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + return (false); + } catch (...) 
{ + PyErr_SetString(PyExc_SystemError, + "Unexpected failure in TSIGContext initialization"); + return (false); + } + + return (true); +} + +bool +initModulePart_TSIG(PyObject* mod) { + if (PyType_Ready(&tsig_type) < 0) { + return (false); + } + void* p = &tsig_type; + if (PyModule_AddObject(mod, "TSIG", static_cast(p)) < 0) { + return (false); + } + Py_INCREF(&tsig_type); + + return (true); +} + +bool +initModulePart_TSIGRecord(PyObject* mod) { + if (PyType_Ready(&tsigrecord_type) < 0) { + return (false); + } + void* p = &tsigrecord_type; + if (PyModule_AddObject(mod, "TSIGRecord", static_cast(p)) < 0) { + return (false); + } + Py_INCREF(&tsigrecord_type); + + try { + // Constant class variables + installClassVariable(tsigrecord_type, "TSIG_TTL", + Py_BuildValue("I", 0)); + } catch (const std::exception& ex) { + const std::string ex_what = + "Unexpected failure in TSIGRecord initialization: " + + std::string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + return (false); + } catch (...) { + PyErr_SetString(PyExc_SystemError, + "Unexpected failure in TSIGRecord initialization"); + return (false); + } + + return (true); +} + PyModuleDef pydnspp = { { PyObject_HEAD_INIT(NULL) NULL, 0, NULL}, "pydnspp", diff --git a/src/lib/dns/python/pydnspp_common.cc b/src/lib/dns/python/pydnspp_common.cc index 8ca763a969..0f0f873867 100644 --- a/src/lib/dns/python/pydnspp_common.cc +++ b/src/lib/dns/python/pydnspp_common.cc @@ -15,9 +15,45 @@ #include #include +#include + +#include + +#include +#include +#include + +#include "pydnspp_common.h" +#include "messagerenderer_python.h" +#include "name_python.h" +#include "rdata_python.h" +#include "rrclass_python.h" +#include "rrtype_python.h" +#include "rrttl_python.h" +#include "rrset_python.h" +#include "rcode_python.h" +#include "opcode_python.h" +#include "tsigkey_python.h" +#include "tsig_rdata_python.h" +#include "tsigerror_python.h" +#include "tsigrecord_python.h" +#include "tsig_python.h" +#include "question_python.h" +#include "message_python.h" + +using namespace isc::dns::python; + namespace isc { namespace dns { namespace python { +// For our 'general' isc::Exceptions +PyObject* po_IscException; +PyObject* po_InvalidParameter; + +// For our own isc::dns::Exception +PyObject* po_DNSMessageBADVERS; + + int readDataFromSequence(uint8_t *data, size_t len, PyObject* sequence) { PyObject* el = NULL; diff --git a/src/lib/dns/python/pydnspp_common.h b/src/lib/dns/python/pydnspp_common.h index ed90998ccc..8092b086d4 100644 --- a/src/lib/dns/python/pydnspp_common.h +++ b/src/lib/dns/python/pydnspp_common.h @@ -20,8 +20,6 @@ #include #include -#include - namespace isc { namespace dns { namespace python { diff --git a/src/lib/dns/python/pydnspp_towire.h b/src/lib/dns/python/pydnspp_towire.h index 66362a0e1a..e987a29814 100644 --- a/src/lib/dns/python/pydnspp_towire.h +++ b/src/lib/dns/python/pydnspp_towire.h @@ -93,10 +93,10 @@ toWireWrapper(const PYSTRUCT* const self, PyObject* args) { } // To MessageRenderer version - s_MessageRenderer* renderer; + PyObject* renderer; if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &renderer)) { const unsigned int n = TOWIRECALLER(*self->cppobj)( - *renderer->messagerenderer); + PyMessageRenderer_ToMessageRenderer(renderer)); return (Py_BuildValue("I", n)); } diff --git a/src/lib/dns/python/question_python.cc b/src/lib/dns/python/question_python.cc index c702f85ec2..44d68a2047 100644 --- a/src/lib/dns/python/question_python.cc +++ b/src/lib/dns/python/question_python.cc @@ -12,25 +12,34 @@ // OR 
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. +#define PY_SSIZE_T_CLEAN +#include #include +#include +#include +#include +#include + +#include "pydnspp_common.h" +#include "question_python.h" +#include "name_python.h" +#include "rrclass_python.h" +#include "rrtype_python.h" +#include "messagerenderer_python.h" + +using namespace std; using namespace isc::dns; +using namespace isc::dns::python; +using namespace isc::util; +using namespace isc::util::python; +using namespace isc; -// -// Question -// - -// The s_* Class simply coverst one instantiation of the object +namespace { class s_Question : public PyObject { public: - QuestionPtr question; + isc::dns::QuestionPtr cppobj; }; -// -// We declare the functions here, the definitions are below -// the type definition of the object, since both can use the other -// - -// General creation and destruction static int Question_init(s_Question* self, PyObject* args); static void Question_destroy(s_Question* self); @@ -69,10 +78,168 @@ static PyMethodDef Question_methods[] = { { NULL, NULL, 0, NULL } }; +static int +Question_init(s_Question* self, PyObject* args) { + // Try out the various combinations of arguments to call the + // correct cpp constructor. + // Note that PyArg_ParseType can set PyError, and we need to clear + // that if we try several like here. Otherwise the *next* python + // call will suddenly appear to throw an exception. + // (the way to do exceptions is to set PyErr and return -1) + PyObject* name; + PyObject* rrclass; + PyObject* rrtype; + + const char* b; + Py_ssize_t len; + unsigned int position = 0; + + try { + if (PyArg_ParseTuple(args, "O!O!O!", &name_type, &name, + &rrclass_type, &rrclass, + &rrtype_type, &rrtype + )) { + self->cppobj = QuestionPtr(new Question(PyName_ToName(name), + PyRRClass_ToRRClass(rrclass), + PyRRType_ToRRType(rrtype))); + return (0); + } else if (PyArg_ParseTuple(args, "y#|I", &b, &len, &position)) { + PyErr_Clear(); + InputBuffer inbuf(b, len); + inbuf.setPosition(position); + self->cppobj = QuestionPtr(new Question(inbuf)); + return (0); + } + } catch (const DNSMessageFORMERR& dmfe) { + PyErr_Clear(); + PyErr_SetString(po_DNSMessageFORMERR, dmfe.what()); + return (-1); + } catch (const IncompleteRRClass& irc) { + PyErr_Clear(); + PyErr_SetString(po_IncompleteRRClass, irc.what()); + return (-1); + } catch (const IncompleteRRType& irt) { + PyErr_Clear(); + PyErr_SetString(po_IncompleteRRType, irt.what()); + return (-1); + } + + self->cppobj = QuestionPtr(); + + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "no valid type in constructor argument"); + return (-1); +} + +static void +Question_destroy(s_Question* self) { + self->cppobj.reset(); + Py_TYPE(self)->tp_free(self); +} + +static PyObject* +Question_getName(s_Question* self) { + try { + return (createNameObject(self->cppobj->getName())); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure getting question Name: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) { + PyErr_SetString(PyExc_SystemError, + "Unexpected failure getting question Name"); + } + return (NULL); +} + +static PyObject* +Question_getType(s_Question* self) { + try { + return (createRRTypeObject(self->cppobj->getType())); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure getting question RRType: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) 
{ + PyErr_SetString(PyExc_SystemError, + "Unexpected failure getting question RRType"); + } + return (NULL); +} + +static PyObject* +Question_getClass(s_Question* self) { + try { + return (createRRClassObject(self->cppobj->getClass())); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure getting question RRClass: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) { + PyErr_SetString(PyExc_SystemError, + "Unexpected failure getting question RRClass"); + } + return (NULL); +} + +static PyObject* +Question_toText(s_Question* self) { + // Py_BuildValue makes python objects from native data + return (Py_BuildValue("s", self->cppobj->toText().c_str())); +} + +static PyObject* +Question_str(PyObject* self) { + // Simply call the to_text method we already defined + return (PyObject_CallMethod(self, + const_cast("to_text"), + const_cast(""))); +} + +static PyObject* +Question_toWire(s_Question* self, PyObject* args) { + PyObject* bytes; + PyObject* mr; + + if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { + PyObject* bytes_o = bytes; + + // Max length is Name::MAX_WIRE + rrclass (2) + rrtype (2) + OutputBuffer buffer(Name::MAX_WIRE + 4); + self->cppobj->toWire(buffer); + PyObject* n = PyBytes_FromStringAndSize(static_cast(buffer.getData()), + buffer.getLength()); + PyObject* result = PySequence_InPlaceConcat(bytes_o, n); + // We need to release the object we temporarily created here + // to prevent memory leak + Py_DECREF(n); + return (result); + } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { + self->cppobj->toWire(PyMessageRenderer_ToMessageRenderer(mr)); + // If we return NULL it is seen as an error, so use this for + // None returns + Py_RETURN_NONE; + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "toWire argument must be a sequence object or a MessageRenderer"); + return (NULL); +} + +} // end of unnamed namespace + +namespace isc { +namespace dns { +namespace python { + // This defines the complete type for reflection in python and // parsing of PyObject* to s_Question // Most of the functions are not actually implemented and NULL here. -static PyTypeObject question_type = { +PyTypeObject question_type = { PyVarObject_HEAD_INIT(NULL, 0) "pydnspp.Question", sizeof(s_Question), // tp_basicsize @@ -86,7 +253,7 @@ static PyTypeObject question_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call Question_str, // tp_str NULL, // tp_getattro @@ -123,164 +290,32 @@ static PyTypeObject question_type = { 0 // tp_version_tag }; -static int -Question_init(s_Question* self, PyObject* args) { - // Try out the various combinations of arguments to call the - // correct cpp constructor. - // Note that PyArg_ParseType can set PyError, and we need to clear - // that if we try several like here. Otherwise the *next* python - // call will suddenly appear to throw an exception. 
- // (the way to do exceptions is to set PyErr and return -1) - s_Name* name; - s_RRClass* rrclass; - s_RRType* rrtype; - - const char* b; - Py_ssize_t len; - unsigned int position = 0; - - try { - if (PyArg_ParseTuple(args, "O!O!O!", &name_type, &name, - &rrclass_type, &rrclass, - &rrtype_type, &rrtype - )) { - self->question = QuestionPtr(new Question(*name->cppobj, *rrclass->rrclass, - *rrtype->rrtype)); - return (0); - } else if (PyArg_ParseTuple(args, "y#|I", &b, &len, &position)) { - PyErr_Clear(); - InputBuffer inbuf(b, len); - inbuf.setPosition(position); - self->question = QuestionPtr(new Question(inbuf)); - return (0); - } - } catch (const DNSMessageFORMERR& dmfe) { - PyErr_Clear(); - PyErr_SetString(po_DNSMessageFORMERR, dmfe.what()); - return (-1); - } catch (const IncompleteRRClass& irc) { - PyErr_Clear(); - PyErr_SetString(po_IncompleteRRClass, irc.what()); - return (-1); - } catch (const IncompleteRRType& irt) { - PyErr_Clear(); - PyErr_SetString(po_IncompleteRRType, irt.what()); - return (-1); - } - - self->question = QuestionPtr(); - - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "no valid type in constructor argument"); - return (-1); +PyObject* +createQuestionObject(const Question& source) { + s_Question* question = + static_cast(question_type.tp_alloc(&question_type, 0)); + question->cppobj = QuestionPtr(new Question(source)); + return (question); } -static void -Question_destroy(s_Question* self) { - self->question.reset(); - Py_TYPE(self)->tp_free(self); -} - -static PyObject* -Question_getName(s_Question* self) { - s_Name* name; - - // is this the best way to do this? - name = static_cast(name_type.tp_alloc(&name_type, 0)); - if (name != NULL) { - name->cppobj = new Name(self->question->getName()); - } - - return (name); -} - -static PyObject* -Question_getType(s_Question* self) { - s_RRType* rrtype; - - rrtype = static_cast(rrtype_type.tp_alloc(&rrtype_type, 0)); - if (rrtype != NULL) { - rrtype->rrtype = new RRType(self->question->getType()); - } - - return (rrtype); -} - -static PyObject* -Question_getClass(s_Question* self) { - s_RRClass* rrclass; - - rrclass = static_cast(rrclass_type.tp_alloc(&rrclass_type, 0)); - if (rrclass != NULL) { - rrclass->rrclass = new RRClass(self->question->getClass()); - } - - return (rrclass); -} - - -static PyObject* -Question_toText(s_Question* self) { - // Py_BuildValue makes python objects from native data - return (Py_BuildValue("s", self->question->toText().c_str())); -} - -static PyObject* -Question_str(PyObject* self) { - // Simply call the to_text method we already defined - return (PyObject_CallMethod(self, - const_cast("to_text"), - const_cast(""))); -} - -static PyObject* -Question_toWire(s_Question* self, PyObject* args) { - PyObject* bytes; - s_MessageRenderer* mr; - - if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { - PyObject* bytes_o = bytes; - - // Max length is Name::MAX_WIRE + rrclass (2) + rrtype (2) - OutputBuffer buffer(Name::MAX_WIRE + 4); - self->question->toWire(buffer); - PyObject* n = PyBytes_FromStringAndSize(static_cast(buffer.getData()), - buffer.getLength()); - PyObject* result = PySequence_InPlaceConcat(bytes_o, n); - // We need to release the object we temporarily created here - // to prevent memory leak - Py_DECREF(n); - return (result); - } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { - self->question->toWire(*mr->messagerenderer); - // If we return NULL it is seen as an error, so use this for - // None returns - Py_RETURN_NONE; - } - 
PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "toWire argument must be a sequence object or a MessageRenderer"); - return (NULL); -} - -// end of Question - - -// Module Initialization, all statics are initialized here bool -initModulePart_Question(PyObject* mod) { - // Add the exceptions to the module - - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! (leaving - // this out results in segmentation faults) - if (PyType_Ready(&question_type) < 0) { - return (false); +PyQuestion_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - Py_INCREF(&question_type); - PyModule_AddObject(mod, "Question", - reinterpret_cast(&question_type)); - - return (true); + return (PyObject_TypeCheck(obj, &question_type)); } + +const Question& +PyQuestion_ToQuestion(const PyObject* question_obj) { + if (question_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in Question PyObject conversion"); + } + const s_Question* question = static_cast(question_obj); + return (*question->cppobj); +} + +} // end python namespace +} // end dns namespace +} // end isc namespace diff --git a/src/lib/dns/python/question_python.h b/src/lib/dns/python/question_python.h new file mode 100644 index 0000000000..f5d78b1372 --- /dev/null +++ b/src/lib/dns/python/question_python.h @@ -0,0 +1,66 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_QUESTION_H +#define __PYTHON_QUESTION_H 1 + +#include + +namespace isc { +namespace dns { +class Question; + +namespace python { + +extern PyObject* po_EmptyQuestion; + +extern PyTypeObject question_type; + +/// This is a simple shortcut to create a python Question object (in the +/// form of a pointer to PyObject) with minimal exception safety. +/// On success, it returns a valid pointer to PyObject with a reference +/// counter of 1; if something goes wrong it throws an exception (it never +/// returns a NULL pointer). +/// This function is expected to be called within a try block +/// followed by necessary setup for python exception. +PyObject* createQuestionObject(const Question& source); + +/// \brief Checks if the given python object is a Question object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type Question, false otherwise +bool PyQuestion_Check(PyObject* obj); + +/// \brief Returns a reference to the Question object contained within the given +/// Python object. 
+/// +/// \note The given object MUST be of type Question; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyQuestion_Check() +/// +/// \note This is not a copy; if the Question is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param question_obj The question object to convert +const Question& PyQuestion_ToQuestion(const PyObject* question_obj); + +} // namespace python +} // namespace dns +} // namespace isc +#endif // __PYTHON_QUESTION_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/python/rcode_python.cc b/src/lib/dns/python/rcode_python.cc index b594ad33b5..42b48e7b62 100644 --- a/src/lib/dns/python/rcode_python.cc +++ b/src/lib/dns/python/rcode_python.cc @@ -15,34 +15,39 @@ #include #include - #include +#include #include "pydnspp_common.h" #include "rcode_python.h" using namespace isc::dns; using namespace isc::dns::python; - -// -// Declaration of the custom exceptions (None for this class) - -// -// Definition of the classes -// - -// For each class, we need a struct, a helper functions (init, destroy, -// and static wrappers around the methods we export), a list of methods, -// and a type description - -// -// Rcode -// - -// Trivial constructor. -s_Rcode::s_Rcode() : cppobj(NULL), static_code(false) {} +using namespace isc::util::python; namespace { +// The s_* Class simply covers one instantiation of the object. +// +// We added a helper variable static_code here +// Since we can create Rcodes dynamically with Rcode(int), but also +// use the static globals (Rcode::NOERROR() etc), we use this +// variable to see if the code came from one of the latter, in which +// case Rcode_destroy should not free it (the other option is to +// allocate new Rcodes for every use of the static ones, but this +// seems more efficient). +// +// Follow-up note: we don't have to use the proxy function in the python lib; +// we can just define class specific constants directly (see TSIGError). +// We should make this cleanup later. +class s_Rcode : public PyObject { +public: + s_Rcode() : cppobj(NULL), static_code(false) {}; + const Rcode* cppobj; + bool static_code; +}; + +typedef CPPPyObjectContainer RcodeContainer; + int Rcode_init(s_Rcode* const self, PyObject* args); void Rcode_destroy(s_Rcode* const self); @@ -282,7 +287,7 @@ Rcode_BADVERS(const s_Rcode*) { return (Rcode_createStatic(Rcode::BADVERS())); } -PyObject* +PyObject* Rcode_richcmp(const s_Rcode* const self, const s_Rcode* const other, const int op) { @@ -376,59 +381,31 @@ PyTypeObject rcode_type = { 0 // tp_version_tag }; -// Module Initialization, all statics are initialized here -bool -initModulePart_Rcode(PyObject* mod) { - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! 
(leaving - // this out results in segmentation faults) - if (PyType_Ready(&rcode_type) < 0) { - return (false); - } - Py_INCREF(&rcode_type); - void* p = &rcode_type; - if (PyModule_AddObject(mod, "Rcode", static_cast(p)) != 0) { - Py_DECREF(&rcode_type); - return (false); - } - - addClassVariable(rcode_type, "NOERROR_CODE", - Py_BuildValue("h", Rcode::NOERROR_CODE)); - addClassVariable(rcode_type, "FORMERR_CODE", - Py_BuildValue("h", Rcode::FORMERR_CODE)); - addClassVariable(rcode_type, "SERVFAIL_CODE", - Py_BuildValue("h", Rcode::SERVFAIL_CODE)); - addClassVariable(rcode_type, "NXDOMAIN_CODE", - Py_BuildValue("h", Rcode::NXDOMAIN_CODE)); - addClassVariable(rcode_type, "NOTIMP_CODE", - Py_BuildValue("h", Rcode::NOTIMP_CODE)); - addClassVariable(rcode_type, "REFUSED_CODE", - Py_BuildValue("h", Rcode::REFUSED_CODE)); - addClassVariable(rcode_type, "YXDOMAIN_CODE", - Py_BuildValue("h", Rcode::YXDOMAIN_CODE)); - addClassVariable(rcode_type, "YXRRSET_CODE", - Py_BuildValue("h", Rcode::YXRRSET_CODE)); - addClassVariable(rcode_type, "NXRRSET_CODE", - Py_BuildValue("h", Rcode::NXRRSET_CODE)); - addClassVariable(rcode_type, "NOTAUTH_CODE", - Py_BuildValue("h", Rcode::NOTAUTH_CODE)); - addClassVariable(rcode_type, "NOTZONE_CODE", - Py_BuildValue("h", Rcode::NOTZONE_CODE)); - addClassVariable(rcode_type, "RESERVED11_CODE", - Py_BuildValue("h", Rcode::RESERVED11_CODE)); - addClassVariable(rcode_type, "RESERVED12_CODE", - Py_BuildValue("h", Rcode::RESERVED12_CODE)); - addClassVariable(rcode_type, "RESERVED13_CODE", - Py_BuildValue("h", Rcode::RESERVED13_CODE)); - addClassVariable(rcode_type, "RESERVED14_CODE", - Py_BuildValue("h", Rcode::RESERVED14_CODE)); - addClassVariable(rcode_type, "RESERVED15_CODE", - Py_BuildValue("h", Rcode::RESERVED15_CODE)); - addClassVariable(rcode_type, "BADVERS_CODE", - Py_BuildValue("h", Rcode::BADVERS_CODE)); - - return (true); +PyObject* +createRcodeObject(const Rcode& source) { + RcodeContainer container(PyObject_New(s_Rcode, &rcode_type)); + container.set(new Rcode(source)); + return (container.release()); } + +bool +PyRcode_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); + } + return (PyObject_TypeCheck(obj, &rcode_type)); +} + +const Rcode& +PyRcode_ToRcode(const PyObject* rcode_obj) { + if (rcode_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in Rcode PyObject conversion"); + } + const s_Rcode* rcode = static_cast(rcode_obj); + return (*rcode->cppobj); +} + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/python/rcode_python.h b/src/lib/dns/python/rcode_python.h index 9b5e699e85..a149406c0e 100644 --- a/src/lib/dns/python/rcode_python.h +++ b/src/lib/dns/python/rcode_python.h @@ -23,29 +23,36 @@ class Rcode; namespace python { -// The s_* Class simply covers one instantiation of the object. -// -// We added a helper variable static_code here -// Since we can create Rcodes dynamically with Rcode(int), but also -// use the static globals (Rcode::NOERROR() etc), we use this -// variable to see if the code came from one of the latter, in which -// case Rcode_destroy should not free it (the other option is to -// allocate new Rcodes for every use of the static ones, but this -// seems more efficient). -// -// Follow-up note: we don't have to use the proxy function in the python lib; -// we can just define class specific constants directly (see TSIGError). -// We should make this cleanup later. 
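The Rcode wrappers above follow the create/check/convert convention (createRcodeObject(), PyRcode_Check(), PyRcode_ToRcode()) that the rest of this change uses as well. A minimal sketch of how other binding code might call it, with a hypothetical wrapRcodeExample() function (the name and surrounding argument handling are illustrative, not part of this change):

    // Hypothetical caller: wrap a C++ Rcode in a new Python object.
    // createRcodeObject() may throw, so it is called inside a try block and
    // any failure is translated into a Python exception, as its documentation
    // requires.
    PyObject*
    wrapRcodeExample(const isc::dns::Rcode& rcode) {
        try {
            return (isc::dns::python::createRcodeObject(rcode));
        } catch (const std::exception& ex) {
            PyErr_SetString(PyExc_SystemError, ex.what());
            return (NULL);
        }
    }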
-class s_Rcode : public PyObject { -public: - s_Rcode(); - const Rcode* cppobj; - bool static_code; -}; - extern PyTypeObject rcode_type; -bool initModulePart_Rcode(PyObject* mod); +/// This is a simple shortcut to create a python Rcode object (in the +/// form of a pointer to PyObject) with minimal exception safety. +/// On success, it returns a valid pointer to PyObject with a reference +/// counter of 1; if something goes wrong it throws an exception (it never +/// returns a NULL pointer). +/// This function is expected to be called within a try block +/// followed by necessary setup for python exception. +PyObject* createRcodeObject(const Rcode& source); + +/// \brief Checks if the given python object is a Rcode object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type Rcode, false otherwise +bool PyRcode_Check(PyObject* obj); + +/// \brief Returns a reference to the Rcode object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type Rcode; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyRcode_Check() +/// +/// \note This is not a copy; if the Rcode is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param rcode_obj The rcode object to convert +const Rcode& PyRcode_ToRcode(const PyObject* rcode_obj); } // namespace python } // namespace dns diff --git a/src/lib/dns/python/rdata_python.cc b/src/lib/dns/python/rdata_python.cc index faa4f4c41f..06c0263fa6 100644 --- a/src/lib/dns/python/rdata_python.cc +++ b/src/lib/dns/python/rdata_python.cc @@ -12,60 +12,48 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. 
+#define PY_SSIZE_T_CLEAN +#include #include +#include +#include +#include + +#include "rdata_python.h" +#include "rrtype_python.h" +#include "rrclass_python.h" +#include "messagerenderer_python.h" + using namespace isc::dns; +using namespace isc::dns::python; using namespace isc::util; +using namespace isc::util::python; using namespace isc::dns::rdata; -// -// Declaration of the custom exceptions -// Initialization and addition of these go in the initModulePart -// function at the end of this file -// -static PyObject* po_InvalidRdataLength; -static PyObject* po_InvalidRdataText; -static PyObject* po_CharStringTooLong; - -// -// Definition of the classes -// - -// For each class, we need a struct, a helper functions (init, destroy, -// and static wrappers around the methods we export), a list of methods, -// and a type description - -// -// Rdata -// - -// The s_* Class simply coverst one instantiation of the object - -// Using a shared_ptr here should not really be necessary (PyObject -// is already reference-counted), however internally on the cpp side, -// not doing so might result in problems, since we can't copy construct -// rdata field, adding them to rrsets results in a problem when the -// rrset is destroyed later +namespace { class s_Rdata : public PyObject { public: - RdataPtr rdata; + isc::dns::rdata::ConstRdataPtr cppobj; }; +typedef CPPPyObjectContainer RdataContainer; + // // We declare the functions here, the definitions are below // the type definition of the object, since both can use the other // // General creation and destruction -static int Rdata_init(s_Rdata* self, PyObject* args); -static void Rdata_destroy(s_Rdata* self); +int Rdata_init(s_Rdata* self, PyObject* args); +void Rdata_destroy(s_Rdata* self); // These are the functions we export -static PyObject* Rdata_toText(s_Rdata* self); +PyObject* Rdata_toText(s_Rdata* self); // This is a second version of toText, we need one where the argument // is a PyObject*, for the str() function in python. -static PyObject* Rdata_str(PyObject* self); -static PyObject* Rdata_toWire(s_Rdata* self, PyObject* args); -static PyObject* RData_richcmp(s_Rdata* self, s_Rdata* other, int op); +PyObject* Rdata_str(PyObject* self); +PyObject* Rdata_toWire(s_Rdata* self, PyObject* args); +PyObject* RData_richcmp(s_Rdata* self, s_Rdata* other, int op); // This list contains the actual set of functions we have in // python. Each entry has @@ -73,7 +61,7 @@ static PyObject* RData_richcmp(s_Rdata* self, s_Rdata* other, int op); // 2. Our static function here // 3. Argument type // 4. 
Documentation -static PyMethodDef Rdata_methods[] = { +PyMethodDef Rdata_methods[] = { { "to_text", reinterpret_cast(Rdata_toText), METH_NOARGS, "Returns the string representation" }, { "to_wire", reinterpret_cast(Rdata_toWire), METH_VARARGS, @@ -86,10 +74,145 @@ static PyMethodDef Rdata_methods[] = { { NULL, NULL, 0, NULL } }; +int +Rdata_init(s_Rdata* self, PyObject* args) { + PyObject* rrtype; + PyObject* rrclass; + const char* s; + const char* data; + Py_ssize_t len; + + // Create from string + if (PyArg_ParseTuple(args, "O!O!s", &rrtype_type, &rrtype, + &rrclass_type, &rrclass, + &s)) { + self->cppobj = createRdata(PyRRType_ToRRType(rrtype), + PyRRClass_ToRRClass(rrclass), s); + return (0); + } else if (PyArg_ParseTuple(args, "O!O!y#", &rrtype_type, &rrtype, + &rrclass_type, &rrclass, &data, &len)) { + InputBuffer input_buffer(data, len); + self->cppobj = createRdata(PyRRType_ToRRType(rrtype), + PyRRClass_ToRRClass(rrclass), + input_buffer, len); + return (0); + } + + return (-1); +} + +void +Rdata_destroy(s_Rdata* self) { + // Clear the shared_ptr so that its reference count is zero + // before we call tp_free() (there is no direct release()) + self->cppobj.reset(); + Py_TYPE(self)->tp_free(self); +} + +PyObject* +Rdata_toText(s_Rdata* self) { + // Py_BuildValue makes python objects from native data + return (Py_BuildValue("s", self->cppobj->toText().c_str())); +} + +PyObject* +Rdata_str(PyObject* self) { + // Simply call the to_text method we already defined + return (PyObject_CallMethod(self, + const_cast("to_text"), + const_cast(""))); +} + +PyObject* +Rdata_toWire(s_Rdata* self, PyObject* args) { + PyObject* bytes; + PyObject* mr; + + if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { + PyObject* bytes_o = bytes; + + OutputBuffer buffer(4); + self->cppobj->toWire(buffer); + PyObject* rd_bytes = PyBytes_FromStringAndSize(static_cast(buffer.getData()), buffer.getLength()); + PyObject* result = PySequence_InPlaceConcat(bytes_o, rd_bytes); + // We need to release the object we temporarily created here + // to prevent memory leak + Py_DECREF(rd_bytes); + return (result); + } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { + self->cppobj->toWire(PyMessageRenderer_ToMessageRenderer(mr)); + // If we return NULL it is seen as an error, so use this for + // None returns + Py_RETURN_NONE; + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "toWire argument must be a sequence object or a MessageRenderer"); + return (NULL); +} + +PyObject* +RData_richcmp(s_Rdata* self, s_Rdata* other, int op) { + bool c; + + // Check for null and if the types match. 
If different type, + // simply return False + if (!other || (self->ob_type != other->ob_type)) { + Py_RETURN_FALSE; + } + + switch (op) { + case Py_LT: + c = self->cppobj->compare(*other->cppobj) < 0; + break; + case Py_LE: + c = self->cppobj->compare(*other->cppobj) < 0 || + self->cppobj->compare(*other->cppobj) == 0; + break; + case Py_EQ: + c = self->cppobj->compare(*other->cppobj) == 0; + break; + case Py_NE: + c = self->cppobj->compare(*other->cppobj) != 0; + break; + case Py_GT: + c = self->cppobj->compare(*other->cppobj) > 0; + break; + case Py_GE: + c = self->cppobj->compare(*other->cppobj) > 0 || + self->cppobj->compare(*other->cppobj) == 0; + break; + default: + PyErr_SetString(PyExc_IndexError, + "Unhandled rich comparison operator"); + return (NULL); + } + if (c) + Py_RETURN_TRUE; + else + Py_RETURN_FALSE; +} + +} // end of unnamed namespace + +namespace isc { +namespace dns { +namespace python { + + +// +// Declaration of the custom exceptions +// Initialization and addition of these go in the initModulePart +// function in pydnspp +// +PyObject* po_InvalidRdataLength; +PyObject* po_InvalidRdataText; +PyObject* po_CharStringTooLong; + // This defines the complete type for reflection in python and // parsing of PyObject* to s_Rdata // Most of the functions are not actually implemented and NULL here. -static PyTypeObject rdata_type = { +PyTypeObject rdata_type = { PyVarObject_HEAD_INIT(NULL, 0) "pydnspp.Rdata", sizeof(s_Rdata), // tp_basicsize @@ -103,7 +226,7 @@ static PyTypeObject rdata_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call Rdata_str, // tp_str NULL, // tp_getattro @@ -140,150 +263,36 @@ static PyTypeObject rdata_type = { 0 // tp_version_tag }; -static int -Rdata_init(s_Rdata* self, PyObject* args) { - s_RRType* rrtype; - s_RRClass* rrclass; - const char* s; - const char* data; - Py_ssize_t len; - - // Create from string - if (PyArg_ParseTuple(args, "O!O!s", &rrtype_type, &rrtype, - &rrclass_type, &rrclass, - &s)) { - self->rdata = createRdata(*rrtype->rrtype, *rrclass->rrclass, s); - return (0); - } else if (PyArg_ParseTuple(args, "O!O!y#", &rrtype_type, &rrtype, - &rrclass_type, &rrclass, &data, &len)) { - InputBuffer input_buffer(data, len); - self->rdata = createRdata(*rrtype->rrtype, *rrclass->rrclass, - input_buffer, len); - return (0); +PyObject* +createRdataObject(ConstRdataPtr source) { + s_Rdata* py_rdata = + static_cast(rdata_type.tp_alloc(&rdata_type, 0)); + if (py_rdata == NULL) { + isc_throw(PyCPPWrapperException, "Unexpected NULL C++ object, " + "probably due to short memory"); } - - return (-1); + py_rdata->cppobj = source; + return (py_rdata); } -static void -Rdata_destroy(s_Rdata* self) { - // Clear the shared_ptr so that its reference count is zero - // before we call tp_free() (there is no direct release()) - self->rdata.reset(); - Py_TYPE(self)->tp_free(self); -} - -static PyObject* -Rdata_toText(s_Rdata* self) { - // Py_BuildValue makes python objects from native data - return (Py_BuildValue("s", self->rdata->toText().c_str())); -} - -static PyObject* -Rdata_str(PyObject* self) { - // Simply call the to_text method we already defined - return (PyObject_CallMethod(self, - const_cast("to_text"), - const_cast(""))); -} - -static PyObject* -Rdata_toWire(s_Rdata* self, PyObject* args) { - PyObject* bytes; - s_MessageRenderer* mr; - - if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { - PyObject* bytes_o = bytes; - - OutputBuffer buffer(4); - 
self->rdata->toWire(buffer); - PyObject* rd_bytes = PyBytes_FromStringAndSize(static_cast(buffer.getData()), buffer.getLength()); - PyObject* result = PySequence_InPlaceConcat(bytes_o, rd_bytes); - // We need to release the object we temporarily created here - // to prevent memory leak - Py_DECREF(rd_bytes); - return (result); - } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { - self->rdata->toWire(*mr->messagerenderer); - // If we return NULL it is seen as an error, so use this for - // None returns - Py_RETURN_NONE; - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "toWire argument must be a sequence object or a MessageRenderer"); - return (NULL); -} - - - -static PyObject* -RData_richcmp(s_Rdata* self, s_Rdata* other, int op) { - bool c; - - // Check for null and if the types match. If different type, - // simply return False - if (!other || (self->ob_type != other->ob_type)) { - Py_RETURN_FALSE; - } - - switch (op) { - case Py_LT: - c = self->rdata->compare(*other->rdata) < 0; - break; - case Py_LE: - c = self->rdata->compare(*other->rdata) < 0 || - self->rdata->compare(*other->rdata) == 0; - break; - case Py_EQ: - c = self->rdata->compare(*other->rdata) == 0; - break; - case Py_NE: - c = self->rdata->compare(*other->rdata) != 0; - break; - case Py_GT: - c = self->rdata->compare(*other->rdata) > 0; - break; - case Py_GE: - c = self->rdata->compare(*other->rdata) > 0 || - self->rdata->compare(*other->rdata) == 0; - break; - default: - PyErr_SetString(PyExc_IndexError, - "Unhandled rich comparison operator"); - return (NULL); - } - if (c) - Py_RETURN_TRUE; - else - Py_RETURN_FALSE; -} -// end of Rdata - - -// Module Initialization, all statics are initialized here bool -initModulePart_Rdata(PyObject* mod) { - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! (leaving - // this out results in segmentation faults) - if (PyType_Ready(&rdata_type) < 0) { - return (false); +PyRdata_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - Py_INCREF(&rdata_type); - PyModule_AddObject(mod, "Rdata", - reinterpret_cast(&rdata_type)); - - // Add the exceptions to the class - po_InvalidRdataLength = PyErr_NewException("pydnspp.InvalidRdataLength", NULL, NULL); - PyModule_AddObject(mod, "InvalidRdataLength", po_InvalidRdataLength); - - po_InvalidRdataText = PyErr_NewException("pydnspp.InvalidRdataText", NULL, NULL); - PyModule_AddObject(mod, "InvalidRdataText", po_InvalidRdataText); - - po_CharStringTooLong = PyErr_NewException("pydnspp.CharStringTooLong", NULL, NULL); - PyModule_AddObject(mod, "CharStringTooLong", po_CharStringTooLong); - - - return (true); + return (PyObject_TypeCheck(obj, &rdata_type)); } + +const Rdata& +PyRdata_ToRdata(const PyObject* rdata_obj) { + if (rdata_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in Rdata PyObject conversion"); + } + const s_Rdata* rdata = static_cast(rdata_obj); + return (*rdata->cppobj); +} + +} // end python namespace +} // end dns namespace +} // end isc namespace diff --git a/src/lib/dns/python/rdata_python.h b/src/lib/dns/python/rdata_python.h new file mode 100644 index 0000000000..c7ddd57a6d --- /dev/null +++ b/src/lib/dns/python/rdata_python.h @@ -0,0 +1,68 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. 
("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_RDATA_H +#define __PYTHON_RDATA_H 1 + +#include + +#include + +namespace isc { +namespace dns { +namespace python { + +extern PyObject* po_InvalidRdataLength; +extern PyObject* po_InvalidRdataText; +extern PyObject* po_CharStringTooLong; + +extern PyTypeObject rdata_type; + +/// This is a simple shortcut to create a python Rdata object (in the +/// form of a pointer to PyObject) with minimal exception safety. +/// On success, it returns a valid pointer to PyObject with a reference +/// counter of 1; if something goes wrong it throws an exception (it never +/// returns a NULL pointer). +/// This function is expected to be called within a try block +/// followed by necessary setup for python exception. +PyObject* createRdataObject(isc::dns::rdata::ConstRdataPtr source); + +/// \brief Checks if the given python object is a Rdata object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type Rdata, false otherwise +bool PyRdata_Check(PyObject* obj); + +/// \brief Returns a reference to the Rdata object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type Rdata; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyRdata_Check() +/// +/// \note This is not a copy; if the Rdata is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param rdata_obj The rdata object to convert +const isc::dns::rdata::Rdata& PyRdata_ToRdata(const PyObject* rdata_obj); + +} // namespace python +} // namespace dns +} // namespace isc +#endif // __PYTHON_RDATA_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/python/rrclass_python.cc b/src/lib/dns/python/rrclass_python.cc index 6d150c2b5e..00141872e9 100644 --- a/src/lib/dns/python/rrclass_python.cc +++ b/src/lib/dns/python/rrclass_python.cc @@ -11,35 +11,28 @@ // LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. 
+#include #include +#include +#include +#include + +#include "rrclass_python.h" +#include "messagerenderer_python.h" +#include "pydnspp_common.h" + + using namespace isc::dns; +using namespace isc::dns::python; using namespace isc::util; - -// -// Declaration of the custom exceptions -// Initialization and addition of these go in the initModulePart -// function at the end of this file -// -static PyObject* po_InvalidRRClass; -static PyObject* po_IncompleteRRClass; - -// -// Definition of the classes -// - -// For each class, we need a struct, a helper functions (init, destroy, -// and static wrappers around the methods we export), a list of methods, -// and a type description - -// -// RRClass -// - +using namespace isc::util::python; +namespace { // The s_* Class simply covers one instantiation of the object class s_RRClass : public PyObject { public: - RRClass* rrclass; + s_RRClass() : cppobj(NULL) {}; + RRClass* cppobj; }; // @@ -48,25 +41,26 @@ public: // // General creation and destruction -static int RRClass_init(s_RRClass* self, PyObject* args); -static void RRClass_destroy(s_RRClass* self); +int RRClass_init(s_RRClass* self, PyObject* args); +void RRClass_destroy(s_RRClass* self); // These are the functions we export -static PyObject* RRClass_toText(s_RRClass* self); +PyObject* RRClass_toText(s_RRClass* self); // This is a second version of toText, we need one where the argument // is a PyObject*, for the str() function in python. -static PyObject* RRClass_str(PyObject* self); -static PyObject* RRClass_toWire(s_RRClass* self, PyObject* args); -static PyObject* RRClass_getCode(s_RRClass* self); -static PyObject* RRClass_richcmp(s_RRClass* self, s_RRClass* other, int op); +PyObject* RRClass_str(PyObject* self); +PyObject* RRClass_toWire(s_RRClass* self, PyObject* args); +PyObject* RRClass_getCode(s_RRClass* self); +PyObject* RRClass_richcmp(s_RRClass* self, s_RRClass* other, int op); // Static function for direct class creation -static PyObject* RRClass_IN(s_RRClass *self); -static PyObject* RRClass_CH(s_RRClass *self); -static PyObject* RRClass_HS(s_RRClass *self); -static PyObject* RRClass_NONE(s_RRClass *self); -static PyObject* RRClass_ANY(s_RRClass *self); +PyObject* RRClass_IN(s_RRClass *self); +PyObject* RRClass_CH(s_RRClass *self); +PyObject* RRClass_HS(s_RRClass *self); +PyObject* RRClass_NONE(s_RRClass *self); +PyObject* RRClass_ANY(s_RRClass *self); +typedef CPPPyObjectContainer RRClassContainer; // This list contains the actual set of functions we have in // python. Each entry has @@ -74,7 +68,7 @@ static PyObject* RRClass_ANY(s_RRClass *self); // 2. Our static function here // 3. Argument type // 4. Documentation -static PyMethodDef RRClass_methods[] = { +PyMethodDef RRClass_methods[] = { { "to_text", reinterpret_cast(RRClass_toText), METH_NOARGS, "Returns the string representation" }, { "to_wire", reinterpret_cast(RRClass_toWire), METH_VARARGS, @@ -94,10 +88,201 @@ static PyMethodDef RRClass_methods[] = { { NULL, NULL, 0, NULL } }; +int +RRClass_init(s_RRClass* self, PyObject* args) { + const char* s; + long i; + PyObject* bytes = NULL; + // The constructor argument can be a string ("IN"), an integer (1), + // or a sequence of numbers between 0 and 65535 (wire code) + + // Note that PyArg_ParseType can set PyError, and we need to clear + // that if we try several like here. Otherwise the *next* python + // call will suddenly appear to throw an exception. 
+ // (the way to do exceptions is to set PyErr and return -1) + try { + if (PyArg_ParseTuple(args, "s", &s)) { + self->cppobj = new RRClass(s); + return (0); + } else if (PyArg_ParseTuple(args, "l", &i)) { + if (i < 0 || i > 0xffff) { + PyErr_Clear(); + PyErr_SetString(PyExc_ValueError, + "RR class number out of range"); + return (-1); + } + self->cppobj = new RRClass(i); + return (0); + } else if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { + uint8_t data[2]; + int result = readDataFromSequence(data, 2, bytes); + if (result != 0) { + return (result); + } + InputBuffer ib(data, 2); + self->cppobj = new RRClass(ib); + PyErr_Clear(); + return (0); + } + // Incomplete is never thrown, a type error would have already been raised + //when we try to read the 2 bytes above + } catch (const InvalidRRClass& ic) { + PyErr_Clear(); + PyErr_SetString(po_InvalidRRClass, ic.what()); + return (-1); + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "no valid type in constructor argument"); + return (-1); +} + +void +RRClass_destroy(s_RRClass* self) { + delete self->cppobj; + self->cppobj = NULL; + Py_TYPE(self)->tp_free(self); +} + +PyObject* +RRClass_toText(s_RRClass* self) { + // Py_BuildValue makes python objects from native data + return (Py_BuildValue("s", self->cppobj->toText().c_str())); +} + +PyObject* +RRClass_str(PyObject* self) { + // Simply call the to_text method we already defined + return (PyObject_CallMethod(self, + const_cast("to_text"), + const_cast(""))); +} + +PyObject* +RRClass_toWire(s_RRClass* self, PyObject* args) { + PyObject* bytes; + PyObject* mr; + + if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { + PyObject* bytes_o = bytes; + + OutputBuffer buffer(2); + self->cppobj->toWire(buffer); + PyObject* n = PyBytes_FromStringAndSize(static_cast(buffer.getData()), buffer.getLength()); + PyObject* result = PySequence_InPlaceConcat(bytes_o, n); + // We need to release the object we temporarily created here + // to prevent memory leak + Py_DECREF(n); + return (result); + } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { + self->cppobj->toWire(PyMessageRenderer_ToMessageRenderer(mr)); + // If we return NULL it is seen as an error, so use this for + // None returns + Py_RETURN_NONE; + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "toWire argument must be a sequence object or a MessageRenderer"); + return (NULL); +} + +PyObject* +RRClass_getCode(s_RRClass* self) { + return (Py_BuildValue("I", self->cppobj->getCode())); +} + +PyObject* +RRClass_richcmp(s_RRClass* self, s_RRClass* other, int op) { + bool c; + + // Check for null and if the types match. If different type, + // simply return False + if (!other || (self->ob_type != other->ob_type)) { + Py_RETURN_FALSE; + } + + switch (op) { + case Py_LT: + c = *self->cppobj < *other->cppobj; + break; + case Py_LE: + c = *self->cppobj < *other->cppobj || + *self->cppobj == *other->cppobj; + break; + case Py_EQ: + c = *self->cppobj == *other->cppobj; + break; + case Py_NE: + c = *self->cppobj != *other->cppobj; + break; + case Py_GT: + c = *other->cppobj < *self->cppobj; + break; + case Py_GE: + c = *other->cppobj < *self->cppobj || + *self->cppobj == *other->cppobj; + break; + default: + PyErr_SetString(PyExc_IndexError, + "Unhandled rich comparison operator"); + return (NULL); + } + if (c) + Py_RETURN_TRUE; + else + Py_RETURN_FALSE; +} + +// +// Common function for RRClass_IN/CH/etc. 
+// +PyObject* RRClass_createStatic(RRClass stc) { + s_RRClass* ret = PyObject_New(s_RRClass, &rrclass_type); + if (ret != NULL) { + ret->cppobj = new RRClass(stc); + } + return (ret); +} + +PyObject* RRClass_IN(s_RRClass*) { + return (RRClass_createStatic(RRClass::IN())); +} + +PyObject* RRClass_CH(s_RRClass*) { + return (RRClass_createStatic(RRClass::CH())); +} + +PyObject* RRClass_HS(s_RRClass*) { + return (RRClass_createStatic(RRClass::HS())); +} + +PyObject* RRClass_NONE(s_RRClass*) { + return (RRClass_createStatic(RRClass::NONE())); +} + +PyObject* RRClass_ANY(s_RRClass*) { + return (RRClass_createStatic(RRClass::ANY())); +} + +} // end anonymous namespace + +namespace isc { +namespace dns { +namespace python { + +// +// Declaration of the custom exceptions +// Initialization and addition of these go in the initModulePart +// function in pydnspp.cc +// +PyObject* po_InvalidRRClass; +PyObject* po_IncompleteRRClass; + + // This defines the complete type for reflection in python and // parsing of PyObject* to s_RRClass // Most of the functions are not actually implemented and NULL here. -static PyTypeObject rrclass_type = { +PyTypeObject rrclass_type = { PyVarObject_HEAD_INIT(NULL, 0) "pydnspp.RRClass", sizeof(s_RRClass), // tp_basicsize @@ -111,7 +296,7 @@ static PyTypeObject rrclass_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call RRClass_str, // tp_str NULL, // tp_getattro @@ -150,204 +335,32 @@ static PyTypeObject rrclass_type = { 0 // tp_version_tag }; -static int -RRClass_init(s_RRClass* self, PyObject* args) { - const char* s; - long i; - PyObject* bytes = NULL; - // The constructor argument can be a string ("IN"), an integer (1), - // or a sequence of numbers between 0 and 65535 (wire code) - - // Note that PyArg_ParseType can set PyError, and we need to clear - // that if we try several like here. Otherwise the *next* python - // call will suddenly appear to throw an exception. 
- // (the way to do exceptions is to set PyErr and return -1) - try { - if (PyArg_ParseTuple(args, "s", &s)) { - self->rrclass = new RRClass(s); - return (0); - } else if (PyArg_ParseTuple(args, "l", &i)) { - if (i < 0 || i > 0xffff) { - PyErr_Clear(); - PyErr_SetString(PyExc_ValueError, - "RR class number out of range"); - return (-1); - } - self->rrclass = new RRClass(i); - return (0); - } else if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { - uint8_t data[2]; - int result = readDataFromSequence(data, 2, bytes); - if (result != 0) { - return (result); - } - InputBuffer ib(data, 2); - self->rrclass = new RRClass(ib); - PyErr_Clear(); - return (0); - } - // Incomplete is never thrown, a type error would have already been raised - //when we try to read the 2 bytes above - } catch (const InvalidRRClass& ic) { - PyErr_Clear(); - PyErr_SetString(po_InvalidRRClass, ic.what()); - return (-1); - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "no valid type in constructor argument"); - return (-1); +PyObject* +createRRClassObject(const RRClass& source) { + RRClassContainer container(PyObject_New(s_RRClass, &rrclass_type)); + container.set(new RRClass(source)); + return (container.release()); } -static void -RRClass_destroy(s_RRClass* self) { - delete self->rrclass; - self->rrclass = NULL; - Py_TYPE(self)->tp_free(self); -} -static PyObject* -RRClass_toText(s_RRClass* self) { - // Py_BuildValue makes python objects from native data - return (Py_BuildValue("s", self->rrclass->toText().c_str())); -} - -static PyObject* -RRClass_str(PyObject* self) { - // Simply call the to_text method we already defined - return (PyObject_CallMethod(self, - const_cast("to_text"), - const_cast(""))); -} - -static PyObject* -RRClass_toWire(s_RRClass* self, PyObject* args) { - PyObject* bytes; - s_MessageRenderer* mr; - - if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { - PyObject* bytes_o = bytes; - - OutputBuffer buffer(2); - self->rrclass->toWire(buffer); - PyObject* n = PyBytes_FromStringAndSize(static_cast(buffer.getData()), buffer.getLength()); - PyObject* result = PySequence_InPlaceConcat(bytes_o, n); - // We need to release the object we temporarily created here - // to prevent memory leak - Py_DECREF(n); - return (result); - } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { - self->rrclass->toWire(*mr->messagerenderer); - // If we return NULL it is seen as an error, so use this for - // None returns - Py_RETURN_NONE; - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "toWire argument must be a sequence object or a MessageRenderer"); - return (NULL); -} - -static PyObject* -RRClass_getCode(s_RRClass* self) { - return (Py_BuildValue("I", self->rrclass->getCode())); -} - -static PyObject* -RRClass_richcmp(s_RRClass* self, s_RRClass* other, int op) { - bool c; - - // Check for null and if the types match. 
If different type, - // simply return False - if (!other || (self->ob_type != other->ob_type)) { - Py_RETURN_FALSE; - } - - switch (op) { - case Py_LT: - c = *self->rrclass < *other->rrclass; - break; - case Py_LE: - c = *self->rrclass < *other->rrclass || - *self->rrclass == *other->rrclass; - break; - case Py_EQ: - c = *self->rrclass == *other->rrclass; - break; - case Py_NE: - c = *self->rrclass != *other->rrclass; - break; - case Py_GT: - c = *other->rrclass < *self->rrclass; - break; - case Py_GE: - c = *other->rrclass < *self->rrclass || - *self->rrclass == *other->rrclass; - break; - default: - PyErr_SetString(PyExc_IndexError, - "Unhandled rich comparison operator"); - return (NULL); - } - if (c) - Py_RETURN_TRUE; - else - Py_RETURN_FALSE; -} - -// -// Common function for RRClass_IN/CH/etc. -// -static PyObject* RRClass_createStatic(RRClass stc) { - s_RRClass* ret = PyObject_New(s_RRClass, &rrclass_type); - if (ret != NULL) { - ret->rrclass = new RRClass(stc); - } - return (ret); -} - -static PyObject* RRClass_IN(s_RRClass*) { - return (RRClass_createStatic(RRClass::IN())); -} - -static PyObject* RRClass_CH(s_RRClass*) { - return (RRClass_createStatic(RRClass::CH())); -} - -static PyObject* RRClass_HS(s_RRClass*) { - return (RRClass_createStatic(RRClass::HS())); -} - -static PyObject* RRClass_NONE(s_RRClass*) { - return (RRClass_createStatic(RRClass::NONE())); -} - -static PyObject* RRClass_ANY(s_RRClass*) { - return (RRClass_createStatic(RRClass::ANY())); -} -// end of RRClass - - -// Module Initialization, all statics are initialized here bool -initModulePart_RRClass(PyObject* mod) { - // Add the exceptions to the module - po_InvalidRRClass = PyErr_NewException("pydnspp.InvalidRRClass", NULL, NULL); - Py_INCREF(po_InvalidRRClass); - PyModule_AddObject(mod, "InvalidRRClass", po_InvalidRRClass); - po_IncompleteRRClass = PyErr_NewException("pydnspp.IncompleteRRClass", NULL, NULL); - Py_INCREF(po_IncompleteRRClass); - PyModule_AddObject(mod, "IncompleteRRClass", po_IncompleteRRClass); - - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! (leaving - // this out results in segmentation faults) - if (PyType_Ready(&rrclass_type) < 0) { - return (false); +PyRRClass_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - Py_INCREF(&rrclass_type); - PyModule_AddObject(mod, "RRClass", - reinterpret_cast(&rrclass_type)); - - return (true); + return (PyObject_TypeCheck(obj, &rrclass_type)); } + +const RRClass& +PyRRClass_ToRRClass(const PyObject* rrclass_obj) { + if (rrclass_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in RRClass PyObject conversion"); + } + const s_RRClass* rrclass = static_cast(rrclass_obj); + return (*rrclass->cppobj); +} + +} // end namespace python +} // end namespace dns +} // end namespace isc diff --git a/src/lib/dns/python/rrclass_python.h b/src/lib/dns/python/rrclass_python.h new file mode 100644 index 0000000000..f58bba604c --- /dev/null +++ b/src/lib/dns/python/rrclass_python.h @@ -0,0 +1,68 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. 
+// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_RRCLASS_H +#define __PYTHON_RRCLASS_H 1 + +#include + +namespace isc { +namespace dns { +class RRClass; + +namespace python { + +extern PyObject* po_InvalidRRClass; +extern PyObject* po_IncompleteRRClass; + +extern PyTypeObject rrclass_type; + +/// This is a simple shortcut to create a python RRClass object (in the +/// form of a pointer to PyObject) with minimal exception safety. +/// On success, it returns a valid pointer to PyObject with a reference +/// counter of 1; if something goes wrong it throws an exception (it never +/// returns a NULL pointer). +/// This function is expected to be called within a try block +/// followed by necessary setup for python exception. +PyObject* createRRClassObject(const RRClass& source); + +/// \brief Checks if the given python object is a RRClass object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type RRClass, false otherwise +bool PyRRClass_Check(PyObject* obj); + +/// \brief Returns a reference to the RRClass object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type RRClass; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyRRClass_Check() +/// +/// \note This is not a copy; if the RRClass is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param rrclass_obj The rrclass object to convert +const RRClass& PyRRClass_ToRRClass(const PyObject* rrclass_obj); + + +} // namespace python +} // namespace dns +} // namespace isc +#endif // __PYTHON_RRCLASS_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/python/rrset_python.cc b/src/lib/dns/python/rrset_python.cc index 71a0710f8b..9fc3d79166 100644 --- a/src/lib/dns/python/rrset_python.cc +++ b/src/lib/dns/python/rrset_python.cc @@ -12,55 +12,63 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. 
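The to_wire handlers in these wrappers (RRClass above, RRset below, and the others) all share the same two-branch shape: either render into a temporary OutputBuffer and append the wire data to a Python sequence, or render directly into a MessageRenderer. A condensed, hypothetical version of the first branch (toWireAppendExample() and its parameters are illustrative, not part of this change):

    // Render obj into a temporary OutputBuffer, convert the wire data to a
    // Python bytes object, and append it to the sequence (typically a
    // bytearray) supplied by the caller.
    template <typename T>
    PyObject*
    toWireAppendExample(const T& obj, PyObject* bytes, size_t initial_size) {
        isc::util::OutputBuffer buffer(initial_size);
        obj.toWire(buffer);
        PyObject* wire = PyBytes_FromStringAndSize(
            static_cast<const char*>(buffer.getData()), buffer.getLength());
        if (wire == NULL) {
            return (NULL);
        }
        PyObject* result = PySequence_InPlaceConcat(bytes, wire);
        Py_DECREF(wire);    // release the temporary bytes object
        return (result);
    }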
+#include + +#include + #include +#include +#include -// -// Declaration of the custom exceptions -// Initialization and addition of these go in the module init at the -// end -// -static PyObject* po_EmptyRRset; +#include "name_python.h" +#include "pydnspp_common.h" +#include "rrset_python.h" +#include "rrclass_python.h" +#include "rrtype_python.h" +#include "rrttl_python.h" +#include "rdata_python.h" +#include "messagerenderer_python.h" -// -// Definition of the classes -// - -// For each class, we need a struct, a helper functions (init, destroy, -// and static wrappers around the methods we export), a list of methods, -// and a type description +using namespace std; using namespace isc::dns; +using namespace isc::dns::python; using namespace isc::util; +using namespace isc::util::python; -// RRset +namespace { + +// The s_* Class simply coverst one instantiation of the object // Using a shared_ptr here should not really be necessary (PyObject // is already reference-counted), however internally on the cpp side, // not doing so might result in problems, since we can't copy construct -// rrsets, adding them to messages results in a problem when the -// message is destroyed or cleared later +// rdata field, adding them to rrsets results in a problem when the +// rrset is destroyed later class s_RRset : public PyObject { public: - RRsetPtr rrset; + isc::dns::RRsetPtr cppobj; }; -static int RRset_init(s_RRset* self, PyObject* args); -static void RRset_destroy(s_RRset* self); +int RRset_init(s_RRset* self, PyObject* args); +void RRset_destroy(s_RRset* self); + +PyObject* RRset_getRdataCount(s_RRset* self); +PyObject* RRset_getName(s_RRset* self); +PyObject* RRset_getClass(s_RRset* self); +PyObject* RRset_getType(s_RRset* self); +PyObject* RRset_getTTL(s_RRset* self); +PyObject* RRset_setName(s_RRset* self, PyObject* args); +PyObject* RRset_setTTL(s_RRset* self, PyObject* args); +PyObject* RRset_toText(s_RRset* self); +PyObject* RRset_str(PyObject* self); +PyObject* RRset_toWire(s_RRset* self, PyObject* args); +PyObject* RRset_addRdata(s_RRset* self, PyObject* args); +PyObject* RRset_getRdata(s_RRset* self); +PyObject* RRset_removeRRsig(s_RRset* self); -static PyObject* RRset_getRdataCount(s_RRset* self); -static PyObject* RRset_getName(s_RRset* self); -static PyObject* RRset_getClass(s_RRset* self); -static PyObject* RRset_getType(s_RRset* self); -static PyObject* RRset_getTTL(s_RRset* self); -static PyObject* RRset_setName(s_RRset* self, PyObject* args); -static PyObject* RRset_setTTL(s_RRset* self, PyObject* args); -static PyObject* RRset_toText(s_RRset* self); -static PyObject* RRset_str(PyObject* self); -static PyObject* RRset_toWire(s_RRset* self, PyObject* args); -static PyObject* RRset_addRdata(s_RRset* self, PyObject* args); -static PyObject* RRset_getRdata(s_RRset* self); // TODO: iterator? -static PyMethodDef RRset_methods[] = { +PyMethodDef RRset_methods[] = { { "get_rdata_count", reinterpret_cast(RRset_getRdataCount), METH_NOARGS, "Returns the number of rdata fields." 
}, { "get_name", reinterpret_cast(RRset_getName), METH_NOARGS, @@ -88,10 +96,250 @@ static PyMethodDef RRset_methods[] = { "Adds the rdata for one RR to the RRset.\nTakes an Rdata object as an argument" }, { "get_rdata", reinterpret_cast(RRset_getRdata), METH_NOARGS, "Returns a List containing all Rdata elements" }, + { "remove_rrsig", reinterpret_cast(RRset_removeRRsig), METH_NOARGS, + "Clears the list of RRsigs for this RRset" }, { NULL, NULL, 0, NULL } }; -static PyTypeObject rrset_type = { +int +RRset_init(s_RRset* self, PyObject* args) { + PyObject* name; + PyObject* rrclass; + PyObject* rrtype; + PyObject* rrttl; + + if (PyArg_ParseTuple(args, "O!O!O!O!", &name_type, &name, + &rrclass_type, &rrclass, + &rrtype_type, &rrtype, + &rrttl_type, &rrttl + )) { + self->cppobj = RRsetPtr(new RRset(PyName_ToName(name), + PyRRClass_ToRRClass(rrclass), + PyRRType_ToRRType(rrtype), + PyRRTTL_ToRRTTL(rrttl))); + return (0); + } + + self->cppobj = RRsetPtr(); + return (-1); +} + +void +RRset_destroy(s_RRset* self) { + // Clear the shared_ptr so that its reference count is zero + // before we call tp_free() (there is no direct release()) + self->cppobj.reset(); + Py_TYPE(self)->tp_free(self); +} + +PyObject* +RRset_getRdataCount(s_RRset* self) { + return (Py_BuildValue("I", self->cppobj->getRdataCount())); +} + +PyObject* +RRset_getName(s_RRset* self) { + try { + return (createNameObject(self->cppobj->getName())); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure getting rrset Name: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) { + PyErr_SetString(PyExc_SystemError, + "Unexpected failure getting rrset Name"); + } + return (NULL); +} + +PyObject* +RRset_getClass(s_RRset* self) { + try { + return (createRRClassObject(self->cppobj->getClass())); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure getting question RRClass: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) { + PyErr_SetString(PyExc_SystemError, + "Unexpected failure getting question RRClass"); + } + return (NULL); +} + +PyObject* +RRset_getType(s_RRset* self) { + try { + return (createRRTypeObject(self->cppobj->getType())); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure getting question RRType: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) { + PyErr_SetString(PyExc_SystemError, + "Unexpected failure getting question RRType"); + } + return (NULL); +} + +PyObject* +RRset_getTTL(s_RRset* self) { + try { + return (createRRTTLObject(self->cppobj->getTTL())); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure getting question TTL: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) 
{ + PyErr_SetString(PyExc_SystemError, + "Unexpected failure getting question TTL"); + } + return (NULL); +} + +PyObject* +RRset_setName(s_RRset* self, PyObject* args) { + PyObject* name; + if (!PyArg_ParseTuple(args, "O!", &name_type, &name)) { + return (NULL); + } + self->cppobj->setName(PyName_ToName(name)); + Py_RETURN_NONE; +} + +PyObject* +RRset_setTTL(s_RRset* self, PyObject* args) { + PyObject* rrttl; + if (!PyArg_ParseTuple(args, "O!", &rrttl_type, &rrttl)) { + return (NULL); + } + self->cppobj->setTTL(PyRRTTL_ToRRTTL(rrttl)); + Py_RETURN_NONE; +} + +PyObject* +RRset_toText(s_RRset* self) { + try { + return (Py_BuildValue("s", self->cppobj->toText().c_str())); + } catch (const EmptyRRset& ers) { + PyErr_SetString(po_EmptyRRset, ers.what()); + return (NULL); + } +} + +PyObject* +RRset_str(PyObject* self) { + // Simply call the to_text method we already defined + return (PyObject_CallMethod(self, + const_cast("to_text"), + const_cast(""))); +} + +PyObject* +RRset_toWire(s_RRset* self, PyObject* args) { + PyObject* bytes; + PyObject* mr; + + try { + if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { + PyObject* bytes_o = bytes; + + OutputBuffer buffer(4096); + self->cppobj->toWire(buffer); + PyObject* n = PyBytes_FromStringAndSize(static_cast(buffer.getData()), buffer.getLength()); + PyObject* result = PySequence_InPlaceConcat(bytes_o, n); + // We need to release the object we temporarily created here + // to prevent memory leak + Py_DECREF(n); + return (result); + } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { + self->cppobj->toWire(PyMessageRenderer_ToMessageRenderer(mr)); + // If we return NULL it is seen as an error, so use this for + // None returns + Py_RETURN_NONE; + } + } catch (const EmptyRRset& ers) { + PyErr_Clear(); + PyErr_SetString(po_EmptyRRset, ers.what()); + return (NULL); + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "toWire argument must be a sequence object or a MessageRenderer"); + return (NULL); +} + +PyObject* +RRset_addRdata(s_RRset* self, PyObject* args) { + PyObject* rdata; + if (!PyArg_ParseTuple(args, "O!", &rdata_type, &rdata)) { + return (NULL); + } + try { + self->cppobj->addRdata(PyRdata_ToRdata(rdata)); + Py_RETURN_NONE; + } catch (const std::bad_cast&) { + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "Rdata type to add must match type of RRset"); + return (NULL); + } +} + +PyObject* +RRset_getRdata(s_RRset* self) { + PyObject* list = PyList_New(0); + + RdataIteratorPtr it = self->cppobj->getRdataIterator(); + + try { + for (; !it->isLast(); it->next()) { + const rdata::Rdata *rd = &it->getCurrent(); + if (PyList_Append(list, + createRdataObject(createRdata(self->cppobj->getType(), + self->cppobj->getClass(), *rd))) == -1) { + Py_DECREF(list); + return (NULL); + } + } + return (list); + } catch (const exception& ex) { + const string ex_what = + "Unexpected failure getting rrset Rdata: " + + string(ex.what()); + PyErr_SetString(po_IscException, ex_what.c_str()); + } catch (...) 
{ + PyErr_SetString(PyExc_SystemError, + "Unexpected failure getting rrset Rdata"); + } + Py_DECREF(list); + return (NULL); +} + +PyObject* +RRset_removeRRsig(s_RRset* self) { + self->cppobj->removeRRsig(); + Py_RETURN_NONE; +} + +} // end of unnamed namespace + +namespace isc { +namespace dns { +namespace python { + +// +// Declaration of the custom exceptions +// Initialization and addition of these go in the module init at the +// end +// +PyObject* po_EmptyRRset; + +PyTypeObject rrset_type = { PyVarObject_HEAD_INIT(NULL, 0) "pydnspp.RRset", sizeof(s_RRset), // tp_basicsize @@ -105,7 +353,7 @@ static PyTypeObject rrset_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call RRset_str, // tp_str NULL, // tp_getattro @@ -156,247 +404,59 @@ static PyTypeObject rrset_type = { 0 // tp_version_tag }; -static int -RRset_init(s_RRset* self, PyObject* args) { - s_Name* name; - s_RRClass* rrclass; - s_RRType* rrtype; - s_RRTTL* rrttl; +PyObject* +createRRsetObject(const RRset& source) { - if (PyArg_ParseTuple(args, "O!O!O!O!", &name_type, &name, - &rrclass_type, &rrclass, - &rrtype_type, &rrtype, - &rrttl_type, &rrttl - )) { - self->rrset = RRsetPtr(new RRset(*name->cppobj, *rrclass->rrclass, - *rrtype->rrtype, *rrttl->rrttl)); - return (0); + // RRsets are noncopyable, so as a workaround we recreate a new one + // and copy over all content + RRsetPtr new_rrset = isc::dns::RRsetPtr( + new isc::dns::RRset(source.getName(), source.getClass(), + source.getType(), source.getTTL())); + + isc::dns::RdataIteratorPtr rdata_it(source.getRdataIterator()); + for (rdata_it->first(); !rdata_it->isLast(); rdata_it->next()) { + new_rrset->addRdata(rdata_it->getCurrent()); } - self->rrset = RRsetPtr(); - return (-1); -} - -static void -RRset_destroy(s_RRset* self) { - // Clear the shared_ptr so that its reference count is zero - // before we call tp_free() (there is no direct release()) - self->rrset.reset(); - Py_TYPE(self)->tp_free(self); -} - -static PyObject* -RRset_getRdataCount(s_RRset* self) { - return (Py_BuildValue("I", self->rrset->getRdataCount())); -} - -static PyObject* -RRset_getName(s_RRset* self) { - s_Name* name; - - // is this the best way to do this? 
- name = static_cast(name_type.tp_alloc(&name_type, 0)); - if (name != NULL) { - name->cppobj = new Name(self->rrset->getName()); - if (name->cppobj == NULL) - { - Py_DECREF(name); - return (NULL); - } + isc::dns::RRsetPtr sigs = source.getRRsig(); + if (sigs) { + new_rrset->addRRsig(sigs); } - - return (name); -} - -static PyObject* -RRset_getClass(s_RRset* self) { - s_RRClass* rrclass; - - rrclass = static_cast(rrclass_type.tp_alloc(&rrclass_type, 0)); - if (rrclass != NULL) { - rrclass->rrclass = new RRClass(self->rrset->getClass()); - if (rrclass->rrclass == NULL) - { - Py_DECREF(rrclass); - return (NULL); - } + s_RRset* py_rrset = + static_cast(rrset_type.tp_alloc(&rrset_type, 0)); + if (py_rrset == NULL) { + isc_throw(PyCPPWrapperException, "Unexpected NULL C++ object, " + "probably due to short memory"); } - - return (rrclass); + py_rrset->cppobj = new_rrset; + return (py_rrset); } -static PyObject* -RRset_getType(s_RRset* self) { - s_RRType* rrtype; - - rrtype = static_cast(rrtype_type.tp_alloc(&rrtype_type, 0)); - if (rrtype != NULL) { - rrtype->rrtype = new RRType(self->rrset->getType()); - if (rrtype->rrtype == NULL) - { - Py_DECREF(rrtype); - return (NULL); - } - } - - return (rrtype); -} - -static PyObject* -RRset_getTTL(s_RRset* self) { - s_RRTTL* rrttl; - - rrttl = static_cast(rrttl_type.tp_alloc(&rrttl_type, 0)); - if (rrttl != NULL) { - rrttl->rrttl = new RRTTL(self->rrset->getTTL()); - if (rrttl->rrttl == NULL) - { - Py_DECREF(rrttl); - return (NULL); - } - } - - return (rrttl); -} - -static PyObject* -RRset_setName(s_RRset* self, PyObject* args) { - s_Name* name; - if (!PyArg_ParseTuple(args, "O!", &name_type, &name)) { - return (NULL); - } - self->rrset->setName(*name->cppobj); - Py_RETURN_NONE; -} - -static PyObject* -RRset_setTTL(s_RRset* self, PyObject* args) { - s_RRTTL* rrttl; - if (!PyArg_ParseTuple(args, "O!", &rrttl_type, &rrttl)) { - return (NULL); - } - self->rrset->setTTL(*rrttl->rrttl); - Py_RETURN_NONE; -} - -static PyObject* -RRset_toText(s_RRset* self) { - try { - return (Py_BuildValue("s", self->rrset->toText().c_str())); - } catch (const EmptyRRset& ers) { - PyErr_SetString(po_EmptyRRset, ers.what()); - return (NULL); - } -} - -static PyObject* -RRset_str(PyObject* self) { - // Simply call the to_text method we already defined - return (PyObject_CallMethod(self, - const_cast("to_text"), - const_cast(""))); -} - -static PyObject* -RRset_toWire(s_RRset* self, PyObject* args) { - PyObject* bytes; - s_MessageRenderer* mr; - - try { - if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { - PyObject* bytes_o = bytes; - - OutputBuffer buffer(4096); - self->rrset->toWire(buffer); - PyObject* n = PyBytes_FromStringAndSize(static_cast(buffer.getData()), buffer.getLength()); - PyObject* result = PySequence_InPlaceConcat(bytes_o, n); - // We need to release the object we temporarily created here - // to prevent memory leak - Py_DECREF(n); - return (result); - } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { - self->rrset->toWire(*mr->messagerenderer); - // If we return NULL it is seen as an error, so use this for - // None returns - Py_RETURN_NONE; - } - } catch (const EmptyRRset& ers) { - PyErr_Clear(); - PyErr_SetString(po_EmptyRRset, ers.what()); - return (NULL); - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "toWire argument must be a sequence object or a MessageRenderer"); - return (NULL); -} - -static PyObject* -RRset_addRdata(s_RRset* self, PyObject* args) { - s_Rdata* rdata; - if (!PyArg_ParseTuple(args, 
"O!", &rdata_type, &rdata)) { - return (NULL); - } - try { - self->rrset->addRdata(*rdata->rdata); - Py_RETURN_NONE; - } catch (const std::bad_cast&) { - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "Rdata type to add must match type of RRset"); - return (NULL); - } -} - -static PyObject* -RRset_getRdata(s_RRset* self) { - PyObject* list = PyList_New(0); - - RdataIteratorPtr it = self->rrset->getRdataIterator(); - - for (; !it->isLast(); it->next()) { - s_Rdata *rds = static_cast(rdata_type.tp_alloc(&rdata_type, 0)); - if (rds != NULL) { - // hmz them iterators/shared_ptrs and private constructors - // make this a bit weird, so we create a new one with - // the data available - const Rdata *rd = &it->getCurrent(); - rds->rdata = createRdata(self->rrset->getType(), self->rrset->getClass(), *rd); - PyList_Append(list, rds); - } else { - return (NULL); - } - } - - return (list); -} - -// end of RRset - - -// Module Initialization, all statics are initialized here bool -initModulePart_RRset(PyObject* mod) { - // Add the exceptions to the module - po_EmptyRRset = PyErr_NewException("pydnspp.EmptyRRset", NULL, NULL); - PyModule_AddObject(mod, "EmptyRRset", po_EmptyRRset); - - // Add the enums to the module - - // Add the constants to the module - - // Add the classes to the module - // We initialize the static description object with PyType_Ready(), - // then add it to the module - - // NameComparisonResult - if (PyType_Ready(&rrset_type) < 0) { - return (false); +PyRRset_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - Py_INCREF(&rrset_type); - PyModule_AddObject(mod, "RRset", - reinterpret_cast(&rrset_type)); - - return (true); + return (PyObject_TypeCheck(obj, &rrset_type)); } +RRset& +PyRRset_ToRRset(PyObject* rrset_obj) { + s_RRset* rrset = static_cast(rrset_obj); + return (*rrset->cppobj); +} + +RRsetPtr +PyRRset_ToRRsetPtr(PyObject* rrset_obj) { + if (rrset_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in RRset PyObject conversion"); + } + s_RRset* rrset = static_cast(rrset_obj); + return (rrset->cppobj); +} + + +} // end python namespace +} // end dns namespace +} // end isc namespace diff --git a/src/lib/dns/python/rrset_python.h b/src/lib/dns/python/rrset_python.h new file mode 100644 index 0000000000..4268678d92 --- /dev/null +++ b/src/lib/dns/python/rrset_python.h @@ -0,0 +1,78 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
+ +#ifndef __PYTHON_RRSET_H +#define __PYTHON_RRSET_H 1 + +#include + +#include + +#include + +namespace isc { +namespace dns { +namespace python { + +extern PyObject* po_EmptyRRset; + +extern PyTypeObject rrset_type; + +/// This is a simple shortcut to create a python RRset object (in the +/// form of a pointer to PyObject) with minimal exception safety. +/// On success, it returns a valid pointer to PyObject with a reference +/// counter of 1; if something goes wrong it throws an exception (it never +/// returns a NULL pointer). +/// This function is expected to be called within a try block +/// followed by necessary setup for python exception. +PyObject* createRRsetObject(const RRset& source); + +/// \brief Checks if the given python object is a RRset object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type RRset, false otherwise +bool PyRRset_Check(PyObject* obj); + +/// \brief Returns a reference to the RRset object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type RRset; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyRRset_Check() +/// +/// \note This is not a copy; if the RRset is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param rrset_obj The rrset object to convert +RRset& PyRRset_ToRRset(PyObject* rrset_obj); + +/// \brief Returns the shared_ptr of the RRset object contained within the +/// given Python object. +/// +/// \note The given object MUST be of type RRset; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyRRset_Check() +/// +/// \param rrset_obj The rrset object to convert +RRsetPtr PyRRset_ToRRsetPtr(PyObject* rrset_obj); + + +} // namespace python +} // namespace dns +} // namespace isc +#endif // __PYTHON_RRSET_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/python/rrttl_python.cc b/src/lib/dns/python/rrttl_python.cc index c4b25bfa30..3a3f067755 100644 --- a/src/lib/dns/python/rrttl_python.cc +++ b/src/lib/dns/python/rrttl_python.cc @@ -12,57 +12,41 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. 
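The conversion helpers declared above (createRRsetObject(), PyRRset_Check(), PyRRset_ToRRset(), PyRRset_ToRRsetPtr()) are intended for other C++ wrapper modules; from Python the wrapped class is simply pydnspp.RRset. A minimal usage sketch, restricted to the constructor and accessor forms that the updated unit tests below also exercise:

    from pydnspp import RRset, Name, RRClass, RRType, RRTTL, Rdata

    # build an A RRset for www.example.com with two addresses
    rrset = RRset(Name("www.example.com"), RRClass("IN"), RRType("A"),
                  RRTTL(3600))
    rrset.add_rdata(Rdata(RRType("A"), RRClass("IN"), "192.0.2.1"))
    rrset.add_rdata(Rdata(RRType("A"), RRClass("IN"), "192.0.2.2"))

    # get_rdata() returns the rdata as a Python list; to_text() gives
    # the master-file representation of the whole RRset
    assert len(rrset.get_rdata()) == 2
    print(rrset.to_text())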
+#include #include #include +#include +#include +#include + +#include "rrttl_python.h" +#include "pydnspp_common.h" +#include "messagerenderer_python.h" using namespace std; using namespace isc::dns; +using namespace isc::dns::python; using namespace isc::util; +using namespace isc::util::python; -// -// Declaration of the custom exceptions -// Initialization and addition of these go in the initModulePart -// function at the end of this file -// -static PyObject* po_InvalidRRTTL; -static PyObject* po_IncompleteRRTTL; - -// -// Definition of the classes -// - -// For each class, we need a struct, a helper functions (init, destroy, -// and static wrappers around the methods we export), a list of methods, -// and a type description - -// -// RRTTL -// - +namespace { // The s_* Class simply covers one instantiation of the object class s_RRTTL : public PyObject { public: - RRTTL* rrttl; + s_RRTTL() : cppobj(NULL) {}; + isc::dns::RRTTL* cppobj; }; -// -// We declare the functions here, the definitions are below -// the type definition of the object, since both can use the other -// +typedef CPPPyObjectContainer RRTTLContainer; -// General creation and destruction -static int RRTTL_init(s_RRTTL* self, PyObject* args); -static void RRTTL_destroy(s_RRTTL* self); - -// These are the functions we export -static PyObject* RRTTL_toText(s_RRTTL* self); +PyObject* RRTTL_toText(s_RRTTL* self); // This is a second version of toText, we need one where the argument // is a PyObject*, for the str() function in python. -static PyObject* RRTTL_str(PyObject* self); -static PyObject* RRTTL_toWire(s_RRTTL* self, PyObject* args); -static PyObject* RRTTL_getValue(s_RRTTL* self); -static PyObject* RRTTL_richcmp(s_RRTTL* self, s_RRTTL* other, int op); +PyObject* RRTTL_str(PyObject* self); +PyObject* RRTTL_toWire(s_RRTTL* self, PyObject* args); +PyObject* RRTTL_getValue(s_RRTTL* self); +PyObject* RRTTL_richcmp(s_RRTTL* self, s_RRTTL* other, int op); // This list contains the actual set of functions we have in // python. Each entry has @@ -70,7 +54,7 @@ static PyObject* RRTTL_richcmp(s_RRTTL* self, s_RRTTL* other, int op); // 2. Our static function here // 3. Argument type // 4. Documentation -static PyMethodDef RRTTL_methods[] = { +PyMethodDef RRTTL_methods[] = { { "to_text", reinterpret_cast(RRTTL_toText), METH_NOARGS, "Returns the string representation" }, { "to_wire", reinterpret_cast(RRTTL_toWire), METH_VARARGS, @@ -85,10 +69,174 @@ static PyMethodDef RRTTL_methods[] = { { NULL, NULL, 0, NULL } }; +int +RRTTL_init(s_RRTTL* self, PyObject* args) { + const char* s; + long long i; + PyObject* bytes = NULL; + // The constructor argument can be a string ("1234"), an integer (1), + // or a sequence of numbers between 0 and 255 (wire code) + + // Note that PyArg_ParseType can set PyError, and we need to clear + // that if we try several like here. Otherwise the *next* python + // call will suddenly appear to throw an exception. 
+ // (the way to do exceptions is to set PyErr and return -1) + try { + if (PyArg_ParseTuple(args, "s", &s)) { + self->cppobj = new RRTTL(s); + return (0); + } else if (PyArg_ParseTuple(args, "L", &i)) { + PyErr_Clear(); + if (i < 0 || i > 0xffffffff) { + PyErr_SetString(PyExc_ValueError, "RR TTL number out of range"); + return (-1); + } + self->cppobj = new RRTTL(i); + return (0); + } else if (PyArg_ParseTuple(args, "O", &bytes) && + PySequence_Check(bytes)) { + Py_ssize_t size = PySequence_Size(bytes); + vector data(size); + int result = readDataFromSequence(&data[0], size, bytes); + if (result != 0) { + return (result); + } + InputBuffer ib(&data[0], size); + self->cppobj = new RRTTL(ib); + PyErr_Clear(); + return (0); + } + } catch (const IncompleteRRTTL& icc) { + // Ok so one of our functions has thrown a C++ exception. + // We need to translate that to a Python Exception + // First clear any existing error that was set + PyErr_Clear(); + // Now set our own exception + PyErr_SetString(po_IncompleteRRTTL, icc.what()); + // And return negative + return (-1); + } catch (const InvalidRRTTL& ic) { + PyErr_Clear(); + PyErr_SetString(po_InvalidRRTTL, ic.what()); + return (-1); + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "no valid type in constructor argument"); + return (-1); +} + +void +RRTTL_destroy(s_RRTTL* self) { + delete self->cppobj; + self->cppobj = NULL; + Py_TYPE(self)->tp_free(self); +} + +PyObject* +RRTTL_toText(s_RRTTL* self) { + // Py_BuildValue makes python objects from native data + return (Py_BuildValue("s", self->cppobj->toText().c_str())); +} + +PyObject* +RRTTL_str(PyObject* self) { + // Simply call the to_text method we already defined + return (PyObject_CallMethod(self, + const_cast("to_text"), + const_cast(""))); +} + +PyObject* +RRTTL_toWire(s_RRTTL* self, PyObject* args) { + PyObject* bytes; + PyObject* mr; + + if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { + PyObject* bytes_o = bytes; + + OutputBuffer buffer(4); + self->cppobj->toWire(buffer); + PyObject* n = PyBytes_FromStringAndSize(static_cast(buffer.getData()), + buffer.getLength()); + PyObject* result = PySequence_InPlaceConcat(bytes_o, n); + // We need to release the object we temporarily created here + // to prevent memory leak + Py_DECREF(n); + return (result); + } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { + self->cppobj->toWire(PyMessageRenderer_ToMessageRenderer(mr)); + // If we return NULL it is seen as an error, so use this for + // None returns + Py_RETURN_NONE; + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "toWire argument must be a sequence object or a MessageRenderer"); + return (NULL); +} + +PyObject* +RRTTL_getValue(s_RRTTL* self) { + return (Py_BuildValue("I", self->cppobj->getValue())); +} + +PyObject* +RRTTL_richcmp(s_RRTTL* self, s_RRTTL* other, int op) { + bool c = false; + + // Check for null and if the types match. 
If different type, + // simply return False + if (!other || (self->ob_type != other->ob_type)) { + Py_RETURN_FALSE; + } + + switch (op) { + case Py_LT: + c = *self->cppobj < *other->cppobj; + break; + case Py_LE: + c = *self->cppobj < *other->cppobj || + *self->cppobj == *other->cppobj; + break; + case Py_EQ: + c = *self->cppobj == *other->cppobj; + break; + case Py_NE: + c = *self->cppobj != *other->cppobj; + break; + case Py_GT: + c = *other->cppobj < *self->cppobj; + break; + case Py_GE: + c = *other->cppobj < *self->cppobj || + *self->cppobj == *other->cppobj; + break; + } + if (c) + Py_RETURN_TRUE; + else + Py_RETURN_FALSE; +} + +} // end anonymous namespace + +namespace isc { +namespace dns { +namespace python { + +// +// Declaration of the custom exceptions +// Initialization and addition of these go in the initModulePart +// function in pydnspp.cc +// +PyObject* po_InvalidRRTTL; +PyObject* po_IncompleteRRTTL; + // This defines the complete type for reflection in python and // parsing of PyObject* to s_RRTTL // Most of the functions are not actually implemented and NULL here. -static PyTypeObject rrttl_type = { +PyTypeObject rrttl_type = { PyVarObject_HEAD_INIT(NULL, 0) "pydnspp.RRTTL", sizeof(s_RRTTL), // tp_basicsize @@ -102,7 +250,7 @@ static PyTypeObject rrttl_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call RRTTL_str, // tp_str NULL, // tp_getattro @@ -143,176 +291,31 @@ static PyTypeObject rrttl_type = { 0 // tp_version_tag }; -static int -RRTTL_init(s_RRTTL* self, PyObject* args) { - const char* s; - long long i; - PyObject* bytes = NULL; - // The constructor argument can be a string ("1234"), an integer (1), - // or a sequence of numbers between 0 and 255 (wire code) - - // Note that PyArg_ParseType can set PyError, and we need to clear - // that if we try several like here. Otherwise the *next* python - // call will suddenly appear to throw an exception. - // (the way to do exceptions is to set PyErr and return -1) - try { - if (PyArg_ParseTuple(args, "s", &s)) { - self->rrttl = new RRTTL(s); - return (0); - } else if (PyArg_ParseTuple(args, "L", &i)) { - PyErr_Clear(); - if (i < 0 || i > 0xffffffff) { - PyErr_SetString(PyExc_ValueError, "RR TTL number out of range"); - return (-1); - } - self->rrttl = new RRTTL(i); - return (0); - } else if (PyArg_ParseTuple(args, "O", &bytes) && - PySequence_Check(bytes)) { - Py_ssize_t size = PySequence_Size(bytes); - vector data(size); - int result = readDataFromSequence(&data[0], size, bytes); - if (result != 0) { - return (result); - } - InputBuffer ib(&data[0], size); - self->rrttl = new RRTTL(ib); - PyErr_Clear(); - return (0); - } - } catch (const IncompleteRRTTL& icc) { - // Ok so one of our functions has thrown a C++ exception. 
- // We need to translate that to a Python Exception - // First clear any existing error that was set - PyErr_Clear(); - // Now set our own exception - PyErr_SetString(po_IncompleteRRTTL, icc.what()); - // And return negative - return (-1); - } catch (const InvalidRRTTL& ic) { - PyErr_Clear(); - PyErr_SetString(po_InvalidRRTTL, ic.what()); - return (-1); - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "no valid type in constructor argument"); - return (-1); +PyObject* +createRRTTLObject(const RRTTL& source) { + RRTTLContainer container(PyObject_New(s_RRTTL, &rrttl_type)); + container.set(new RRTTL(source)); + return (container.release()); } -static void -RRTTL_destroy(s_RRTTL* self) { - delete self->rrttl; - self->rrttl = NULL; - Py_TYPE(self)->tp_free(self); -} - -static PyObject* -RRTTL_toText(s_RRTTL* self) { - // Py_BuildValue makes python objects from native data - return (Py_BuildValue("s", self->rrttl->toText().c_str())); -} - -static PyObject* -RRTTL_str(PyObject* self) { - // Simply call the to_text method we already defined - return (PyObject_CallMethod(self, - const_cast("to_text"), - const_cast(""))); -} - -static PyObject* -RRTTL_toWire(s_RRTTL* self, PyObject* args) { - PyObject* bytes; - s_MessageRenderer* mr; - - if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { - PyObject* bytes_o = bytes; - - OutputBuffer buffer(4); - self->rrttl->toWire(buffer); - PyObject* n = PyBytes_FromStringAndSize(static_cast(buffer.getData()), - buffer.getLength()); - PyObject* result = PySequence_InPlaceConcat(bytes_o, n); - // We need to release the object we temporarily created here - // to prevent memory leak - Py_DECREF(n); - return (result); - } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { - self->rrttl->toWire(*mr->messagerenderer); - // If we return NULL it is seen as an error, so use this for - // None returns - Py_RETURN_NONE; - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "toWire argument must be a sequence object or a MessageRenderer"); - return (NULL); -} - -static PyObject* -RRTTL_getValue(s_RRTTL* self) { - return (Py_BuildValue("I", self->rrttl->getValue())); -} - -static PyObject* -RRTTL_richcmp(s_RRTTL* self, s_RRTTL* other, int op) { - bool c = false; - - // Check for null and if the types match. If different type, - // simply return False - if (!other || (self->ob_type != other->ob_type)) { - Py_RETURN_FALSE; - } - - switch (op) { - case Py_LT: - c = *self->rrttl < *other->rrttl; - break; - case Py_LE: - c = *self->rrttl < *other->rrttl || - *self->rrttl == *other->rrttl; - break; - case Py_EQ: - c = *self->rrttl == *other->rrttl; - break; - case Py_NE: - c = *self->rrttl != *other->rrttl; - break; - case Py_GT: - c = *other->rrttl < *self->rrttl; - break; - case Py_GE: - c = *other->rrttl < *self->rrttl || - *self->rrttl == *other->rrttl; - break; - } - if (c) - Py_RETURN_TRUE; - else - Py_RETURN_FALSE; -} -// end of RRTTL - - -// Module Initialization, all statics are initialized here bool -initModulePart_RRTTL(PyObject* mod) { - // Add the exceptions to the module - po_InvalidRRTTL = PyErr_NewException("pydnspp.InvalidRRTTL", NULL, NULL); - PyModule_AddObject(mod, "InvalidRRTTL", po_InvalidRRTTL); - po_IncompleteRRTTL = PyErr_NewException("pydnspp.IncompleteRRTTL", NULL, NULL); - PyModule_AddObject(mod, "IncompleteRRTTL", po_IncompleteRRTTL); - - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! 
(leaving - // this out results in segmentation faults) - if (PyType_Ready(&rrttl_type) < 0) { - return (false); +PyRRTTL_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - Py_INCREF(&rrttl_type); - PyModule_AddObject(mod, "RRTTL", - reinterpret_cast(&rrttl_type)); - - return (true); + return (PyObject_TypeCheck(obj, &rrttl_type)); } + +const RRTTL& +PyRRTTL_ToRRTTL(const PyObject* rrttl_obj) { + if (rrttl_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in RRTTL PyObject conversion"); + } + const s_RRTTL* rrttl = static_cast(rrttl_obj); + return (*rrttl->cppobj); +} + +} // namespace python +} // namespace dns +} // namespace isc diff --git a/src/lib/dns/python/rrttl_python.h b/src/lib/dns/python/rrttl_python.h new file mode 100644 index 0000000000..9dbc9824bd --- /dev/null +++ b/src/lib/dns/python/rrttl_python.h @@ -0,0 +1,67 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_RRTTL_H +#define __PYTHON_RRTTL_H 1 + +#include + +namespace isc { +namespace dns { +class RRTTL; + +namespace python { + +extern PyObject* po_InvalidRRTTL; +extern PyObject* po_IncompleteRRTTL; + +extern PyTypeObject rrttl_type; + +/// This is a simple shortcut to create a python RRTTL object (in the +/// form of a pointer to PyObject) with minimal exception safety. +/// On success, it returns a valid pointer to PyObject with a reference +/// counter of 1; if something goes wrong it throws an exception (it never +/// returns a NULL pointer). +/// This function is expected to be called within a try block +/// followed by necessary setup for python exception. +PyObject* createRRTTLObject(const RRTTL& source); + +/// \brief Checks if the given python object is a RRTTL object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type RRTTL, false otherwise +bool PyRRTTL_Check(PyObject* obj); + +/// \brief Returns a reference to the RRTTL object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type RRTTL; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyRRTTL_Check() +/// +/// \note This is not a copy; if the RRTTL is needed when the PyObject +/// may be destroyed, the caller must copy it itself. 
+/// +/// \param rrttl_obj The rrttl object to convert +const RRTTL& PyRRTTL_ToRRTTL(const PyObject* rrttl_obj); + +} // namespace python +} // namespace dns +} // namespace isc +#endif // __PYTHON_RRTTL_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/python/rrtype_python.cc b/src/lib/dns/python/rrtype_python.cc index 00e0acd632..bf20b7cd9e 100644 --- a/src/lib/dns/python/rrtype_python.cc +++ b/src/lib/dns/python/rrtype_python.cc @@ -12,77 +12,64 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. +#include #include #include +#include +#include + +#include "rrtype_python.h" +#include "messagerenderer_python.h" +#include "pydnspp_common.h" using namespace std; using namespace isc::dns; +using namespace isc::dns::python; using namespace isc::util; +using namespace isc::util::python; -// -// Declaration of the custom exceptions -// Initialization and addition of these go in the initModulePart -// function at the end of this file -// -static PyObject* po_InvalidRRType; -static PyObject* po_IncompleteRRType; - -// -// Definition of the classes -// - -// For each class, we need a struct, a helper functions (init, destroy, -// and static wrappers around the methods we export), a list of methods, -// and a type description - -// -// RRType -// - +namespace { // The s_* Class simply covers one instantiation of the object class s_RRType : public PyObject { public: - const RRType* rrtype; + const RRType* cppobj; }; -// -// We declare the functions here, the definitions are below -// the type definition of the object, since both can use the other -// - // General creation and destruction -static int RRType_init(s_RRType* self, PyObject* args); -static void RRType_destroy(s_RRType* self); +int RRType_init(s_RRType* self, PyObject* args); +void RRType_destroy(s_RRType* self); // These are the functions we export -static PyObject* +PyObject* RRType_toText(s_RRType* self); // This is a second version of toText, we need one where the argument // is a PyObject*, for the str() function in python. 
-static PyObject* RRType_str(PyObject* self); -static PyObject* RRType_toWire(s_RRType* self, PyObject* args); -static PyObject* RRType_getCode(s_RRType* self); -static PyObject* RRType_richcmp(s_RRType* self, s_RRType* other, int op); -static PyObject* RRType_NSEC3PARAM(s_RRType *self); -static PyObject* RRType_DNAME(s_RRType *self); -static PyObject* RRType_PTR(s_RRType *self); -static PyObject* RRType_MX(s_RRType *self); -static PyObject* RRType_DNSKEY(s_RRType *self); -static PyObject* RRType_TXT(s_RRType *self); -static PyObject* RRType_RRSIG(s_RRType *self); -static PyObject* RRType_NSEC(s_RRType *self); -static PyObject* RRType_AAAA(s_RRType *self); -static PyObject* RRType_DS(s_RRType *self); -static PyObject* RRType_OPT(s_RRType *self); -static PyObject* RRType_A(s_RRType *self); -static PyObject* RRType_NS(s_RRType *self); -static PyObject* RRType_CNAME(s_RRType *self); -static PyObject* RRType_SOA(s_RRType *self); -static PyObject* RRType_NSEC3(s_RRType *self); -static PyObject* RRType_IXFR(s_RRType *self); -static PyObject* RRType_AXFR(s_RRType *self); -static PyObject* RRType_ANY(s_RRType *self); +PyObject* RRType_str(PyObject* self); +PyObject* RRType_toWire(s_RRType* self, PyObject* args); +PyObject* RRType_getCode(s_RRType* self); +PyObject* RRType_richcmp(s_RRType* self, s_RRType* other, int op); +PyObject* RRType_NSEC3PARAM(s_RRType *self); +PyObject* RRType_DNAME(s_RRType *self); +PyObject* RRType_PTR(s_RRType *self); +PyObject* RRType_MX(s_RRType *self); +PyObject* RRType_DNSKEY(s_RRType *self); +PyObject* RRType_TXT(s_RRType *self); +PyObject* RRType_RRSIG(s_RRType *self); +PyObject* RRType_NSEC(s_RRType *self); +PyObject* RRType_AAAA(s_RRType *self); +PyObject* RRType_DS(s_RRType *self); +PyObject* RRType_OPT(s_RRType *self); +PyObject* RRType_A(s_RRType *self); +PyObject* RRType_NS(s_RRType *self); +PyObject* RRType_CNAME(s_RRType *self); +PyObject* RRType_SOA(s_RRType *self); +PyObject* RRType_NSEC3(s_RRType *self); +PyObject* RRType_IXFR(s_RRType *self); +PyObject* RRType_AXFR(s_RRType *self); +PyObject* RRType_ANY(s_RRType *self); + +typedef CPPPyObjectContainer RRTypeContainer; // This list contains the actual set of functions we have in // python. Each entry has @@ -90,7 +77,7 @@ static PyObject* RRType_ANY(s_RRType *self); // 2. Our static function here // 3. Argument type // 4. Documentation -static PyMethodDef RRType_methods[] = { +PyMethodDef RRType_methods[] = { { "to_text", reinterpret_cast(RRType_toText), METH_NOARGS, "Returns the string representation" }, { "to_wire", reinterpret_cast(RRType_toWire), METH_VARARGS, @@ -124,10 +111,276 @@ static PyMethodDef RRType_methods[] = { { NULL, NULL, 0, NULL } }; +int +RRType_init(s_RRType* self, PyObject* args) { + const char* s; + long i; + PyObject* bytes = NULL; + // The constructor argument can be a string ("A"), an integer (1), + // or a sequence of numbers between 0 and 65535 (wire code) + + // Note that PyArg_ParseType can set PyError, and we need to clear + // that if we try several like here. Otherwise the *next* python + // call will suddenly appear to throw an exception. 
+ // (the way to do exceptions is to set PyErr and return -1) + try { + if (PyArg_ParseTuple(args, "s", &s)) { + self->cppobj = new RRType(s); + return (0); + } else if (PyArg_ParseTuple(args, "l", &i)) { + PyErr_Clear(); + if (i < 0 || i > 0xffff) { + PyErr_SetString(PyExc_ValueError, "RR Type number out of range"); + return (-1); + } + self->cppobj = new RRType(i); + return (0); + } else if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { + Py_ssize_t size = PySequence_Size(bytes); + vector data(size); + int result = readDataFromSequence(&data[0], size, bytes); + if (result != 0) { + return (result); + } + InputBuffer ib(&data[0], size); + self->cppobj = new RRType(ib); + PyErr_Clear(); + return (0); + } + } catch (const IncompleteRRType& icc) { + // Ok so one of our functions has thrown a C++ exception. + // We need to translate that to a Python Exception + // First clear any existing error that was set + PyErr_Clear(); + // Now set our own exception + PyErr_SetString(po_IncompleteRRType, icc.what()); + // And return negative + return (-1); + } catch (const InvalidRRType& ic) { + PyErr_Clear(); + PyErr_SetString(po_InvalidRRType, ic.what()); + return (-1); + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "no valid type in constructor argument"); + return (-1); +} + +void +RRType_destroy(s_RRType* self) { + delete self->cppobj; + self->cppobj = NULL; + Py_TYPE(self)->tp_free(self); +} + +PyObject* +RRType_toText(s_RRType* self) { + // Py_BuildValue makes python objects from native data + return (Py_BuildValue("s", self->cppobj->toText().c_str())); +} + +PyObject* +RRType_str(PyObject* self) { + // Simply call the to_text method we already defined + return (PyObject_CallMethod(self, const_cast("to_text"), + const_cast(""))); +} + +PyObject* +RRType_toWire(s_RRType* self, PyObject* args) { + PyObject* bytes; + PyObject* mr; + + if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { + PyObject* bytes_o = bytes; + + OutputBuffer buffer(2); + self->cppobj->toWire(buffer); + PyObject* n = PyBytes_FromStringAndSize(static_cast(buffer.getData()), buffer.getLength()); + PyObject* result = PySequence_InPlaceConcat(bytes_o, n); + // We need to release the object we temporarily created here + // to prevent memory leak + Py_DECREF(n); + return (result); + } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { + self->cppobj->toWire(PyMessageRenderer_ToMessageRenderer(mr)); + // If we return NULL it is seen as an error, so use this for + // None returns + Py_RETURN_NONE; + } + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "toWire argument must be a sequence object or a MessageRenderer"); + return (NULL); +} + +PyObject* +RRType_getCode(s_RRType* self) { + return (Py_BuildValue("I", self->cppobj->getCode())); +} + +PyObject* +RRType_richcmp(s_RRType* self, s_RRType* other, int op) { + bool c; + + // Check for null and if the types match. 
If different type, + // simply return False + if (!other || (self->ob_type != other->ob_type)) { + Py_RETURN_FALSE; + } + + switch (op) { + case Py_LT: + c = *self->cppobj < *other->cppobj; + break; + case Py_LE: + c = *self->cppobj < *other->cppobj || + *self->cppobj == *other->cppobj; + break; + case Py_EQ: + c = *self->cppobj == *other->cppobj; + break; + case Py_NE: + c = *self->cppobj != *other->cppobj; + break; + case Py_GT: + c = *other->cppobj < *self->cppobj; + break; + case Py_GE: + c = *other->cppobj < *self->cppobj || + *self->cppobj == *other->cppobj; + break; + default: + PyErr_SetString(PyExc_IndexError, + "Unhandled rich comparison operator"); + return (NULL); + } + if (c) + Py_RETURN_TRUE; + else + Py_RETURN_FALSE; +} + +// +// Common function for RRType_A/NS/etc. +// +PyObject* RRType_createStatic(RRType stc) { + s_RRType* ret = PyObject_New(s_RRType, &rrtype_type); + if (ret != NULL) { + ret->cppobj = new RRType(stc); + } + return (ret); +} + +PyObject* +RRType_NSEC3PARAM(s_RRType*) { + return (RRType_createStatic(RRType::NSEC3PARAM())); +} + +PyObject* +RRType_DNAME(s_RRType*) { + return (RRType_createStatic(RRType::DNAME())); +} + +PyObject* +RRType_PTR(s_RRType*) { + return (RRType_createStatic(RRType::PTR())); +} + +PyObject* +RRType_MX(s_RRType*) { + return (RRType_createStatic(RRType::MX())); +} + +PyObject* +RRType_DNSKEY(s_RRType*) { + return (RRType_createStatic(RRType::DNSKEY())); +} + +PyObject* +RRType_TXT(s_RRType*) { + return (RRType_createStatic(RRType::TXT())); +} + +PyObject* +RRType_RRSIG(s_RRType*) { + return (RRType_createStatic(RRType::RRSIG())); +} + +PyObject* +RRType_NSEC(s_RRType*) { + return (RRType_createStatic(RRType::NSEC())); +} + +PyObject* +RRType_AAAA(s_RRType*) { + return (RRType_createStatic(RRType::AAAA())); +} + +PyObject* +RRType_DS(s_RRType*) { + return (RRType_createStatic(RRType::DS())); +} + +PyObject* +RRType_OPT(s_RRType*) { + return (RRType_createStatic(RRType::OPT())); +} + +PyObject* +RRType_A(s_RRType*) { + return (RRType_createStatic(RRType::A())); +} + +PyObject* +RRType_NS(s_RRType*) { + return (RRType_createStatic(RRType::NS())); +} + +PyObject* +RRType_CNAME(s_RRType*) { + return (RRType_createStatic(RRType::CNAME())); +} + +PyObject* +RRType_SOA(s_RRType*) { + return (RRType_createStatic(RRType::SOA())); +} + +PyObject* +RRType_NSEC3(s_RRType*) { + return (RRType_createStatic(RRType::NSEC3())); +} + +PyObject* +RRType_IXFR(s_RRType*) { + return (RRType_createStatic(RRType::IXFR())); +} + +PyObject* +RRType_AXFR(s_RRType*) { + return (RRType_createStatic(RRType::AXFR())); +} + +PyObject* +RRType_ANY(s_RRType*) { + return (RRType_createStatic(RRType::ANY())); +} + +} // end anonymous namespace + +namespace isc { +namespace dns { +namespace python { + +PyObject* po_InvalidRRType; +PyObject* po_IncompleteRRType; + // This defines the complete type for reflection in python and // parsing of PyObject* to s_RRType // Most of the functions are not actually implemented and NULL here. 
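RRType follows the same pattern: the initializer takes a mnemonic string, a 16-bit integer, or wire-format data, and numbers outside 0..0xffff are reported as ValueError. A brief sketch (get_code() is assumed to be the Python name bound to RRType_getCode):

    from pydnspp import RRType

    assert RRType("TXT") == RRType(16)   # mnemonic and code forms compare equal
    assert RRType("AAAA").get_code() == 28
    assert str(RRType("NS")) == "NS"     # str() delegates to to_text()

    # numbers above 0xffff are rejected by RRType_init
    try:
        RRType(65536)
    except ValueError:
        pass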
-static PyTypeObject rrtype_type = { +PyTypeObject rrtype_type = { PyVarObject_HEAD_INIT(NULL, 0) "pydnspp.RRType", sizeof(s_RRType), // tp_basicsize @@ -141,7 +394,7 @@ static PyTypeObject rrtype_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call RRType_str, // tp_str NULL, // tp_getattro @@ -180,285 +433,32 @@ static PyTypeObject rrtype_type = { 0 // tp_version_tag }; -static int -RRType_init(s_RRType* self, PyObject* args) { - const char* s; - long i; - PyObject* bytes = NULL; - // The constructor argument can be a string ("A"), an integer (1), - // or a sequence of numbers between 0 and 65535 (wire code) - - // Note that PyArg_ParseType can set PyError, and we need to clear - // that if we try several like here. Otherwise the *next* python - // call will suddenly appear to throw an exception. - // (the way to do exceptions is to set PyErr and return -1) - try { - if (PyArg_ParseTuple(args, "s", &s)) { - self->rrtype = new RRType(s); - return (0); - } else if (PyArg_ParseTuple(args, "l", &i)) { - PyErr_Clear(); - if (i < 0 || i > 0xffff) { - PyErr_SetString(PyExc_ValueError, "RR Type number out of range"); - return (-1); - } - self->rrtype = new RRType(i); - return (0); - } else if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { - Py_ssize_t size = PySequence_Size(bytes); - vector data(size); - int result = readDataFromSequence(&data[0], size, bytes); - if (result != 0) { - return (result); - } - InputBuffer ib(&data[0], size); - self->rrtype = new RRType(ib); - PyErr_Clear(); - return (0); - } - } catch (const IncompleteRRType& icc) { - // Ok so one of our functions has thrown a C++ exception. - // We need to translate that to a Python Exception - // First clear any existing error that was set - PyErr_Clear(); - // Now set our own exception - PyErr_SetString(po_IncompleteRRType, icc.what()); - // And return negative - return (-1); - } catch (const InvalidRRType& ic) { - PyErr_Clear(); - PyErr_SetString(po_InvalidRRType, ic.what()); - return (-1); - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "no valid type in constructor argument"); - return (-1); +PyObject* +createRRTypeObject(const RRType& source) { + RRTypeContainer container(PyObject_New(s_RRType, &rrtype_type)); + container.set(new RRType(source)); + return (container.release()); } -static void -RRType_destroy(s_RRType* self) { - delete self->rrtype; - self->rrtype = NULL; - Py_TYPE(self)->tp_free(self); -} - -static PyObject* -RRType_toText(s_RRType* self) { - // Py_BuildValue makes python objects from native data - return (Py_BuildValue("s", self->rrtype->toText().c_str())); -} - -static PyObject* -RRType_str(PyObject* self) { - // Simply call the to_text method we already defined - return (PyObject_CallMethod(self, const_cast("to_text"), - const_cast(""))); -} - -static PyObject* -RRType_toWire(s_RRType* self, PyObject* args) { - PyObject* bytes; - s_MessageRenderer* mr; - - if (PyArg_ParseTuple(args, "O", &bytes) && PySequence_Check(bytes)) { - PyObject* bytes_o = bytes; - - OutputBuffer buffer(2); - self->rrtype->toWire(buffer); - PyObject* n = PyBytes_FromStringAndSize(static_cast(buffer.getData()), buffer.getLength()); - PyObject* result = PySequence_InPlaceConcat(bytes_o, n); - // We need to release the object we temporarily created here - // to prevent memory leak - Py_DECREF(n); - return (result); - } else if (PyArg_ParseTuple(args, "O!", &messagerenderer_type, &mr)) { - 
self->rrtype->toWire(*mr->messagerenderer); - // If we return NULL it is seen as an error, so use this for - // None returns - Py_RETURN_NONE; - } - PyErr_Clear(); - PyErr_SetString(PyExc_TypeError, - "toWire argument must be a sequence object or a MessageRenderer"); - return (NULL); -} - -static PyObject* -RRType_getCode(s_RRType* self) { - return (Py_BuildValue("I", self->rrtype->getCode())); -} - -static PyObject* -RRType_richcmp(s_RRType* self, s_RRType* other, int op) { - bool c; - - // Check for null and if the types match. If different type, - // simply return False - if (!other || (self->ob_type != other->ob_type)) { - Py_RETURN_FALSE; - } - - switch (op) { - case Py_LT: - c = *self->rrtype < *other->rrtype; - break; - case Py_LE: - c = *self->rrtype < *other->rrtype || - *self->rrtype == *other->rrtype; - break; - case Py_EQ: - c = *self->rrtype == *other->rrtype; - break; - case Py_NE: - c = *self->rrtype != *other->rrtype; - break; - case Py_GT: - c = *other->rrtype < *self->rrtype; - break; - case Py_GE: - c = *other->rrtype < *self->rrtype || - *self->rrtype == *other->rrtype; - break; - default: - PyErr_SetString(PyExc_IndexError, - "Unhandled rich comparison operator"); - return (NULL); - } - if (c) - Py_RETURN_TRUE; - else - Py_RETURN_FALSE; -} - -// -// Common function for RRType_A/NS/etc. -// -static PyObject* RRType_createStatic(RRType stc) { - s_RRType* ret = PyObject_New(s_RRType, &rrtype_type); - if (ret != NULL) { - ret->rrtype = new RRType(stc); - } - return (ret); -} - -static PyObject* -RRType_NSEC3PARAM(s_RRType*) { - return (RRType_createStatic(RRType::NSEC3PARAM())); -} - -static PyObject* -RRType_DNAME(s_RRType*) { - return (RRType_createStatic(RRType::DNAME())); -} - -static PyObject* -RRType_PTR(s_RRType*) { - return (RRType_createStatic(RRType::PTR())); -} - -static PyObject* -RRType_MX(s_RRType*) { - return (RRType_createStatic(RRType::MX())); -} - -static PyObject* -RRType_DNSKEY(s_RRType*) { - return (RRType_createStatic(RRType::DNSKEY())); -} - -static PyObject* -RRType_TXT(s_RRType*) { - return (RRType_createStatic(RRType::TXT())); -} - -static PyObject* -RRType_RRSIG(s_RRType*) { - return (RRType_createStatic(RRType::RRSIG())); -} - -static PyObject* -RRType_NSEC(s_RRType*) { - return (RRType_createStatic(RRType::NSEC())); -} - -static PyObject* -RRType_AAAA(s_RRType*) { - return (RRType_createStatic(RRType::AAAA())); -} - -static PyObject* -RRType_DS(s_RRType*) { - return (RRType_createStatic(RRType::DS())); -} - -static PyObject* -RRType_OPT(s_RRType*) { - return (RRType_createStatic(RRType::OPT())); -} - -static PyObject* -RRType_A(s_RRType*) { - return (RRType_createStatic(RRType::A())); -} - -static PyObject* -RRType_NS(s_RRType*) { - return (RRType_createStatic(RRType::NS())); -} - -static PyObject* -RRType_CNAME(s_RRType*) { - return (RRType_createStatic(RRType::CNAME())); -} - -static PyObject* -RRType_SOA(s_RRType*) { - return (RRType_createStatic(RRType::SOA())); -} - -static PyObject* -RRType_NSEC3(s_RRType*) { - return (RRType_createStatic(RRType::NSEC3())); -} - -static PyObject* -RRType_IXFR(s_RRType*) { - return (RRType_createStatic(RRType::IXFR())); -} - -static PyObject* -RRType_AXFR(s_RRType*) { - return (RRType_createStatic(RRType::AXFR())); -} - -static PyObject* -RRType_ANY(s_RRType*) { - return (RRType_createStatic(RRType::ANY())); -} - - -// end of RRType - - -// Module Initialization, all statics are initialized here bool -initModulePart_RRType(PyObject* mod) { - // Add the exceptions to the module - po_InvalidRRType = 
PyErr_NewException("pydnspp.InvalidRRType", NULL, NULL); - PyModule_AddObject(mod, "InvalidRRType", po_InvalidRRType); - po_IncompleteRRType = PyErr_NewException("pydnspp.IncompleteRRType", NULL, NULL); - PyModule_AddObject(mod, "IncompleteRRType", po_IncompleteRRType); - - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! (leaving - // this out results in segmentation faults) - if (PyType_Ready(&rrtype_type) < 0) { - return (false); +PyRRType_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - Py_INCREF(&rrtype_type); - PyModule_AddObject(mod, "RRType", - reinterpret_cast(&rrtype_type)); - - return (true); + return (PyObject_TypeCheck(obj, &rrtype_type)); } + +const RRType& +PyRRType_ToRRType(const PyObject* rrtype_obj) { + if (rrtype_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in RRType PyObject conversion"); + } + const s_RRType* rrtype = static_cast(rrtype_obj); + return (*rrtype->cppobj); +} + + +} // end namespace python +} // end namespace dns +} // end namespace isc diff --git a/src/lib/dns/python/rrtype_python.h b/src/lib/dns/python/rrtype_python.h new file mode 100644 index 0000000000..596598e002 --- /dev/null +++ b/src/lib/dns/python/rrtype_python.h @@ -0,0 +1,68 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_RRTYPE_H +#define __PYTHON_RRTYPE_H 1 + +#include + +namespace isc { +namespace dns { +class RRType; + +namespace python { + +extern PyObject* po_InvalidRRType; +extern PyObject* po_IncompleteRRType; + +extern PyTypeObject rrtype_type; + +/// This is a simple shortcut to create a python RRType object (in the +/// form of a pointer to PyObject) with minimal exception safety. +/// On success, it returns a valid pointer to PyObject with a reference +/// counter of 1; if something goes wrong it throws an exception (it never +/// returns a NULL pointer). +/// This function is expected to be called within a try block +/// followed by necessary setup for python exception. +PyObject* createRRTypeObject(const RRType& source); + +/// \brief Checks if the given python object is a RRType object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type RRType, false otherwise +bool PyRRType_Check(PyObject* obj); + +/// \brief Returns a reference to the RRType object contained within the given +/// Python object. 
+/// +/// \note The given object MUST be of type RRType; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyRRType_Check() +/// +/// \note This is not a copy; if the RRType is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param rrtype_obj The rrtype object to convert +const RRType& PyRRType_ToRRType(const PyObject* rrtype_obj); + + +} // namespace python +} // namespace dns +} // namespace isc +#endif // __PYTHON_RRTYPE_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/python/tests/Makefile.am b/src/lib/dns/python/tests/Makefile.am index 61d7df6e6a..d1273f3551 100644 --- a/src/lib/dns/python/tests/Makefile.am +++ b/src/lib/dns/python/tests/Makefile.am @@ -24,7 +24,7 @@ EXTRA_DIST += testutil.py # required by loadable python modules. LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS diff --git a/src/lib/dns/python/tests/message_python_test.py b/src/lib/dns/python/tests/message_python_test.py index 41b9a67903..8f2d7323f2 100644 --- a/src/lib/dns/python/tests/message_python_test.py +++ b/src/lib/dns/python/tests/message_python_test.py @@ -21,6 +21,7 @@ import unittest import os from pydnspp import * from testutil import * +from pyunittests_util import fix_current_time # helper functions for tests taken from c++ unittests if "TESTDATA_PATH" in os.environ: @@ -28,10 +29,10 @@ if "TESTDATA_PATH" in os.environ: else: testdata_path = "../tests/testdata" -def factoryFromFile(message, file): +def factoryFromFile(message, file, parse_options=Message.PARSE_DEFAULT): data = read_wire_data(file) - message.from_wire(data) - pass + message.from_wire(data, parse_options) + return data # we don't have direct comparison for rrsets right now (should we? 
# should go in the cpp version first then), so also no direct list @@ -44,6 +45,15 @@ def compare_rrset_list(list1, list2): return False return True +# These are used for TSIG + TC tests +LONG_TXT1 = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcde"; + +LONG_TXT2 = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456"; + +LONG_TXT3 = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01"; + +LONG_TXT4 = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0"; + # a complete message taken from cpp tests, for testing towire and totext def create_message(): message_render = Message(Message.RENDER) @@ -62,16 +72,12 @@ def create_message(): message_render.add_rrset(Message.SECTION_ANSWER, rrset) return message_render -def strip_mutable_tsig_data(data): - # Unfortunately we cannot easily compare TSIG RR because we can't tweak - # current time. As a work around this helper function strips off the time - # dependent part of TSIG RDATA, i.e., the MAC (assuming HMAC-MD5) and - # Time Signed. - return data[0:-32] + data[-26:-22] + data[-6:] - class MessageTest(unittest.TestCase): def setUp(self): + # make sure we don't use faked time unless explicitly do so in tests + fix_current_time(None) + self.p = Message(Message.PARSE) self.r = Message(Message.RENDER) @@ -90,6 +96,10 @@ class MessageTest(unittest.TestCase): self.tsig_key = TSIGKey("www.example.com:SFuWd/q99SzF8Yzd1QbB9g==") self.tsig_ctx = TSIGContext(self.tsig_key) + def tearDown(self): + # reset any faked current time setting (it would affect other tests) + fix_current_time(None) + def test_init(self): self.assertRaises(TypeError, Message, -1) self.assertRaises(TypeError, Message, 3) @@ -285,33 +295,112 @@ class MessageTest(unittest.TestCase): self.assertRaises(InvalidMessageOperation, self.r.to_wire, MessageRenderer()) - def __common_tsigquery_setup(self): + def __common_tsigmessage_setup(self, flags=[Message.HEADERFLAG_RD], + rrtype=RRType("A"), answer_data=None): self.r.set_opcode(Opcode.QUERY()) self.r.set_rcode(Rcode.NOERROR()) - self.r.set_header_flag(Message.HEADERFLAG_RD) + for flag in flags: + self.r.set_header_flag(flag) + if answer_data is not None: + rrset = RRset(Name("www.example.com"), RRClass("IN"), + rrtype, RRTTL(86400)) + for rdata in answer_data: + rrset.add_rdata(Rdata(rrtype, RRClass("IN"), rdata)) + self.r.add_rrset(Message.SECTION_ANSWER, rrset) self.r.add_question(Question(Name("www.example.com"), - RRClass("IN"), RRType("A"))) + RRClass("IN"), rrtype)) def __common_tsig_checks(self, expected_file): renderer = MessageRenderer() self.r.to_wire(renderer, self.tsig_ctx) - actual_wire = strip_mutable_tsig_data(renderer.get_data()) - expected_wire = strip_mutable_tsig_data(read_wire_data(expected_file)) - self.assertEqual(expected_wire, actual_wire) + self.assertEqual(read_wire_data(expected_file), renderer.get_data()) def test_to_wire_with_tsig(self): + fix_current_time(0x4da8877a) self.r.set_qid(0x2d65) - self.__common_tsigquery_setup() + self.__common_tsigmessage_setup() self.__common_tsig_checks("message_toWire2.wire") def 
test_to_wire_with_edns_tsig(self): + fix_current_time(0x4db60d1f) self.r.set_qid(0x6cd) - self.__common_tsigquery_setup() + self.__common_tsigmessage_setup() edns = EDNS() edns.set_udp_size(4096) self.r.set_edns(edns) self.__common_tsig_checks("message_toWire3.wire") + def test_to_wire_tsig_truncation(self): + fix_current_time(0x4e179212) + data = factoryFromFile(self.p, "message_fromWire17.wire") + self.assertEqual(TSIGError.NOERROR, + self.tsig_ctx.verify(self.p.get_tsig_record(), data)) + self.r.set_qid(0x22c2) + self.__common_tsigmessage_setup([Message.HEADERFLAG_QR, + Message.HEADERFLAG_AA, + Message.HEADERFLAG_RD], + RRType("TXT"), + [LONG_TXT1, LONG_TXT2]) + self.__common_tsig_checks("message_toWire4.wire") + + def test_to_wire_tsig_truncation2(self): + fix_current_time(0x4e179212) + data = factoryFromFile(self.p, "message_fromWire17.wire") + self.assertEqual(TSIGError.NOERROR, + self.tsig_ctx.verify(self.p.get_tsig_record(), data)) + self.r.set_qid(0x22c2) + self.__common_tsigmessage_setup([Message.HEADERFLAG_QR, + Message.HEADERFLAG_AA, + Message.HEADERFLAG_RD], + RRType("TXT"), + [LONG_TXT1, LONG_TXT3]) + self.__common_tsig_checks("message_toWire4.wire") + + def test_to_wire_tsig_truncation3(self): + self.r.set_opcode(Opcode.QUERY()) + self.r.set_rcode(Rcode.NOERROR()) + for i in range(1, 68): + self.r.add_question(Question(Name("www.example.com"), + RRClass("IN"), RRType(i))) + renderer = MessageRenderer() + self.r.to_wire(renderer, self.tsig_ctx) + + self.p.from_wire(renderer.get_data()) + self.assertTrue(self.p.get_header_flag(Message.HEADERFLAG_TC)) + self.assertEqual(66, self.p.get_rr_count(Message.SECTION_QUESTION)) + self.assertNotEqual(None, self.p.get_tsig_record()) + + def test_to_wire_tsig_no_truncation(self): + fix_current_time(0x4e17b38d) + data = factoryFromFile(self.p, "message_fromWire18.wire") + self.assertEqual(TSIGError.NOERROR, + self.tsig_ctx.verify(self.p.get_tsig_record(), data)) + self.r.set_qid(0xd6e2) + self.__common_tsigmessage_setup([Message.HEADERFLAG_QR, + Message.HEADERFLAG_AA, + Message.HEADERFLAG_RD], + RRType("TXT"), + [LONG_TXT1, LONG_TXT4]) + self.__common_tsig_checks("message_toWire5.wire") + + def test_to_wire_tsig_length_errors(self): + renderer = MessageRenderer() + renderer.set_length_limit(84) # 84 = expected TSIG length - 1 + self.__common_tsigmessage_setup() + self.assertRaises(TSIGContextError, + self.r.to_wire, renderer, self.tsig_ctx) + + renderer.clear() + self.r.clear(Message.RENDER) + renderer.set_length_limit(86) # 86 = expected TSIG length + 1 + self.__common_tsigmessage_setup() + self.assertRaises(TSIGContextError, + self.r.to_wire, renderer, self.tsig_ctx) + + # skip the last test of the corresponding C++ test: it requires + # subclassing MessageRenderer, which is (currently) not possible + # for python. In any case, it's very unlikely to happen in practice. + def test_to_text(self): message_render = create_message() @@ -377,6 +466,54 @@ test.example.com. 
3600 IN A 192.0.2.2 self.assertEqual("192.0.2.2", rdata[1].to_text()) self.assertEqual(2, len(rdata)) + def test_from_wire_short_buffer(self): + data = read_wire_data("message_fromWire22.wire") + self.assertRaises(DNSMessageFORMERR, self.p.from_wire, data[:-1]) + + def test_from_wire_combind_rrs(self): + factoryFromFile(self.p, "message_fromWire19.wire") + rrset = self.p.get_section(Message.SECTION_ANSWER)[0] + self.assertEqual(RRType("A"), rrset.get_type()) + self.assertEqual(2, len(rrset.get_rdata())) + + rrset = self.p.get_section(Message.SECTION_ANSWER)[1] + self.assertEqual(RRType("AAAA"), rrset.get_type()) + self.assertEqual(1, len(rrset.get_rdata())) + + def check_preserve_rrs(self, message, section): + rrset = message.get_section(section)[0] + self.assertEqual(RRType("A"), rrset.get_type()) + rdata = rrset.get_rdata() + self.assertEqual(1, len(rdata)) + self.assertEqual('192.0.2.1', rdata[0].to_text()) + + rrset = message.get_section(section)[1] + self.assertEqual(RRType("AAAA"), rrset.get_type()) + rdata = rrset.get_rdata() + self.assertEqual(1, len(rdata)) + self.assertEqual('2001:db8::1', rdata[0].to_text()) + + rrset = message.get_section(section)[2] + self.assertEqual(RRType("A"), rrset.get_type()) + rdata = rrset.get_rdata() + self.assertEqual(1, len(rdata)) + self.assertEqual('192.0.2.2', rdata[0].to_text()) + + def test_from_wire_preserve_answer(self): + factoryFromFile(self.p, "message_fromWire19.wire", + Message.PRESERVE_ORDER) + self.check_preserve_rrs(self.p, Message.SECTION_ANSWER) + + def test_from_wire_preserve_authority(self): + factoryFromFile(self.p, "message_fromWire20.wire", + Message.PRESERVE_ORDER) + self.check_preserve_rrs(self.p, Message.SECTION_AUTHORITY) + + def test_from_wire_preserve_additional(self): + factoryFromFile(self.p, "message_fromWire21.wire", + Message.PRESERVE_ORDER) + self.check_preserve_rrs(self.p, Message.SECTION_ADDITIONAL) + def test_EDNS0ExtCode(self): # Extended Rcode = BADVERS message_parse = Message(Message.PARSE) diff --git a/src/lib/dns/python/tests/question_python_test.py b/src/lib/dns/python/tests/question_python_test.py index 69e3051933..8c8c81580e 100644 --- a/src/lib/dns/python/tests/question_python_test.py +++ b/src/lib/dns/python/tests/question_python_test.py @@ -74,7 +74,6 @@ class QuestionTest(unittest.TestCase): self.assertEqual("foo.example.com. IN NS\n", str(self.test_question1)) self.assertEqual("bar.example.com. 
CH A\n", self.test_question2.to_text()) - def test_to_wire_buffer(self): obuffer = bytes() obuffer = self.test_question1.to_wire(obuffer) @@ -82,7 +81,6 @@ class QuestionTest(unittest.TestCase): wiredata = read_wire_data("question_toWire1") self.assertEqual(obuffer, wiredata) - def test_to_wire_renderer(self): renderer = MessageRenderer() self.test_question1.to_wire(renderer) @@ -91,5 +89,13 @@ class QuestionTest(unittest.TestCase): self.assertEqual(renderer.get_data(), wiredata) self.assertRaises(TypeError, self.test_question1.to_wire, 1) + def test_to_wire_truncated(self): + renderer = MessageRenderer() + renderer.set_length_limit(self.example_name1.get_length()) + self.assertFalse(renderer.is_truncated()) + self.test_question1.to_wire(renderer) + self.assertTrue(renderer.is_truncated()) + self.assertEqual(0, renderer.get_length()) + if __name__ == '__main__': unittest.main() diff --git a/src/lib/dns/python/tsig_python.cc b/src/lib/dns/python/tsig_python.cc index db93a086d1..0764e3375e 100644 --- a/src/lib/dns/python/tsig_python.cc +++ b/src/lib/dns/python/tsig_python.cc @@ -37,23 +37,18 @@ using namespace isc::util::python; using namespace isc::dns; using namespace isc::dns::python; -// -// Definition of the classes -// - // For each class, we need a struct, a helper functions (init, destroy, // and static wrappers around the methods we export), a list of methods, // and a type description -// -// TSIGContext -// - -// Trivial constructor. -s_TSIGContext::s_TSIGContext() : cppobj(NULL) { -} - namespace { +// The s_* Class simply covers one instantiation of the object +class s_TSIGContext : public PyObject { +public: + s_TSIGContext() : cppobj(NULL) {}; + TSIGContext* cppobj; +}; + // Shortcut type which would be convenient for adding class variables safely. 
typedef CPPPyObjectContainer TSIGContextContainer; @@ -101,23 +96,23 @@ int TSIGContext_init(s_TSIGContext* self, PyObject* args) { try { // "From key" constructor - const s_TSIGKey* tsigkey_obj; + const PyObject* tsigkey_obj; if (PyArg_ParseTuple(args, "O!", &tsigkey_type, &tsigkey_obj)) { - self->cppobj = new TSIGContext(*tsigkey_obj->cppobj); + self->cppobj = new TSIGContext(PyTSIGKey_ToTSIGKey(tsigkey_obj)); return (0); } // "From key param + keyring" constructor PyErr_Clear(); - const s_Name* keyname_obj; - const s_Name* algname_obj; - const s_TSIGKeyRing* keyring_obj; + const PyObject* keyname_obj; + const PyObject* algname_obj; + const PyObject* keyring_obj; if (PyArg_ParseTuple(args, "O!O!O!", &name_type, &keyname_obj, &name_type, &algname_obj, &tsigkeyring_type, &keyring_obj)) { - self->cppobj = new TSIGContext(*keyname_obj->cppobj, - *algname_obj->cppobj, - *keyring_obj->cppobj); + self->cppobj = new TSIGContext(PyName_ToName(keyname_obj), + PyName_ToName(algname_obj), + PyTSIGKeyRing_ToTSIGKeyRing(keyring_obj)); return (0); } } catch (const exception& ex) { @@ -153,7 +148,7 @@ PyObject* TSIGContext_getError(s_TSIGContext* self) { try { PyObjectContainer container(createTSIGErrorObject( - self->cppobj->getError())); + self->cppobj->getError())); return (Py_BuildValue("O", container.get())); } catch (const exception& ex) { const string ex_what = @@ -205,13 +200,13 @@ PyObject* TSIGContext_verify(s_TSIGContext* self, PyObject* args) { const char* data; Py_ssize_t data_len; - s_TSIGRecord* py_record; + PyObject* py_record; PyObject* py_maybe_none; - TSIGRecord* record; + const TSIGRecord* record; if (PyArg_ParseTuple(args, "O!y#", &tsigrecord_type, &py_record, &data, &data_len)) { - record = py_record->cppobj; + record = &PyTSIGRecord_ToTSIGRecord(py_record); } else if (PyArg_ParseTuple(args, "Oy#", &py_maybe_none, &data, &data_len)) { record = NULL; @@ -264,7 +259,7 @@ PyTypeObject tsigcontext_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call NULL, // tp_str NULL, // tp_getattro @@ -307,58 +302,24 @@ PyTypeObject tsigcontext_type = { 0 // tp_version_tag }; -// Module Initialization, all statics are initialized here bool -initModulePart_TSIGContext(PyObject* mod) { - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! 
(leaving - // this out results in segmentation faults) - if (PyType_Ready(&tsigcontext_type) < 0) { - return (false); +PyTSIGContext_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - void* p = &tsigcontext_type; - if (PyModule_AddObject(mod, "TSIGContext", - static_cast(p)) < 0) { - return (false); - } - Py_INCREF(&tsigcontext_type); - - try { - // Class specific exceptions - po_TSIGContextError = PyErr_NewException("pydnspp.TSIGContextError", - po_IscException, NULL); - PyObjectContainer(po_TSIGContextError).installToModule( - mod, "TSIGContextError"); - - // Constant class variables - installClassVariable(tsigcontext_type, "STATE_INIT", - Py_BuildValue("I", TSIGContext::INIT)); - installClassVariable(tsigcontext_type, "STATE_SENT_REQUEST", - Py_BuildValue("I", TSIGContext::SENT_REQUEST)); - installClassVariable(tsigcontext_type, "STATE_RECEIVED_REQUEST", - Py_BuildValue("I", TSIGContext::RECEIVED_REQUEST)); - installClassVariable(tsigcontext_type, "STATE_SENT_RESPONSE", - Py_BuildValue("I", TSIGContext::SENT_RESPONSE)); - installClassVariable(tsigcontext_type, "STATE_VERIFIED_RESPONSE", - Py_BuildValue("I", - TSIGContext::VERIFIED_RESPONSE)); - - installClassVariable(tsigcontext_type, "DEFAULT_FUDGE", - Py_BuildValue("H", TSIGContext::DEFAULT_FUDGE)); - } catch (const exception& ex) { - const string ex_what = - "Unexpected failure in TSIGContext initialization: " + - string(ex.what()); - PyErr_SetString(po_IscException, ex_what.c_str()); - return (false); - } catch (...) { - PyErr_SetString(PyExc_SystemError, - "Unexpected failure in TSIGContext initialization"); - return (false); - } - - return (true); + return (PyObject_TypeCheck(obj, &tsigcontext_type)); } + +TSIGContext& +PyTSIGContext_ToTSIGContext(PyObject* tsigcontext_obj) { + if (tsigcontext_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in TSIGContext PyObject conversion"); + } + s_TSIGContext* tsigcontext = static_cast(tsigcontext_obj); + return (*tsigcontext->cppobj); +} + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/python/tsig_python.h b/src/lib/dns/python/tsig_python.h index f9b4f7b94a..e4e9ffffcd 100644 --- a/src/lib/dns/python/tsig_python.h +++ b/src/lib/dns/python/tsig_python.h @@ -23,19 +23,31 @@ class TSIGContext; namespace python { -// The s_* Class simply covers one instantiation of the object -class s_TSIGContext : public PyObject { -public: - s_TSIGContext(); - TSIGContext* cppobj; -}; - extern PyTypeObject tsigcontext_type; // Class specific exceptions extern PyObject* po_TSIGContextError; -bool initModulePart_TSIGContext(PyObject* mod); +/// \brief Checks if the given python object is a TSIGContext object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type TSIGContext, false otherwise +bool PyTSIGContext_Check(PyObject* obj); + +/// \brief Returns a reference to the TSIGContext object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type TSIGContext; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyTSIGContext_Check() +/// +/// \note This is not a copy; if the TSIGContext is needed when the PyObject +/// may be destroyed, the caller must copy it itself. 
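+///
+/// A minimal usage sketch (illustrative only; the enclosing binding
+/// function and the variable name "obj" are assumptions, not part of
+/// this patch):
+/// \code
+///     if (!PyTSIGContext_Check(obj)) {
+///         // not a TSIGContext object: set a Python TypeError and bail out
+///     }
+///     TSIGContext& ctx = PyTSIGContext_ToTSIGContext(obj);
+///     // use ctx here; copy it if it must outlive obj
+/// \endcode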
+/// +/// \param tsigcontext_obj The tsigcontext object to convert +TSIGContext& PyTSIGContext_ToTSIGContext(PyObject* tsigcontext_obj); + } // namespace python } // namespace dns diff --git a/src/lib/dns/python/tsig_rdata_python.cc b/src/lib/dns/python/tsig_rdata_python.cc index 4e4f2879ed..6ec0f0999f 100644 --- a/src/lib/dns/python/tsig_rdata_python.cc +++ b/src/lib/dns/python/tsig_rdata_python.cc @@ -12,6 +12,7 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. +#define PY_SSIZE_T_CLEAN #include #include @@ -32,23 +33,19 @@ using namespace isc::dns; using namespace isc::dns::rdata; using namespace isc::dns::python; -// -// Definition of the classes -// - // For each class, we need a struct, a helper functions (init, destroy, // and static wrappers around the methods we export), a list of methods, // and a type description -// -// TSIG RDATA -// - -// Trivial constructor. -s_TSIG::s_TSIG() : cppobj(NULL) { -} - namespace { +// The s_* Class simply covers one instantiation of the object +class s_TSIG : public PyObject { +public: + s_TSIG() : cppobj(NULL) {}; + const rdata::any::TSIG* cppobj; +}; + + // Shortcut type which would be convenient for adding class variables safely. typedef CPPPyObjectContainer TSIGContainer; @@ -235,7 +232,7 @@ TSIG_toWire(const s_TSIG* const self, PyObject* args) { self, args)); } -PyObject* +PyObject* TSIG_richcmp(const s_TSIG* const self, const s_TSIG* const other, const int op) @@ -302,7 +299,7 @@ PyTypeObject tsig_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call TSIG_str, // tp_str NULL, // tp_getattro @@ -340,30 +337,31 @@ PyTypeObject tsig_type = { 0 // tp_version_tag }; -// Module Initialization, all statics are initialized here -bool -initModulePart_TSIG(PyObject* mod) { - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! 
(leaving - // this out results in segmentation faults) - if (PyType_Ready(&tsig_type) < 0) { - return (false); - } - void* p = &tsig_type; - if (PyModule_AddObject(mod, "TSIG", static_cast(p)) < 0) { - return (false); - } - Py_INCREF(&tsig_type); - - return (true); -} - PyObject* createTSIGObject(const any::TSIG& source) { - TSIGContainer container = PyObject_New(s_TSIG, &tsig_type); + TSIGContainer container(PyObject_New(s_TSIG, &tsig_type)); container.set(new any::TSIG(source)); return (container.release()); } + +bool +PyTSIG_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); + } + return (PyObject_TypeCheck(obj, &tsig_type)); +} + +const any::TSIG& +PyTSIG_ToTSIG(const PyObject* tsig_obj) { + if (tsig_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in TSIG PyObject conversion"); + } + const s_TSIG* tsig = static_cast(tsig_obj); + return (*tsig->cppobj); +} + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/python/tsig_rdata_python.h b/src/lib/dns/python/tsig_rdata_python.h index e5e0c6cbb0..a84d9e89a0 100644 --- a/src/lib/dns/python/tsig_rdata_python.h +++ b/src/lib/dns/python/tsig_rdata_python.h @@ -27,17 +27,8 @@ class TSIG; namespace python { -// The s_* Class simply covers one instantiation of the object -class s_TSIG : public PyObject { -public: - s_TSIG(); - const rdata::any::TSIG* cppobj; -}; - extern PyTypeObject tsig_type; -bool initModulePart_TSIG(PyObject* mod); - /// This is A simple shortcut to create a python TSIG object (in the /// form of a pointer to PyObject) with minimal exception safety. /// On success, it returns a valid pointer to PyObject with a reference @@ -47,6 +38,26 @@ bool initModulePart_TSIG(PyObject* mod); /// followed by necessary setup for python exception. PyObject* createTSIGObject(const rdata::any::TSIG& source); +/// \brief Checks if the given python object is a TSIG object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type TSIG, false otherwise +bool PyTSIG_Check(PyObject* obj); + +/// \brief Returns a reference to the TSIG object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type TSIG; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyTSIG_Check() +/// +/// \note This is not a copy; if the TSIG is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param tsig_obj The tsig object to convert +const rdata::any::TSIG& PyTSIG_ToTSIG(const PyObject* tsig_obj); + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/python/tsigerror_python.cc b/src/lib/dns/python/tsigerror_python.cc index 0ad471649a..7a0217e3a1 100644 --- a/src/lib/dns/python/tsigerror_python.cc +++ b/src/lib/dns/python/tsigerror_python.cc @@ -30,26 +30,21 @@ using namespace isc::util::python; using namespace isc::dns; using namespace isc::dns::python; -// -// Definition of the classes -// - // For each class, we need a struct, a helper functions (init, destroy, // and static wrappers around the methods we export), a list of methods, // and a type description -// -// TSIGError -// - -// Trivial constructor. 
-s_TSIGError::s_TSIGError() : cppobj(NULL) { -} - // Import pydoc text #include "tsigerror_python_inc.cc" namespace { +// The s_* Class simply covers one instantiation of the object +class s_TSIGError : public PyObject { +public: + s_TSIGError() : cppobj(NULL) {}; + const TSIGError* cppobj; +}; + // Shortcut type which would be convenient for adding class variables safely. typedef CPPPyObjectContainer TSIGErrorContainer; @@ -107,9 +102,9 @@ TSIGError_init(s_TSIGError* self, PyObject* args) { // Constructor from Rcode PyErr_Clear(); - s_Rcode* py_rcode; + PyObject* py_rcode; if (PyArg_ParseTuple(args, "O!", &rcode_type, &py_rcode)) { - self->cppobj = new TSIGError(*py_rcode->cppobj); + self->cppobj = new TSIGError(PyRcode_ToRcode(py_rcode)); return (0); } } catch (const isc::OutOfRange& ex) { @@ -172,13 +167,8 @@ TSIGError_str(PyObject* self) { PyObject* TSIGError_toRcode(const s_TSIGError* const self) { - typedef CPPPyObjectContainer RcodePyObjectContainer; - try { - RcodePyObjectContainer rcode_container(PyObject_New(s_Rcode, - &rcode_type)); - rcode_container.set(new Rcode(self->cppobj->toRcode())); - return (rcode_container.release()); + return (createRcodeObject(self->cppobj->toRcode())); } catch (const exception& ex) { const string ex_what = "Failed to convert TSIGError to Rcode: " + string(ex.what()); @@ -190,7 +180,7 @@ TSIGError_toRcode(const s_TSIGError* const self) { return (NULL); } -PyObject* +PyObject* TSIGError_richcmp(const s_TSIGError* const self, const s_TSIGError* const other, const int op) @@ -252,7 +242,7 @@ PyTypeObject tsigerror_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call // THIS MAY HAVE TO BE CHANGED TO NULL: TSIGError_str, // tp_str @@ -290,78 +280,9 @@ PyTypeObject tsigerror_type = { 0 // tp_version_tag }; -namespace { -// Trivial shortcut to create and install TSIGError constants. -inline void -installTSIGErrorConstant(const char* name, const TSIGError& val) { - TSIGErrorContainer container(PyObject_New(s_TSIGError, &tsigerror_type)); - container.installAsClassVariable(tsigerror_type, name, new TSIGError(val)); -} -} - -// Module Initialization, all statics are initialized here -bool -initModulePart_TSIGError(PyObject* mod) { - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! 
(leaving - // this out results in segmentation faults) - if (PyType_Ready(&tsigerror_type) < 0) { - return (false); - } - void* p = &tsigerror_type; - if (PyModule_AddObject(mod, "TSIGError", static_cast(p)) < 0) { - return (false); - } - Py_INCREF(&tsigerror_type); - - try { - // Constant class variables - // Error codes (bare values) - installClassVariable(tsigerror_type, "BAD_SIG_CODE", - Py_BuildValue("H", TSIGError::BAD_SIG_CODE)); - installClassVariable(tsigerror_type, "BAD_KEY_CODE", - Py_BuildValue("H", TSIGError::BAD_KEY_CODE)); - installClassVariable(tsigerror_type, "BAD_TIME_CODE", - Py_BuildValue("H", TSIGError::BAD_TIME_CODE)); - - // Error codes (constant objects) - installTSIGErrorConstant("NOERROR", TSIGError::NOERROR()); - installTSIGErrorConstant("FORMERR", TSIGError::FORMERR()); - installTSIGErrorConstant("SERVFAIL", TSIGError::SERVFAIL()); - installTSIGErrorConstant("NXDOMAIN", TSIGError::NXDOMAIN()); - installTSIGErrorConstant("NOTIMP", TSIGError::NOTIMP()); - installTSIGErrorConstant("REFUSED", TSIGError::REFUSED()); - installTSIGErrorConstant("YXDOMAIN", TSIGError::YXDOMAIN()); - installTSIGErrorConstant("YXRRSET", TSIGError::YXRRSET()); - installTSIGErrorConstant("NXRRSET", TSIGError::NXRRSET()); - installTSIGErrorConstant("NOTAUTH", TSIGError::NOTAUTH()); - installTSIGErrorConstant("NOTZONE", TSIGError::NOTZONE()); - installTSIGErrorConstant("RESERVED11", TSIGError::RESERVED11()); - installTSIGErrorConstant("RESERVED12", TSIGError::RESERVED12()); - installTSIGErrorConstant("RESERVED13", TSIGError::RESERVED13()); - installTSIGErrorConstant("RESERVED14", TSIGError::RESERVED14()); - installTSIGErrorConstant("RESERVED15", TSIGError::RESERVED15()); - installTSIGErrorConstant("BAD_SIG", TSIGError::BAD_SIG()); - installTSIGErrorConstant("BAD_KEY", TSIGError::BAD_KEY()); - installTSIGErrorConstant("BAD_TIME", TSIGError::BAD_TIME()); - } catch (const exception& ex) { - const string ex_what = - "Unexpected failure in TSIGError initialization: " + - string(ex.what()); - PyErr_SetString(po_IscException, ex_what.c_str()); - return (false); - } catch (...) { - PyErr_SetString(PyExc_SystemError, - "Unexpected failure in TSIGError initialization"); - return (false); - } - - return (true); -} - PyObject* createTSIGErrorObject(const TSIGError& source) { - TSIGErrorContainer container = PyObject_New(s_TSIGError, &tsigerror_type); + TSIGErrorContainer container(PyObject_New(s_TSIGError, &tsigerror_type)); container.set(new TSIGError(source)); return (container.release()); } diff --git a/src/lib/dns/python/tsigerror_python.h b/src/lib/dns/python/tsigerror_python.h index 735a48007f..0b5b630ace 100644 --- a/src/lib/dns/python/tsigerror_python.h +++ b/src/lib/dns/python/tsigerror_python.h @@ -23,17 +23,8 @@ class TSIGError; namespace python { -// The s_* Class simply covers one instantiation of the object -class s_TSIGError : public PyObject { -public: - s_TSIGError(); - const TSIGError* cppobj; -}; - extern PyTypeObject tsigerror_type; -bool initModulePart_TSIGError(PyObject* mod); - /// This is A simple shortcut to create a python TSIGError object (in the /// form of a pointer to PyObject) with minimal exception safety. /// On success, it returns a valid pointer to PyObject with a reference @@ -42,6 +33,7 @@ bool initModulePart_TSIGError(PyObject* mod); /// This function is expected to be called with in a try block /// followed by necessary setup for python exception. 
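///
/// A hedged sketch of the expected call pattern (TSIGContext_getError() in
/// tsig_python.cc is a real caller; the names below are illustrative):
/// \code
///     try {
///         PyObjectContainer container(createTSIGErrorObject(error));
///         return (Py_BuildValue("O", container.get()));
///     } catch (const std::exception& ex) {
///         // set a Python exception (e.g. SystemError) and return (NULL)
///     }
/// \endcode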
PyObject* createTSIGErrorObject(const TSIGError& source); + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/python/tsigkey_python.cc b/src/lib/dns/python/tsigkey_python.cc index f0906cb449..cf79c1aa82 100644 --- a/src/lib/dns/python/tsigkey_python.cc +++ b/src/lib/dns/python/tsigkey_python.cc @@ -31,10 +31,6 @@ using namespace isc::util::python; using namespace isc::dns; using namespace isc::dns::python; -// -// Definition of the classes -// - // For each class, we need a struct, a helper functions (init, destroy, // and static wrappers around the methods we export), a list of methods, // and a type description @@ -43,11 +39,14 @@ using namespace isc::dns::python; // TSIGKey // -// The s_* Class simply covers one instantiation of the object - -s_TSIGKey::s_TSIGKey() : cppobj(NULL) {} - namespace { +// The s_* Class simply covers one instantiation of the object +class s_TSIGKey : public PyObject { +public: + s_TSIGKey() : cppobj(NULL) {}; + TSIGKey* cppobj; +}; + // // We declare the functions here, the definitions are below // the type definition of the object, since both can use the other @@ -96,8 +95,8 @@ TSIGKey_init(s_TSIGKey* self, PyObject* args) { } PyErr_Clear(); - const s_Name* key_name; - const s_Name* algorithm_name; + const PyObject* key_name; + const PyObject* algorithm_name; PyObject* bytes_obj; const char* secret; Py_ssize_t secret_len; @@ -107,8 +106,8 @@ TSIGKey_init(s_TSIGKey* self, PyObject* args) { if (secret_len == 0) { secret = NULL; } - self->cppobj = new TSIGKey(*key_name->cppobj, - *algorithm_name->cppobj, + self->cppobj = new TSIGKey(PyName_ToName(key_name), + PyName_ToName(algorithm_name), secret, secret_len); return (0); } @@ -196,7 +195,7 @@ PyTypeObject tsigkey_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call NULL, // tp_str NULL, // tp_getattro @@ -233,49 +232,20 @@ PyTypeObject tsigkey_type = { 0 // tp_version_tag }; -// Module Initialization, all statics are initialized here bool -initModulePart_TSIGKey(PyObject* mod) { - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! (leaving - // this out results in segmentation faults) - if (PyType_Ready(&tsigkey_type) < 0) { - return (false); +PyTSIGKey_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - void* p = &tsigkey_type; - if (PyModule_AddObject(mod, "TSIGKey", static_cast(p)) != 0) { - return (false); - } - Py_INCREF(&tsigkey_type); - - try { - // Constant class variables - installClassVariable(tsigkey_type, "HMACMD5_NAME", - createNameObject(TSIGKey::HMACMD5_NAME())); - installClassVariable(tsigkey_type, "HMACSHA1_NAME", - createNameObject(TSIGKey::HMACSHA1_NAME())); - installClassVariable(tsigkey_type, "HMACSHA256_NAME", - createNameObject(TSIGKey::HMACSHA256_NAME())); - installClassVariable(tsigkey_type, "HMACSHA224_NAME", - createNameObject(TSIGKey::HMACSHA224_NAME())); - installClassVariable(tsigkey_type, "HMACSHA384_NAME", - createNameObject(TSIGKey::HMACSHA384_NAME())); - installClassVariable(tsigkey_type, "HMACSHA512_NAME", - createNameObject(TSIGKey::HMACSHA512_NAME())); - } catch (const exception& ex) { - const string ex_what = - "Unexpected failure in TSIGKey initialization: " + - string(ex.what()); - PyErr_SetString(po_IscException, ex_what.c_str()); - return (false); - } catch (...) 
{ - PyErr_SetString(PyExc_SystemError, - "Unexpected failure in TSIGKey initialization"); - return (false); - } - - return (true); + return (PyObject_TypeCheck(obj, &tsigkey_type)); } + +const TSIGKey& +PyTSIGKey_ToTSIGKey(const PyObject* tsigkey_obj) { + const s_TSIGKey* tsigkey = static_cast(tsigkey_obj); + return (*tsigkey->cppobj); +} + } // namespace python } // namespace dns } // namespace isc @@ -287,13 +257,14 @@ initModulePart_TSIGKey(PyObject* mod) { // TSIGKeyRing // -// The s_* Class simply covers one instantiation of the object - -// The s_* Class simply covers one instantiation of the object - -s_TSIGKeyRing::s_TSIGKeyRing() : cppobj(NULL) {} - namespace { +// The s_* Class simply covers one instantiation of the object +class s_TSIGKeyRing : public PyObject { +public: + s_TSIGKeyRing() : cppobj(NULL) {}; + TSIGKeyRing* cppobj; +}; + // // We declare the functions here, the definitions are below // the type definition of the object, since both can use the other @@ -329,7 +300,7 @@ TSIGKeyRing_init(s_TSIGKeyRing* self, PyObject* args) { "Invalid arguments to TSIGKeyRing constructor"); return (-1); } - + self->cppobj = new(nothrow) TSIGKeyRing(); if (self->cppobj == NULL) { PyErr_SetString(po_IscException, "Allocating TSIGKeyRing failed"); @@ -354,7 +325,7 @@ TSIGKeyRing_size(const s_TSIGKeyRing* const self) { PyObject* TSIGKeyRing_add(const s_TSIGKeyRing* const self, PyObject* args) { s_TSIGKey* tsigkey; - + if (PyArg_ParseTuple(args, "O!", &tsigkey_type, &tsigkey)) { try { const TSIGKeyRing::Result result = @@ -374,11 +345,11 @@ TSIGKeyRing_add(const s_TSIGKeyRing* const self, PyObject* args) { PyObject* TSIGKeyRing_remove(const s_TSIGKeyRing* self, PyObject* args) { - s_Name* key_name; + PyObject* key_name; if (PyArg_ParseTuple(args, "O!", &name_type, &key_name)) { const TSIGKeyRing::Result result = - self->cppobj->remove(*key_name->cppobj); + self->cppobj->remove(PyName_ToName(key_name)); return (Py_BuildValue("I", result)); } @@ -390,13 +361,14 @@ TSIGKeyRing_remove(const s_TSIGKeyRing* self, PyObject* args) { PyObject* TSIGKeyRing_find(const s_TSIGKeyRing* self, PyObject* args) { - s_Name* key_name; - s_Name* algorithm_name; + PyObject* key_name; + PyObject* algorithm_name; if (PyArg_ParseTuple(args, "O!O!", &name_type, &key_name, &name_type, &algorithm_name)) { const TSIGKeyRing::FindResult result = - self->cppobj->find(*key_name->cppobj, *algorithm_name->cppobj); + self->cppobj->find(PyName_ToName(key_name), + PyName_ToName(algorithm_name)); if (result.key != NULL) { s_TSIGKey* key = PyObject_New(s_TSIGKey, &tsigkey_type); if (key == NULL) { @@ -436,7 +408,7 @@ PyTypeObject tsigkeyring_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call NULL, // tp_str NULL, // tp_getattro @@ -473,27 +445,24 @@ PyTypeObject tsigkeyring_type = { }; bool -initModulePart_TSIGKeyRing(PyObject* mod) { - if (PyType_Ready(&tsigkeyring_type) < 0) { - return (false); +PyTSIGKeyRing_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); } - Py_INCREF(&tsigkeyring_type); - void* p = &tsigkeyring_type; - if (PyModule_AddObject(mod, "TSIGKeyRing", - static_cast(p)) != 0) { - Py_DECREF(&tsigkeyring_type); - return (false); - } - - addClassVariable(tsigkeyring_type, "SUCCESS", - Py_BuildValue("I", TSIGKeyRing::SUCCESS)); - addClassVariable(tsigkeyring_type, "EXIST", - Py_BuildValue("I", TSIGKeyRing::EXIST)); - addClassVariable(tsigkeyring_type, "NOTFOUND", - 
Py_BuildValue("I", TSIGKeyRing::NOTFOUND)); - - return (true); + return (PyObject_TypeCheck(obj, &tsigkeyring_type)); } + +const TSIGKeyRing& +PyTSIGKeyRing_ToTSIGKeyRing(const PyObject* tsigkeyring_obj) { + if (tsigkeyring_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in TSIGKeyRing PyObject conversion"); + } + const s_TSIGKeyRing* tsigkeyring = + static_cast(tsigkeyring_obj); + return (*tsigkeyring->cppobj); +} + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/python/tsigkey_python.h b/src/lib/dns/python/tsigkey_python.h index 51b3ae7a0c..6c3d2e3e92 100644 --- a/src/lib/dns/python/tsigkey_python.h +++ b/src/lib/dns/python/tsigkey_python.h @@ -24,24 +24,46 @@ class TSIGKeyRing; namespace python { -// The s_* Class simply covers one instantiation of the object -class s_TSIGKey : public PyObject { -public: - s_TSIGKey(); - TSIGKey* cppobj; -}; - -class s_TSIGKeyRing : public PyObject { -public: - s_TSIGKeyRing(); - TSIGKeyRing* cppobj; -}; - extern PyTypeObject tsigkey_type; extern PyTypeObject tsigkeyring_type; -bool initModulePart_TSIGKey(PyObject* mod); -bool initModulePart_TSIGKeyRing(PyObject* mod); +/// \brief Checks if the given python object is a TSIGKey object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type TSIGKey, false otherwise +bool PyTSIGKey_Check(PyObject* obj); + +/// \brief Returns a reference to the TSIGKey object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type TSIGKey; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyTSIGKey_Check() +/// +/// \note This is not a copy; if the TSIGKey is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param tsigkey_obj The tsigkey object to convert +const TSIGKey& PyTSIGKey_ToTSIGKey(const PyObject* tsigkey_obj); + +/// \brief Checks if the given python object is a TSIGKeyRing object +/// +/// \param obj The object to check the type of +/// \return true if the object is of type TSIGKeyRing, false otherwise +bool PyTSIGKeyRing_Check(PyObject* obj); + +/// \brief Returns a reference to the TSIGKeyRing object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type TSIGKeyRing; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyTSIGKeyRing_Check() +/// +/// \note This is not a copy; if the TSIGKeyRing is needed when the PyObject +/// may be destroyed, the caller must copy it itself. +/// +/// \param tsigkeyring_obj The tsigkeyring object to convert +const TSIGKeyRing& PyTSIGKeyRing_ToTSIGKeyRing(const PyObject* tsigkeyring_obj); } // namespace python } // namespace dns diff --git a/src/lib/dns/python/tsigrecord_python.cc b/src/lib/dns/python/tsigrecord_python.cc index 8a78b5e3a4..c754dd2c82 100644 --- a/src/lib/dns/python/tsigrecord_python.cc +++ b/src/lib/dns/python/tsigrecord_python.cc @@ -32,10 +32,6 @@ using namespace isc::util::python; using namespace isc::dns; using namespace isc::dns::python; -// -// Definition of the classes -// - // For each class, we need a struct, a helper functions (init, destroy, // and static wrappers around the methods we export), a list of methods, // and a type description @@ -44,11 +40,14 @@ using namespace isc::dns::python; // TSIGRecord // -// Trivial constructor. 
-s_TSIGRecord::s_TSIGRecord() : cppobj(NULL) { -} - namespace { +// The s_* Class simply covers one instantiation of the object +class s_TSIGRecord : public PyObject { +public: + s_TSIGRecord() : cppobj(NULL) {}; + TSIGRecord* cppobj; +}; + // Shortcut type which would be convenient for adding class variables safely. typedef CPPPyObjectContainer TSIGRecordContainer; @@ -102,11 +101,12 @@ PyMethodDef TSIGRecord_methods[] = { int TSIGRecord_init(s_TSIGRecord* self, PyObject* args) { try { - const s_Name* py_name; - const s_TSIG* py_tsig; + const PyObject* py_name; + const PyObject* py_tsig; if (PyArg_ParseTuple(args, "O!O!", &name_type, &py_name, &tsig_type, &py_tsig)) { - self->cppobj = new TSIGRecord(*py_name->cppobj, *py_tsig->cppobj); + self->cppobj = new TSIGRecord(PyName_ToName(py_name), + PyTSIG_ToTSIG(py_tsig)); return (0); } } catch (const exception& ex) { @@ -226,7 +226,7 @@ PyTypeObject tsigrecord_type = { NULL, // tp_as_number NULL, // tp_as_sequence NULL, // tp_as_mapping - NULL, // tp_hash + NULL, // tp_hash NULL, // tp_call TSIGRecord_str, // tp_str NULL, // tp_getattro @@ -262,50 +262,32 @@ PyTypeObject tsigrecord_type = { 0 // tp_version_tag }; -// Module Initialization, all statics are initialized here -bool -initModulePart_TSIGRecord(PyObject* mod) { - // We initialize the static description object with PyType_Ready(), - // then add it to the module. This is not just a check! (leaving - // this out results in segmentation faults) - if (PyType_Ready(&tsigrecord_type) < 0) { - return (false); - } - void* p = &tsigrecord_type; - if (PyModule_AddObject(mod, "TSIGRecord", static_cast(p)) < 0) { - return (false); - } - Py_INCREF(&tsigrecord_type); - - // The following template is the typical procedure for installing class - // variables. If the class doesn't have a class variable, remove the - // entire try-catch clauses. - try { - // Constant class variables - installClassVariable(tsigrecord_type, "TSIG_TTL", - Py_BuildValue("I", 0)); - } catch (const exception& ex) { - const string ex_what = - "Unexpected failure in TSIGRecord initialization: " + - string(ex.what()); - PyErr_SetString(po_IscException, ex_what.c_str()); - return (false); - } catch (...) 
{ - PyErr_SetString(PyExc_SystemError, - "Unexpected failure in TSIGRecord initialization"); - return (false); - } - - return (true); -} - PyObject* createTSIGRecordObject(const TSIGRecord& source) { - TSIGRecordContainer container = PyObject_New(s_TSIGRecord, - &tsigrecord_type); + TSIGRecordContainer container(PyObject_New(s_TSIGRecord, &tsigrecord_type)); container.set(new TSIGRecord(source)); return (container.release()); } + +bool +PyTSIGRecord_Check(PyObject* obj) { + if (obj == NULL) { + isc_throw(PyCPPWrapperException, "obj argument NULL in typecheck"); + } + return (PyObject_TypeCheck(obj, &tsigrecord_type)); +} + +const TSIGRecord& +PyTSIGRecord_ToTSIGRecord(PyObject* tsigrecord_obj) { + if (tsigrecord_obj == NULL) { + isc_throw(PyCPPWrapperException, + "obj argument NULL in TSIGRecord PyObject conversion"); + } + s_TSIGRecord* tsigrecord = static_cast(tsigrecord_obj); + return (*tsigrecord->cppobj); +} + + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/python/tsigrecord_python.h b/src/lib/dns/python/tsigrecord_python.h index e0a3526a13..d6252e13fb 100644 --- a/src/lib/dns/python/tsigrecord_python.h +++ b/src/lib/dns/python/tsigrecord_python.h @@ -23,17 +23,9 @@ class TSIGRecord; namespace python { -// The s_* Class simply covers one instantiation of the object -class s_TSIGRecord : public PyObject { -public: - s_TSIGRecord(); - TSIGRecord* cppobj; -}; extern PyTypeObject tsigrecord_type; -bool initModulePart_TSIGRecord(PyObject* mod); - /// This is A simple shortcut to create a python TSIGRecord object (in the /// form of a pointer to PyObject) with minimal exception safety. /// On success, it returns a valid pointer to PyObject with a reference @@ -43,6 +35,26 @@ bool initModulePart_TSIGRecord(PyObject* mod); /// followed by necessary setup for python exception. PyObject* createTSIGRecordObject(const TSIGRecord& source); +/// \brief Checks if the given python object is a TSIGRecord object +/// +/// \exception PyCPPWrapperException if obj is NULL +/// +/// \param obj The object to check the type of +/// \return true if the object is of type TSIGRecord, false otherwise +bool PyTSIGRecord_Check(PyObject* obj); + +/// \brief Returns a reference to the TSIGRecord object contained within the given +/// Python object. +/// +/// \note The given object MUST be of type TSIGRecord; this can be checked with +/// either the right call to ParseTuple("O!"), or with PyTSIGRecord_Check() +/// +/// \note This is not a copy; if the TSIGRecord is needed when the PyObject +/// may be destroyed, the caller must copy it itself. 
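+///
+/// Illustrative only: the typical pairing with ParseTuple("O!") looks like
+/// the following (TSIGContext_verify() in tsig_python.cc is a real caller;
+/// "args" and "py_record" are assumed names):
+/// \code
+///     PyObject* py_record;
+///     if (PyArg_ParseTuple(args, "O!", &tsigrecord_type, &py_record)) {
+///         const TSIGRecord& record = PyTSIGRecord_ToTSIGRecord(py_record);
+///         // record stays valid only as long as py_record is alive
+///     }
+/// \endcode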
+/// +/// \param rrtype_obj The rrtype object to convert +const TSIGRecord& PyTSIGRecord_ToTSIGRecord(PyObject* tsigrecord_obj); + } // namespace python } // namespace dns } // namespace isc diff --git a/src/lib/dns/question.cc b/src/lib/dns/question.cc index 96e2a9c895..6ccb164ed1 100644 --- a/src/lib/dns/question.cc +++ b/src/lib/dns/question.cc @@ -57,10 +57,19 @@ Question::toWire(OutputBuffer& buffer) const { unsigned int Question::toWire(AbstractMessageRenderer& renderer) const { + const size_t pos0 = renderer.getLength(); + renderer.writeName(name_); rrtype_.toWire(renderer); rrclass_.toWire(renderer); + // Make sure the renderer has a room for the question + if (renderer.getLength() > renderer.getLengthLimit()) { + renderer.trim(renderer.getLength() - pos0); + renderer.setTruncated(); + return (0); + } + return (1); // number of "entries" } diff --git a/src/lib/dns/question.h b/src/lib/dns/question.h index b3f3d98356..5d2783b6f0 100644 --- a/src/lib/dns/question.h +++ b/src/lib/dns/question.h @@ -201,23 +201,23 @@ public: /// class description). /// /// The owner name will be compressed if possible, although it's an - /// unlikely event in practice because the %Question section a DNS + /// unlikely event in practice because the Question section a DNS /// message normally doesn't contain multiple question entries and /// it's located right after the Header section. /// Nevertheless, \c renderer records the information of the owner name /// so that it can be pointed by other RRs in other sections (which is /// more likely to happen). /// - /// In theory, an attempt to render a Question may cause truncation - /// (when the Question section contains a large number of entries), - /// but this implementation doesn't catch that situation. - /// It would make the code unnecessarily complicated (though perhaps - /// slightly) for almost impossible case in practice. - /// An upper layer will handle the pathological case as a general error. + /// It could be possible, though very rare in practice, that + /// an attempt to render a Question may cause truncation + /// (when the Question section contains a large number of entries). + /// In such a case this method avoid the rendering and indicate the + /// truncation in the \c renderer. This method returns 0 in this case. /// /// \param renderer DNS message rendering context that encapsulates the /// output buffer and name compression information. - /// \return 1 + /// + /// \return 1 on success; 0 if it causes truncation unsigned int toWire(AbstractMessageRenderer& renderer) const; /// \brief Render the Question in the wire format without name compression. 
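A short sketch of the new truncation handling that Question::toWire() gains
above (illustrative only, not part of the patch; "renderer" is assumed to be
a MessageRenderer set up in whatever way the local libdns++ version requires):

    renderer.setLengthLimit(4);   // deliberately too small for any question
    const Question question(Name("www.example.com"), RRClass::IN(),
                            RRType::AAAA());
    const unsigned int n = question.toWire(renderer);
    // n == 0 and renderer.isTruncated() is now true; the partially rendered
    // question has been rolled back with trim(), so renderer.getLength()
    // is back to its previous value (0 for a fresh renderer, as the Python
    // test above also checks).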
diff --git a/src/lib/dns/rdata/any_255/tsig_250.cc b/src/lib/dns/rdata/any_255/tsig_250.cc index 2557965c47..4eb72bcf0b 100644 --- a/src/lib/dns/rdata/any_255/tsig_250.cc +++ b/src/lib/dns/rdata/any_255/tsig_250.cc @@ -19,9 +19,11 @@ #include #include +#include #include #include +#include #include #include #include @@ -30,6 +32,7 @@ using namespace std; using namespace boost; using namespace isc::util; using namespace isc::util::encode; +using namespace isc::util::str; // BEGIN_ISC_NAMESPACE // BEGIN_RDATA_NAMESPACE @@ -65,45 +68,6 @@ struct TSIG::TSIGImpl { const vector other_data_; }; -namespace { -string -getToken(istringstream& iss, const string& full_input) { - string token; - iss >> token; - if (iss.bad() || iss.fail()) { - isc_throw(InvalidRdataText, "Invalid TSIG text: parse error " << - full_input); - } - return (token); -} - -// This helper function converts a string token to an *unsigned* integer. -// NumType is a *signed* integral type (e.g. int32_t) that is sufficiently -// wide to store resulting integers. -// BitSize is the maximum number of bits that the resulting integer can take. -// This function first checks whether the given token can be converted to -// an integer of NumType type. It then confirms the conversion result is -// within the valid range, i.e., [0, 2^NumType - 1]. The second check is -// necessary because lexical_cast where T is an unsigned integer type -// doesn't correctly reject negative numbers when compiled with SunStudio. -template -NumType -tokenToNum(const string& num_token) { - NumType num; - try { - num = lexical_cast(num_token); - } catch (const boost::bad_lexical_cast& ex) { - isc_throw(InvalidRdataText, "Invalid TSIG numeric parameter: " << - num_token); - } - if (num < 0 || num >= (static_cast(1) << BitSize)) { - isc_throw(InvalidRdataText, "Numeric TSIG parameter out of range: " << - num); - } - return (num); -} -} - /// \brief Constructor from string. /// /// \c tsig_str must be formatted as follows: @@ -148,47 +112,52 @@ tokenToNum(const string& num_token) { TSIG::TSIG(const std::string& tsig_str) : impl_(NULL) { istringstream iss(tsig_str); - const Name algorithm(getToken(iss, tsig_str)); - const int64_t time_signed = tokenToNum(getToken(iss, - tsig_str)); - const int32_t fudge = tokenToNum(getToken(iss, tsig_str)); - const int32_t macsize = tokenToNum(getToken(iss, tsig_str)); + try { + const Name algorithm(getToken(iss)); + const int64_t time_signed = tokenToNum(getToken(iss)); + const int32_t fudge = tokenToNum(getToken(iss)); + const int32_t macsize = tokenToNum(getToken(iss)); - const string mac_txt = (macsize > 0) ? getToken(iss, tsig_str) : ""; - vector mac; - decodeBase64(mac_txt, mac); - if (mac.size() != macsize) { - isc_throw(InvalidRdataText, "TSIG MAC size and data are inconsistent"); + const string mac_txt = (macsize > 0) ? getToken(iss) : ""; + vector mac; + decodeBase64(mac_txt, mac); + if (mac.size() != macsize) { + isc_throw(InvalidRdataText, "TSIG MAC size and data are inconsistent"); + } + + const int32_t orig_id = tokenToNum(getToken(iss)); + + const string error_txt = getToken(iss); + int32_t error = 0; + // XXX: In the initial implementation we hardcode the mnemonics. + // We'll soon generalize this. + if (error_txt == "BADSIG") { + error = 16; + } else if (error_txt == "BADKEY") { + error = 17; + } else if (error_txt == "BADTIME") { + error = 18; + } else { + error = tokenToNum(error_txt); + } + + const int32_t otherlen = tokenToNum(getToken(iss)); + const string otherdata_txt = (otherlen > 0) ? 
getToken(iss) : ""; + vector other_data; + decodeBase64(otherdata_txt, other_data); + + if (!iss.eof()) { + isc_throw(InvalidRdataText, "Unexpected input for TSIG RDATA: " << + tsig_str); + } + + impl_ = new TSIGImpl(algorithm, time_signed, fudge, mac, orig_id, + error, other_data); + + } catch (const StringTokenError& ste) { + isc_throw(InvalidRdataText, "Invalid TSIG text: " << ste.what() << + ": " << tsig_str); } - - const int32_t orig_id = tokenToNum(getToken(iss, tsig_str)); - - const string error_txt = getToken(iss, tsig_str); - int32_t error = 0; - // XXX: In the initial implementation we hardcode the mnemonics. - // We'll soon generalize this. - if (error_txt == "BADSIG") { - error = 16; - } else if (error_txt == "BADKEY") { - error = 17; - } else if (error_txt == "BADTIME") { - error = 18; - } else { - error = tokenToNum(error_txt); - } - - const int32_t otherlen = tokenToNum(getToken(iss, tsig_str)); - const string otherdata_txt = (otherlen > 0) ? getToken(iss, tsig_str) : ""; - vector other_data; - decodeBase64(otherdata_txt, other_data); - - if (!iss.eof()) { - isc_throw(InvalidRdataText, "Unexpected input for TSIG RDATA: " << - tsig_str); - } - - impl_ = new TSIGImpl(algorithm, time_signed, fudge, mac, orig_id, - error, other_data); } /// \brief Constructor from wire-format data. diff --git a/src/lib/dns/rdata/generic/afsdb_18.cc b/src/lib/dns/rdata/generic/afsdb_18.cc new file mode 100644 index 0000000000..6afc4de51e --- /dev/null +++ b/src/lib/dns/rdata/generic/afsdb_18.cc @@ -0,0 +1,171 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include + +#include +#include + +#include +#include +#include +#include + +#include + +using namespace std; +using namespace isc::util; +using namespace isc::util::str; + +// BEGIN_ISC_NAMESPACE +// BEGIN_RDATA_NAMESPACE + +/// \brief Constructor from string. +/// +/// \c afsdb_str must be formatted as follows: +/// \code +/// \endcode +/// where server name field must represent a valid domain name. +/// +/// An example of valid string is: +/// \code "1 server.example.com." \endcode +/// +/// Exceptions +/// +/// \exception InvalidRdataText The number of RDATA fields (must be 2) is +/// incorrect. +/// \exception std::bad_alloc Memory allocation fails. +/// \exception Other The constructor of the \c Name class will throw if the +/// names in the string is invalid. 
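+///
+/// Usage sketch (illustrative only, not part of this file):
+/// \code
+///     const generic::AFSDB rdata("1 server.example.com.");
+///     // rdata.getSubtype() == 1
+///     // rdata.getServer() == Name("server.example.com.")
+/// \endcode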
+AFSDB::AFSDB(const std::string& afsdb_str) : + subtype_(0), server_(Name::ROOT_NAME()) +{ + istringstream iss(afsdb_str); + + try { + const uint32_t subtype = tokenToNum(getToken(iss)); + const Name servername(getToken(iss)); + string server; + + if (!iss.eof()) { + isc_throw(InvalidRdataText, "Unexpected input for AFSDB" + "RDATA: " << afsdb_str); + } + + subtype_ = subtype; + server_ = servername; + + } catch (const StringTokenError& ste) { + isc_throw(InvalidRdataText, "Invalid AFSDB text: " << + ste.what() << ": " << afsdb_str); + } +} + +/// \brief Constructor from wire-format data. +/// +/// This constructor doesn't check the validity of the second parameter (rdata +/// length) for parsing. +/// If necessary, the caller will check consistency. +/// +/// \exception std::bad_alloc Memory allocation fails. +/// \exception Other The constructor of the \c Name class will throw if the +/// names in the wire is invalid. +AFSDB::AFSDB(InputBuffer& buffer, size_t) : + subtype_(buffer.readUint16()), server_(buffer) +{} + +/// \brief Copy constructor. +/// +/// \exception std::bad_alloc Memory allocation fails in copying internal +/// member variables (this should be very rare). +AFSDB::AFSDB(const AFSDB& other) : + Rdata(), subtype_(other.subtype_), server_(other.server_) +{} + +AFSDB& +AFSDB::operator=(const AFSDB& source) { + subtype_ = source.subtype_; + server_ = source.server_; + + return (*this); +} + +/// \brief Convert the \c AFSDB to a string. +/// +/// The output of this method is formatted as described in the "from string" +/// constructor (\c AFSDB(const std::string&))). +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \return A \c string object that represents the \c AFSDB object. +string +AFSDB::toText() const { + return (boost::lexical_cast(subtype_) + " " + server_.toText()); +} + +/// \brief Render the \c AFSDB in the wire format without name compression. +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \param buffer An output buffer to store the wire data. +void +AFSDB::toWire(OutputBuffer& buffer) const { + buffer.writeUint16(subtype_); + server_.toWire(buffer); +} + +/// \brief Render the \c AFSDB in the wire format with taking into account +/// compression. +/// +/// As specified in RFC3597, TYPE AFSDB is not "well-known", the server +/// field (domain name) will not be compressed. +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \param renderer DNS message rendering context that encapsulates the +/// output buffer and name compression information. +void +AFSDB::toWire(AbstractMessageRenderer& renderer) const { + renderer.writeUint16(subtype_); + renderer.writeName(server_, false); +} + +/// \brief Compare two instances of \c AFSDB RDATA. +/// +/// See documentation in \c Rdata. 
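+///
+/// Illustrative note (not in the original patch): ordering follows the RDATA
+/// field order, i.e. by the subtype value first and then by the server name
+/// via \c compareNames().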
+int +AFSDB::compare(const Rdata& other) const { + const AFSDB& other_afsdb = dynamic_cast(other); + if (subtype_ < other_afsdb.subtype_) { + return (-1); + } else if (subtype_ > other_afsdb.subtype_) { + return (1); + } + + return (compareNames(server_, other_afsdb.server_)); +} + +const Name& +AFSDB::getServer() const { + return (server_); +} + +uint16_t +AFSDB::getSubtype() const { + return (subtype_); +} + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/afsdb_18.h b/src/lib/dns/rdata/generic/afsdb_18.h new file mode 100644 index 0000000000..4a4677502c --- /dev/null +++ b/src/lib/dns/rdata/generic/afsdb_18.h @@ -0,0 +1,74 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// BEGIN_HEADER_GUARD + +#include + +#include + +#include +#include + +// BEGIN_ISC_NAMESPACE + +// BEGIN_COMMON_DECLARATIONS +// END_COMMON_DECLARATIONS + +// BEGIN_RDATA_NAMESPACE + +/// \brief \c rdata::AFSDB class represents the AFSDB RDATA as defined %in +/// RFC1183. +/// +/// This class implements the basic interfaces inherited from the abstract +/// \c rdata::Rdata class, and provides trivial accessors specific to the +/// AFSDB RDATA. +class AFSDB : public Rdata { +public: + // BEGIN_COMMON_MEMBERS + // END_COMMON_MEMBERS + + /// \brief Assignment operator. + /// + /// This method never throws an exception. + AFSDB& operator=(const AFSDB& source); + /// + /// Specialized methods + /// + + /// \brief Return the value of the server field. + /// + /// \return A reference to a \c Name class object corresponding to the + /// internal server name. + /// + /// This method never throws an exception. + const Name& getServer() const; + + /// \brief Return the value of the subtype field. + /// + /// This method never throws an exception. + uint16_t getSubtype() const; + +private: + uint16_t subtype_; + Name server_; +}; + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE +// END_HEADER_GUARD + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/generic/detail/ds_like.h b/src/lib/dns/rdata/generic/detail/ds_like.h new file mode 100644 index 0000000000..b5a35cd967 --- /dev/null +++ b/src/lib/dns/rdata/generic/detail/ds_like.h @@ -0,0 +1,225 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __DS_LIKE_H +#define __DS_LIKE_H 1 + +#include + +#include +#include +#include +#include + +#include + +#include + +#include +#include +#include +#include + +namespace isc { +namespace dns { +namespace rdata { +namespace generic { +namespace detail { + +/// \brief \c rdata::DSLikeImpl class represents the DS-like RDATA for DS +/// and DLV types. +/// +/// This class implements the basic interfaces inherited by the DS and DLV +/// classes from the abstract \c rdata::Rdata class, and provides trivial +/// accessors to DS-like RDATA. +template class DSLikeImpl { + // Common sequence of toWire() operations used for the two versions of + // toWire(). + template + void + toWireCommon(Output& output) const { + output.writeUint16(tag_); + output.writeUint8(algorithm_); + output.writeUint8(digest_type_); + output.writeData(&digest_[0], digest_.size()); + } + +public: + /// \brief Constructor from string. + /// + /// Exceptions + /// + /// \c InvalidRdataText is thrown if the method cannot process the + /// parameter data for any of the number of reasons. + DSLikeImpl(const std::string& ds_str) { + std::istringstream iss(ds_str); + // peekc should be of iss's char_type for isspace to work + std::istringstream::char_type peekc; + std::stringbuf digestbuf; + uint32_t tag, algorithm, digest_type; + + iss >> tag >> algorithm >> digest_type; + if (iss.bad() || iss.fail()) { + isc_throw(InvalidRdataText, + "Invalid " << RRType(typeCode) << " text"); + } + if (tag > 0xffff) { + isc_throw(InvalidRdataText, + RRType(typeCode) << " tag out of range"); + } + if (algorithm > 0xff) { + isc_throw(InvalidRdataText, + RRType(typeCode) << " algorithm out of range"); + } + if (digest_type > 0xff) { + isc_throw(InvalidRdataText, + RRType(typeCode) << " digest type out of range"); + } + + iss.read(&peekc, 1); + if (!iss.good() || !isspace(peekc, iss.getloc())) { + isc_throw(InvalidRdataText, + RRType(typeCode) << " presentation format error"); + } + + iss >> &digestbuf; + + tag_ = tag; + algorithm_ = algorithm; + digest_type_ = digest_type; + decodeHex(digestbuf.str(), digest_); + } + + /// \brief Constructor from wire-format data. + /// + /// \param buffer A buffer storing the wire format data. + /// \param rdata_len The length of the RDATA in bytes, normally expected + /// to be the value of the RDLENGTH field of the corresponding RR. + /// + /// Exceptions + /// + /// \c InvalidRdataLength is thrown if the input data is too short for the + /// type. + DSLikeImpl(InputBuffer& buffer, size_t rdata_len) { + if (rdata_len < 4) { + isc_throw(InvalidRdataLength, RRType(typeCode) << " too short"); + } + + tag_ = buffer.readUint16(); + algorithm_ = buffer.readUint8(); + digest_type_ = buffer.readUint8(); + + rdata_len -= 4; + digest_.resize(rdata_len); + buffer.readData(&digest_[0], rdata_len); + } + + /// \brief The copy constructor. + /// + /// Trivial for now, we could've used the default one. + DSLikeImpl(const DSLikeImpl& source) { + digest_ = source.digest_; + tag_ = source.tag_; + algorithm_ = source.algorithm_; + digest_type_ = source.digest_type_; + } + + /// \brief Convert the DS-like data to a string. + /// + /// \return A \c string object that represents the DS-like data. 
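+    ///
+    /// Illustrative only: for DS (and DLV, which shares this implementation)
+    /// the output takes the form
+    /// "<key tag> <algorithm> <digest type> <digest in hex>", e.g.
+    /// \code 60485 5 1 2BB183AF5F22588179A53B0A98631FAD1A292118 \endcode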
+ std::string + toText() const { + using namespace boost; + return (lexical_cast(static_cast(tag_)) + + " " + lexical_cast(static_cast(algorithm_)) + + " " + lexical_cast(static_cast(digest_type_)) + + " " + encodeHex(digest_)); + } + + /// \brief Render the DS-like data in the wire format to an OutputBuffer + /// object. + /// + /// \param buffer An output buffer to store the wire data. + void + toWire(OutputBuffer& buffer) const { + toWireCommon(buffer); + } + + /// \brief Render the DS-like data in the wire format to an + /// AbstractMessageRenderer object. + /// + /// \param renderer A renderer object to send the wire data to. + void + toWire(AbstractMessageRenderer& renderer) const { + toWireCommon(renderer); + } + + /// \brief Compare two instances of DS-like RDATA. + /// + /// It is up to the caller to make sure that \c other is an object of the + /// same \c DSLikeImpl class. + /// + /// \param other the right-hand operand to compare against. + /// \return < 0 if \c this would be sorted before \c other. + /// \return 0 if \c this is identical to \c other in terms of sorting + /// order. + /// \return > 0 if \c this would be sorted after \c other. + int + compare(const DSLikeImpl& other_ds) const { + if (tag_ != other_ds.tag_) { + return (tag_ < other_ds.tag_ ? -1 : 1); + } + if (algorithm_ != other_ds.algorithm_) { + return (algorithm_ < other_ds.algorithm_ ? -1 : 1); + } + if (digest_type_ != other_ds.digest_type_) { + return (digest_type_ < other_ds.digest_type_ ? -1 : 1); + } + + size_t this_len = digest_.size(); + size_t other_len = other_ds.digest_.size(); + size_t cmplen = min(this_len, other_len); + int cmp = memcmp(&digest_[0], &other_ds.digest_[0], cmplen); + if (cmp != 0) { + return (cmp); + } else { + return ((this_len == other_len) + ? 0 : (this_len < other_len) ? -1 : 1); + } + } + + /// \brief Accessors + uint16_t + getTag() const { + return (tag_); + } + +private: + // straightforward representation of DS RDATA fields + uint16_t tag_; + uint8_t algorithm_; + uint8_t digest_type_; + std::vector digest_; +}; + +} +} +} +} +} +#endif // __DS_LIKE_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/generic/detail/txt_like.h b/src/lib/dns/rdata/generic/detail/txt_like.h new file mode 100644 index 0000000000..392a8ce593 --- /dev/null +++ b/src/lib/dns/rdata/generic/detail/txt_like.h @@ -0,0 +1,172 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
+ +#ifndef __TXT_LIKE_H +#define __TXT_LIKE_H 1 + +#include + +#include +#include + +using namespace std; +using namespace isc::util; + +templateclass TXTLikeImpl { +public: + TXTLikeImpl(InputBuffer& buffer, size_t rdata_len) { + if (rdata_len > MAX_RDLENGTH) { + isc_throw(InvalidRdataLength, "RDLENGTH too large: " << rdata_len); + } + + if (rdata_len == 0) { // note that this couldn't happen in the loop. + isc_throw(DNSMessageFORMERR, "Error in parsing " << + RRType(typeCode) << " RDATA: 0-length character string"); + } + + do { + const uint8_t len = buffer.readUint8(); + if (rdata_len < len + 1) { + isc_throw(DNSMessageFORMERR, "Error in parsing " << + RRType(typeCode) << + " RDATA: character string length is too large: " << + static_cast(len)); + } + vector data(len + 1); + data[0] = len; + buffer.readData(&data[0] + 1, len); + string_list_.push_back(data); + + rdata_len -= (len + 1); + } while (rdata_len > 0); + } + + explicit TXTLikeImpl(const std::string& txtstr) { + // TBD: this is a simple, incomplete implementation that only supports + // a single character-string. + + size_t length = txtstr.size(); + size_t pos_begin = 0; + + if (length > 1 && txtstr[0] == '"' && txtstr[length - 1] == '"') { + pos_begin = 1; + length -= 2; + } + + if (length > MAX_CHARSTRING_LEN) { + isc_throw(CharStringTooLong, RRType(typeCode) << + " RDATA construction from text:" + " string length is too long: " << length); + } + + // TBD: right now, we don't support escaped characters + if (txtstr.find('\\') != string::npos) { + isc_throw(InvalidRdataText, RRType(typeCode) << + " RDATA from text:" + " escaped character is currently not supported: " << + txtstr); + } + + vector data; + data.reserve(length + 1); + data.push_back(length); + data.insert(data.end(), txtstr.begin() + pos_begin, + txtstr.begin() + pos_begin + length); + string_list_.push_back(data); + } + + TXTLikeImpl(const TXTLikeImpl& other) : + string_list_(other.string_list_) + {} + + void + toWire(OutputBuffer& buffer) const { + for (vector >::const_iterator it = + string_list_.begin(); + it != string_list_.end(); + ++it) + { + buffer.writeData(&(*it)[0], (*it).size()); + } + } + + void + toWire(AbstractMessageRenderer& renderer) const { + for (vector >::const_iterator it = + string_list_.begin(); + it != string_list_.end(); + ++it) + { + renderer.writeData(&(*it)[0], (*it).size()); + } + } + + string + toText() const { + string s; + + // XXX: this implementation is not entirely correct. for example, it + // should escape double-quotes if they appear in the character string. + for (vector >::const_iterator it = + string_list_.begin(); + it != string_list_.end(); + ++it) + { + if (!s.empty()) { + s.push_back(' '); + } + s.push_back('"'); + s.insert(s.end(), (*it).begin() + 1, (*it).end()); + s.push_back('"'); + } + + return (s); + } + + int + compare(const TXTLikeImpl& other) const { + // This implementation is not efficient. Revisit this (TBD). + OutputBuffer this_buffer(0); + toWire(this_buffer); + size_t this_len = this_buffer.getLength(); + + OutputBuffer other_buffer(0); + other.toWire(other_buffer); + const size_t other_len = other_buffer.getLength(); + + const size_t cmplen = min(this_len, other_len); + const int cmp = memcmp(this_buffer.getData(), other_buffer.getData(), + cmplen); + if (cmp != 0) { + return (cmp); + } else { + return ((this_len == other_len) ? 0 : + (this_len < other_len) ? -1 : 1); + } + } + +private: + /// Note: this is a prototype version; we may reconsider + /// this representation later. 
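+    ///
+    /// Illustrative only: each entry holds one wire-format character-string,
+    /// i.e. a length octet followed by the data, so the text "foo" is kept
+    /// as the four bytes {0x03, 'f', 'o', 'o'} and written to the wire
+    /// verbatim by toWire().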
+ std::vector > string_list_; +}; + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE + +#endif // __TXT_LIKE_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/generic/dlv_32769.cc b/src/lib/dns/rdata/generic/dlv_32769.cc new file mode 100644 index 0000000000..9887aa88bd --- /dev/null +++ b/src/lib/dns/rdata/generic/dlv_32769.cc @@ -0,0 +1,121 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include +#include + +#include +#include +#include + +#include + +using namespace std; +using namespace isc::util; +using namespace isc::util::encode; +using namespace isc::dns::rdata::generic::detail; + +// BEGIN_ISC_NAMESPACE +// BEGIN_RDATA_NAMESPACE + +/// \brief Constructor from string. +/// +/// A copy of the implementation object is allocated and constructed. +DLV::DLV(const string& ds_str) : + impl_(new DLVImpl(ds_str)) +{} + +/// \brief Constructor from wire-format data. +/// +/// A copy of the implementation object is allocated and constructed. +DLV::DLV(InputBuffer& buffer, size_t rdata_len) : + impl_(new DLVImpl(buffer, rdata_len)) +{} + +/// \brief Copy constructor +/// +/// A copy of the implementation object is allocated and constructed. +DLV::DLV(const DLV& source) : + Rdata(), impl_(new DLVImpl(*source.impl_)) +{} + +/// \brief Assignment operator +/// +/// PIMPL-induced logic +DLV& +DLV::operator=(const DLV& source) { + if (impl_ == source.impl_) { + return (*this); + } + + DLVImpl* newimpl = new DLVImpl(*source.impl_); + delete impl_; + impl_ = newimpl; + + return (*this); +} + +/// \brief Destructor +/// +/// Deallocates an internal resource. +DLV::~DLV() { + delete impl_; +} + +/// \brief Convert the \c DLV to a string. +/// +/// A pass-thru to the corresponding implementation method. +string +DLV::toText() const { + return (impl_->toText()); +} + +/// \brief Render the \c DLV in the wire format to a OutputBuffer object +/// +/// A pass-thru to the corresponding implementation method. +void +DLV::toWire(OutputBuffer& buffer) const { + impl_->toWire(buffer); +} + +/// \brief Render the \c DLV in the wire format to a AbstractMessageRenderer +/// object +/// +/// A pass-thru to the corresponding implementation method. +void +DLV::toWire(AbstractMessageRenderer& renderer) const { + impl_->toWire(renderer); +} + +/// \brief Compare two instances of \c DLV RDATA. +/// +/// The type check is performed here. Otherwise, a pass-thru to the +/// corresponding implementation method. 
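+///
+/// Illustrative note (not in the original patch): because the type check is
+/// a reference dynamic_cast, passing an \c Rdata object of any other type
+/// makes this method throw \c std::bad_cast instead of returning an
+/// ordering value.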
+int +DLV::compare(const Rdata& other) const { + const DLV& other_ds = dynamic_cast(other); + + return (impl_->compare(*other_ds.impl_)); +} + +/// \brief Tag accessor +uint16_t +DLV::getTag() const { + return (impl_->getTag()); +} + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/dlv_32769.h b/src/lib/dns/rdata/generic/dlv_32769.h new file mode 100644 index 0000000000..86cd98ce05 --- /dev/null +++ b/src/lib/dns/rdata/generic/dlv_32769.h @@ -0,0 +1,77 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// BEGIN_HEADER_GUARD + +#include + +#include + +#include +#include +#include +#include + +// BEGIN_ISC_NAMESPACE + +// BEGIN_COMMON_DECLARATIONS +// END_COMMON_DECLARATIONS + +// BEGIN_RDATA_NAMESPACE + +namespace detail { +template class DSLikeImpl; +} + +/// \brief \c rdata::generic::DLV class represents the DLV RDATA as defined in +/// RFC4431. +/// +/// This class implements the basic interfaces inherited from the abstract +/// \c rdata::Rdata class, and provides trivial accessors specific to the +/// DLV RDATA. +class DLV : public Rdata { +public: + // BEGIN_COMMON_MEMBERS + // END_COMMON_MEMBERS + + /// \brief Assignment operator. + /// + /// It internally allocates a resource, and if it fails a corresponding + /// standard exception will be thrown. + /// This operator never throws an exception otherwise. + /// + /// This operator provides the strong exception guarantee: When an + /// exception is thrown the content of the assignment target will be + /// intact. + DLV& operator=(const DLV& source); + + /// \brief The destructor. + ~DLV(); + + /// \brief Return the value of the Tag field. + /// + /// This method never throws an exception. + uint16_t getTag() const; +private: + typedef detail::DSLikeImpl DLVImpl; + DLVImpl* impl_; +}; + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE +// END_HEADER_GUARD + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/generic/ds_43.cc b/src/lib/dns/rdata/generic/ds_43.cc index 1b48456d8b..20b62dca83 100644 --- a/src/lib/dns/rdata/generic/ds_43.cc +++ b/src/lib/dns/rdata/generic/ds_43.cc @@ -1,4 +1,4 @@ -// Copyright (C) 2010 Internet Systems Consortium, Inc. ("ISC") +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") // // Permission to use, copy, modify, and/or distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -12,87 +12,32 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. 
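+// DS and the new DLV class share a single implementation template,
+// detail::DSLikeImpl, which carries the key tag, algorithm, digest type and
+// digest fields common to both RR types.  A minimal usage sketch for DLV
+// (illustration only, not part of the patch; the include path and the RDATA
+// values are assumptions of this sketch):
+//
+//   #include <dns/rdataclass.h>
+//   #include <iostream>
+//
+//   int main() {
+//       // "<key tag> <algorithm> <digest type> <hex digest>", as for DS
+//       isc::dns::rdata::generic::DLV dlv(
+//           "12345 5 1 0123456789abcdef0123456789abcdef01234567");
+//       std::cout << dlv.getTag() << std::endl;    // prints the key tag: 12345
+//       std::cout << dlv.toText() << std::endl;    // round-trips the text form
+//       return (0);
+//   }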
-#include #include -#include -#include - -#include #include #include #include -#include #include #include -#include -#include +#include using namespace std; using namespace isc::util; using namespace isc::util::encode; +using namespace isc::dns::rdata::generic::detail; // BEGIN_ISC_NAMESPACE // BEGIN_RDATA_NAMESPACE -struct DSImpl { - // straightforward representation of DS RDATA fields - DSImpl(uint16_t tag, uint8_t algorithm, uint8_t digest_type, - const vector& digest) : - tag_(tag), algorithm_(algorithm), digest_type_(digest_type), - digest_(digest) - {} - - uint16_t tag_; - uint8_t algorithm_; - uint8_t digest_type_; - const vector digest_; -}; - DS::DS(const string& ds_str) : - impl_(NULL) -{ - istringstream iss(ds_str); - unsigned int tag, algorithm, digest_type; - stringbuf digestbuf; + impl_(new DSImpl(ds_str)) +{} - iss >> tag >> algorithm >> digest_type >> &digestbuf; - if (iss.bad() || iss.fail()) { - isc_throw(InvalidRdataText, "Invalid DS text"); - } - if (tag > 0xffff) { - isc_throw(InvalidRdataText, "DS tag out of range"); - } - if (algorithm > 0xff) { - isc_throw(InvalidRdataText, "DS algorithm out of range"); - } - if (digest_type > 0xff) { - isc_throw(InvalidRdataText, "DS digest type out of range"); - } - - vector digest; - decodeHex(digestbuf.str(), digest); - - impl_ = new DSImpl(tag, algorithm, digest_type, digest); -} - -DS::DS(InputBuffer& buffer, size_t rdata_len) { - if (rdata_len < 4) { - isc_throw(InvalidRdataLength, "DS too short"); - } - - uint16_t tag = buffer.readUint16(); - uint16_t algorithm = buffer.readUint8(); - uint16_t digest_type = buffer.readUint8(); - - rdata_len -= 4; - vector digest(rdata_len); - buffer.readData(&digest[0], rdata_len); - - impl_ = new DSImpl(tag, algorithm, digest_type, digest); -} +DS::DS(InputBuffer& buffer, size_t rdata_len) : + impl_(new DSImpl(buffer, rdata_len)) +{} DS::DS(const DS& source) : Rdata(), impl_(new DSImpl(*source.impl_)) @@ -117,57 +62,29 @@ DS::~DS() { string DS::toText() const { - using namespace boost; - return (lexical_cast(static_cast(impl_->tag_)) + - " " + lexical_cast(static_cast(impl_->algorithm_)) + - " " + lexical_cast(static_cast(impl_->digest_type_)) + - " " + encodeHex(impl_->digest_)); + return (impl_->toText()); } void DS::toWire(OutputBuffer& buffer) const { - buffer.writeUint16(impl_->tag_); - buffer.writeUint8(impl_->algorithm_); - buffer.writeUint8(impl_->digest_type_); - buffer.writeData(&impl_->digest_[0], impl_->digest_.size()); + impl_->toWire(buffer); } void DS::toWire(AbstractMessageRenderer& renderer) const { - renderer.writeUint16(impl_->tag_); - renderer.writeUint8(impl_->algorithm_); - renderer.writeUint8(impl_->digest_type_); - renderer.writeData(&impl_->digest_[0], impl_->digest_.size()); + impl_->toWire(renderer); } int DS::compare(const Rdata& other) const { const DS& other_ds = dynamic_cast(other); - if (impl_->tag_ != other_ds.impl_->tag_) { - return (impl_->tag_ < other_ds.impl_->tag_ ? -1 : 1); - } - if (impl_->algorithm_ != other_ds.impl_->algorithm_) { - return (impl_->algorithm_ < other_ds.impl_->algorithm_ ? -1 : 1); - } - if (impl_->digest_type_ != other_ds.impl_->digest_type_) { - return (impl_->digest_type_ < other_ds.impl_->digest_type_ ? -1 : 1); - } - - size_t this_len = impl_->digest_.size(); - size_t other_len = other_ds.impl_->digest_.size(); - size_t cmplen = min(this_len, other_len); - int cmp = memcmp(&impl_->digest_[0], &other_ds.impl_->digest_[0], cmplen); - if (cmp != 0) { - return (cmp); - } else { - return ((this_len == other_len) ? 
0 : (this_len < other_len) ? -1 : 1); - } + return (impl_->compare(*other_ds.impl_)); } uint16_t DS::getTag() const { - return (impl_->tag_); + return (impl_->getTag()); } // END_RDATA_NAMESPACE diff --git a/src/lib/dns/rdata/generic/ds_43.h b/src/lib/dns/rdata/generic/ds_43.h index 03b19a0903..2697f513be 100644 --- a/src/lib/dns/rdata/generic/ds_43.h +++ b/src/lib/dns/rdata/generic/ds_43.h @@ -1,4 +1,4 @@ -// Copyright (C) 2010 Internet Systems Consortium, Inc. ("ISC") +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") // // Permission to use, copy, modify, and/or distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -12,6 +12,8 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. +// BEGIN_HEADER_GUARD + #include #include @@ -21,8 +23,6 @@ #include #include -// BEGIN_HEADER_GUARD - // BEGIN_ISC_NAMESPACE // BEGIN_COMMON_DECLARATIONS @@ -30,20 +30,41 @@ // BEGIN_RDATA_NAMESPACE -struct DSImpl; +namespace detail { +template class DSLikeImpl; +} +/// \brief \c rdata::generic::DS class represents the DS RDATA as defined in +/// RFC3658. +/// +/// This class implements the basic interfaces inherited from the abstract +/// \c rdata::Rdata class, and provides trivial accessors specific to the +/// DS RDATA. class DS : public Rdata { public: // BEGIN_COMMON_MEMBERS // END_COMMON_MEMBERS + + /// \brief Assignment operator. + /// + /// It internally allocates a resource, and if it fails a corresponding + /// standard exception will be thrown. + /// This operator never throws an exception otherwise. + /// + /// This operator provides the strong exception guarantee: When an + /// exception is thrown the content of the assignment target will be + /// intact. DS& operator=(const DS& source); + + /// \brief The destructor. ~DS(); + /// \brief Return the value of the Tag field. /// - /// Specialized methods - /// + /// This method never throws an exception. uint16_t getTag() const; private: + typedef detail::DSLikeImpl DSImpl; DSImpl* impl_; }; diff --git a/src/lib/dns/rdata/generic/hinfo_13.cc b/src/lib/dns/rdata/generic/hinfo_13.cc new file mode 100644 index 0000000000..45f4209863 --- /dev/null +++ b/src/lib/dns/rdata/generic/hinfo_13.cc @@ -0,0 +1,129 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
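+// HINFO carries two <character-string> fields (CPU and OS).  The "from text"
+// constructor below parses them with
+// isc::dns::characterstr::getNextCharacterString(), so each field may be
+// written either as a single unquoted token or as a quoted string containing
+// spaces.  For example (illustrative input, not part of the patch), the RDATA
+// text
+//   "Pentium II" "Linux"
+// yields cpu_ == "Pentium II" and os_ == "Linux".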
+ +#include + +#include + +#include + +#include + +#include +#include +#include +#include +#include +#include + +using namespace std; +using namespace boost; +using namespace isc::util; +using namespace isc::dns; +using namespace isc::dns::characterstr; + +// BEGIN_ISC_NAMESPACE +// BEGIN_RDATA_NAMESPACE + + +HINFO::HINFO(const string& hinfo_str) { + string::const_iterator input_iterator = hinfo_str.begin(); + cpu_ = getNextCharacterString(hinfo_str, input_iterator); + + skipLeftSpaces(hinfo_str, input_iterator); + + os_ = getNextCharacterString(hinfo_str, input_iterator); +} + +HINFO::HINFO(InputBuffer& buffer, size_t rdata_len) { + cpu_ = getNextCharacterString(buffer, rdata_len); + os_ = getNextCharacterString(buffer, rdata_len); +} + +HINFO::HINFO(const HINFO& source): + Rdata(), cpu_(source.cpu_), os_(source.os_) +{ +} + +std::string +HINFO::toText() const { + string result; + result += "\""; + result += cpu_; + result += "\" \""; + result += os_; + result += "\""; + return (result); +} + +void +HINFO::toWire(OutputBuffer& buffer) const { + toWireHelper(buffer); +} + +void +HINFO::toWire(AbstractMessageRenderer& renderer) const { + toWireHelper(renderer); +} + +int +HINFO::compare(const Rdata& other) const { + const HINFO& other_hinfo = dynamic_cast(other); + + if (cpu_ < other_hinfo.cpu_) { + return (-1); + } else if (cpu_ > other_hinfo.cpu_) { + return (1); + } + + if (os_ < other_hinfo.os_) { + return (-1); + } else if (os_ > other_hinfo.os_) { + return (1); + } + + return (0); +} + +const std::string& +HINFO::getCPU() const { + return (cpu_); +} + +const std::string& +HINFO::getOS() const { + return (os_); +} + +void +HINFO::skipLeftSpaces(const std::string& input_str, + std::string::const_iterator& input_iterator) +{ + if (input_iterator >= input_str.end()) { + isc_throw(InvalidRdataText, + "Invalid HINFO text format, field is missing."); + } + + if (!isspace(*input_iterator)) { + isc_throw(InvalidRdataText, + "Invalid HINFO text format, fields are not separated by space."); + } + // Skip white spaces + while (input_iterator < input_str.end() && isspace(*input_iterator)) { + ++input_iterator; + } +} + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/hinfo_13.h b/src/lib/dns/rdata/generic/hinfo_13.h new file mode 100644 index 0000000000..85134198b6 --- /dev/null +++ b/src/lib/dns/rdata/generic/hinfo_13.h @@ -0,0 +1,77 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
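+// A brief usage sketch for the HINFO class declared below (illustration
+// only, not part of the patch; the include path is an assumption):
+//
+//   #include <dns/rdataclass.h>
+//   #include <cassert>
+//
+//   void example() {
+//       const isc::dns::rdata::generic::HINFO hinfo("\"Pentium II\" \"Linux\"");
+//       assert(hinfo.getCPU() == "Pentium II");
+//       assert(hinfo.getOS() == "Linux");
+//       assert(hinfo.toText() == "\"Pentium II\" \"Linux\"");
+//   }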
+ +// BEGIN_HEADER_GUARD +#include + +#include + +#include +#include +#include + +// BEGIN_ISC_NAMESPACE + +// BEGIN_COMMON_DECLARATIONS +// END_COMMON_DECLARATIONS + +// BEGIN_RDATA_NAMESPACE + +/// \brief \c HINFO class represents the HINFO rdata defined in +/// RFC1034, RFC1035 +/// +/// This class implements the basic interfaces inherited from the +/// \c rdata::Rdata class, and provides accessors specific to the +/// HINFO rdata. +class HINFO : public Rdata { +public: + // BEGIN_COMMON_MEMBERS + // END_COMMON_MEMBERS + + // HINFO specific methods + const std::string& getCPU() const; + const std::string& getOS() const; + +private: + /// Skip the left whitespaces of the input string + /// + /// \param input_str The input string + /// \param input_iterator From which the skipping started + void skipLeftSpaces(const std::string& input_str, + std::string::const_iterator& input_iterator); + + /// Helper template function for toWire() + /// + /// \param outputer Where to write data in + template + void toWireHelper(T& outputer) const { + outputer.writeUint8(cpu_.size()); + outputer.writeData(cpu_.c_str(), cpu_.size()); + + outputer.writeUint8(os_.size()); + outputer.writeData(os_.c_str(), os_.size()); + } + + std::string cpu_; + std::string os_; +}; + + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE +// END_HEADER_GUARD + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/generic/minfo_14.cc b/src/lib/dns/rdata/generic/minfo_14.cc new file mode 100644 index 0000000000..aa5272cfc2 --- /dev/null +++ b/src/lib/dns/rdata/generic/minfo_14.cc @@ -0,0 +1,156 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include + +#include + +#include +#include +#include +#include + +using namespace std; +using namespace isc::dns; +using namespace isc::util; + +// BEGIN_ISC_NAMESPACE +// BEGIN_RDATA_NAMESPACE + +/// \brief Constructor from string. +/// +/// \c minfo_str must be formatted as follows: +/// \code +/// \endcode +/// where both fields must represent a valid domain name. +/// +/// An example of valid string is: +/// \code "rmail.example.com. email.example.com." \endcode +/// +/// Exceptions +/// +/// \exception InvalidRdataText The number of RDATA fields (must be 2) is +/// incorrect. +/// \exception std::bad_alloc Memory allocation for names fails. +/// \exception Other The constructor of the \c Name class will throw if the +/// names in the string is invalid. +MINFO::MINFO(const std::string& minfo_str) : + // We cannot construct both names in the initialization list due to the + // necessary text processing, so we have to initialize them with a dummy + // name and replace them later. 
+ rmailbox_(Name::ROOT_NAME()), emailbox_(Name::ROOT_NAME()) +{ + istringstream iss(minfo_str); + string rmailbox_str, emailbox_str; + iss >> rmailbox_str >> emailbox_str; + + // Validation: A valid MINFO RR must have exactly two fields. + if (iss.bad() || iss.fail()) { + isc_throw(InvalidRdataText, "Invalid MINFO text: " << minfo_str); + } + if (!iss.eof()) { + isc_throw(InvalidRdataText, "Invalid MINFO text (redundant field): " + << minfo_str); + } + + rmailbox_ = Name(rmailbox_str); + emailbox_ = Name(emailbox_str); +} + +/// \brief Constructor from wire-format data. +/// +/// This constructor doesn't check the validity of the second parameter (rdata +/// length) for parsing. +/// If necessary, the caller will check consistency. +/// +/// \exception std::bad_alloc Memory allocation for names fails. +/// \exception Other The constructor of the \c Name class will throw if the +/// names in the wire is invalid. +MINFO::MINFO(InputBuffer& buffer, size_t) : + rmailbox_(buffer), emailbox_(buffer) +{} + +/// \brief Copy constructor. +/// +/// \exception std::bad_alloc Memory allocation fails in copying internal +/// member variables (this should be very rare). +MINFO::MINFO(const MINFO& other) : + Rdata(), rmailbox_(other.rmailbox_), emailbox_(other.emailbox_) +{} + +/// \brief Convert the \c MINFO to a string. +/// +/// The output of this method is formatted as described in the "from string" +/// constructor (\c MINFO(const std::string&))). +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \return A \c string object that represents the \c MINFO object. +std::string +MINFO::toText() const { + return (rmailbox_.toText() + " " + emailbox_.toText()); +} + +/// \brief Render the \c MINFO in the wire format without name compression. +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \param buffer An output buffer to store the wire data. +void +MINFO::toWire(OutputBuffer& buffer) const { + rmailbox_.toWire(buffer); + emailbox_.toWire(buffer); +} + +MINFO& +MINFO::operator=(const MINFO& source) { + rmailbox_ = source.rmailbox_; + emailbox_ = source.emailbox_; + + return (*this); +} + +/// \brief Render the \c MINFO in the wire format with taking into account +/// compression. +/// +/// As specified in RFC3597, TYPE MINFO is "well-known", the rmailbox and +/// emailbox fields (domain names) will be compressed. +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \param renderer DNS message rendering context that encapsulates the +/// output buffer and name compression information. +void +MINFO::toWire(AbstractMessageRenderer& renderer) const { + renderer.writeName(rmailbox_); + renderer.writeName(emailbox_); +} + +/// \brief Compare two instances of \c MINFO RDATA. +/// +/// See documentation in \c Rdata. +int +MINFO::compare(const Rdata& other) const { + const MINFO& other_minfo = dynamic_cast(other); + + const int cmp = compareNames(rmailbox_, other_minfo.rmailbox_); + if (cmp != 0) { + return (cmp); + } + return (compareNames(emailbox_, other_minfo.emailbox_)); +} + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/minfo_14.h b/src/lib/dns/rdata/generic/minfo_14.h new file mode 100644 index 0000000000..f3ee1d07fb --- /dev/null +++ b/src/lib/dns/rdata/generic/minfo_14.h @@ -0,0 +1,82 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. 
("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// BEGIN_HEADER_GUARD + +#include + +#include +#include + +// BEGIN_ISC_NAMESPACE + +// BEGIN_COMMON_DECLARATIONS +// END_COMMON_DECLARATIONS + +// BEGIN_RDATA_NAMESPACE + +/// \brief \c rdata::generic::MINFO class represents the MINFO RDATA as +/// defined in RFC1035. +/// +/// This class implements the basic interfaces inherited from the abstract +/// \c rdata::Rdata class, and provides trivial accessors specific to the +/// MINFO RDATA. +class MINFO : public Rdata { +public: + // BEGIN_COMMON_MEMBERS + // END_COMMON_MEMBERS + + /// \brief Define the assignment operator. + /// + /// \exception std::bad_alloc Memory allocation fails in copying + /// internal member variables (this should be very rare). + MINFO& operator=(const MINFO& source); + + /// \brief Return the value of the rmailbox field. + /// + /// \exception std::bad_alloc If resource allocation for the returned + /// \c Name fails. + /// + /// \note + /// Unlike the case of some other RDATA classes (such as + /// \c NS::getNSName()), this method constructs a new \c Name object + /// and returns it, instead of returning a reference to a \c Name object + /// internally maintained in the class (which is a private member). + /// This is based on the observation that this method will be rarely + /// used and even when it's used it will not be in a performance context + /// (for example, a recursive resolver won't need this field in its + /// resolution process). By returning a new object we have flexibility + /// of changing the internal representation without the risk of changing + /// the interface or method property. + /// The same note applies to the \c getEmailbox() method. + Name getRmailbox() const { return (rmailbox_); } + + /// \brief Return the value of the emailbox field. + /// + /// \exception std::bad_alloc If resource allocation for the returned + /// \c Name fails. + Name getEmailbox() const { return (emailbox_); } + +private: + Name rmailbox_; + Name emailbox_; +}; + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE +// END_HEADER_GUARD + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/generic/naptr_35.cc b/src/lib/dns/rdata/generic/naptr_35.cc new file mode 100644 index 0000000000..129bf6cb75 --- /dev/null +++ b/src/lib/dns/rdata/generic/naptr_35.cc @@ -0,0 +1,220 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include + +#include + +#include + +#include +#include +#include +#include +#include + +using namespace std; +using namespace boost; +using namespace isc::util; +using namespace isc::dns; +using namespace isc::dns::characterstr; + +// BEGIN_ISC_NAMESPACE +// BEGIN_RDATA_NAMESPACE + +namespace { +/// Skip the left whitespaces of the input string +/// +/// \param input_str The input string +/// \param input_iterator From which the skipping started +void +skipLeftSpaces(const std::string& input_str, + std::string::const_iterator& input_iterator) +{ + if (input_iterator >= input_str.end()) { + isc_throw(InvalidRdataText, + "Invalid NAPTR text format, field is missing."); + } + + if (!isspace(*input_iterator)) { + isc_throw(InvalidRdataText, + "Invalid NAPTR text format, fields are not separated by space."); + } + // Skip white spaces + while (input_iterator < input_str.end() && isspace(*input_iterator)) { + ++input_iterator; + } +} + +} // Anonymous namespace + +NAPTR::NAPTR(InputBuffer& buffer, size_t len): + replacement_(".") +{ + order_ = buffer.readUint16(); + preference_ = buffer.readUint16(); + + flags_ = getNextCharacterString(buffer, len); + services_ = getNextCharacterString(buffer, len); + regexp_ = getNextCharacterString(buffer, len); + replacement_ = Name(buffer); +} + +NAPTR::NAPTR(const std::string& naptr_str): + replacement_(".") +{ + istringstream iss(naptr_str); + uint16_t order; + uint16_t preference; + + iss >> order >> preference; + + if (iss.bad() || iss.fail()) { + isc_throw(InvalidRdataText, "Invalid NAPTR text format"); + } + + order_ = order; + preference_ = preference; + + string::const_iterator input_iterator = naptr_str.begin() + iss.tellg(); + + skipLeftSpaces(naptr_str, input_iterator); + + flags_ = getNextCharacterString(naptr_str, input_iterator); + + skipLeftSpaces(naptr_str, input_iterator); + + services_ = getNextCharacterString(naptr_str, input_iterator); + + skipLeftSpaces(naptr_str, input_iterator); + + regexp_ = getNextCharacterString(naptr_str, input_iterator); + + skipLeftSpaces(naptr_str, input_iterator); + + if (input_iterator < naptr_str.end()) { + string replacementStr(input_iterator, naptr_str.end()); + + replacement_ = Name(replacementStr); + } else { + isc_throw(InvalidRdataText, + "Invalid NAPTR text format, replacement field is missing"); + } +} + +NAPTR::NAPTR(const NAPTR& naptr): + Rdata(), order_(naptr.order_), preference_(naptr.preference_), + flags_(naptr.flags_), services_(naptr.services_), regexp_(naptr.regexp_), + replacement_(naptr.replacement_) +{ +} + +void +NAPTR::toWire(OutputBuffer& buffer) const { + toWireHelper(buffer); +} + +void +NAPTR::toWire(AbstractMessageRenderer& renderer) const { + toWireHelper(renderer); +} + +string +NAPTR::toText() const { + string result; + result += lexical_cast(order_); + result += " "; + result += lexical_cast(preference_); + result += " \""; + result += flags_; + result += "\" \""; + result += services_; + result += "\" \""; + result += regexp_; + result += "\" "; + result += replacement_.toText(); + return (result); +} + +int +NAPTR::compare(const Rdata& other) const { + const NAPTR other_naptr = dynamic_cast(other); + + if (order_ < other_naptr.order_) { + 
return (-1); + } else if (order_ > other_naptr.order_) { + return (1); + } + + if (preference_ < other_naptr.preference_) { + return (-1); + } else if (preference_ > other_naptr.preference_) { + return (1); + } + + if (flags_ < other_naptr.flags_) { + return (-1); + } else if (flags_ > other_naptr.flags_) { + return (1); + } + + if (services_ < other_naptr.services_) { + return (-1); + } else if (services_ > other_naptr.services_) { + return (1); + } + + if (regexp_ < other_naptr.regexp_) { + return (-1); + } else if (regexp_ > other_naptr.regexp_) { + return (1); + } + + return (compareNames(replacement_, other_naptr.replacement_)); +} + +uint16_t +NAPTR::getOrder() const { + return (order_); +} + +uint16_t +NAPTR::getPreference() const { + return (preference_); +} + +const std::string& +NAPTR::getFlags() const { + return (flags_); +} + +const std::string& +NAPTR::getServices() const { + return (services_); +} + +const std::string& +NAPTR::getRegexp() const { + return (regexp_); +} + +const Name& +NAPTR::getReplacement() const { + return (replacement_); +} + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/naptr_35.h b/src/lib/dns/rdata/generic/naptr_35.h new file mode 100644 index 0000000000..ca16b3c9f1 --- /dev/null +++ b/src/lib/dns/rdata/generic/naptr_35.h @@ -0,0 +1,83 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// BEGIN_HEADER_GUARD + +#include + +#include +#include +#include + +// BEGIN_ISC_NAMESPACE + +// BEGIN_COMMON_DECLARATIONS +// END_COMMON_DECLARATIONS + +// BEGIN_RDATA_NAMESPACE + +/// \brief \c NAPTR class represents the NAPTR rdata defined in +/// RFC2915, RFC2168 and RFC3403 +/// +/// This class implements the basic interfaces inherited from the +/// \c rdata::Rdata class, and provides accessors specific to the +/// NAPTR rdata. 
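+///
+/// A short usage sketch (added for illustration; the RDATA values are
+/// arbitrary):
+/// \code
+///    NAPTR naptr("100 10 \"S\" \"SIP+D2U\" \"\" _sip._udp.example.com.");
+///    naptr.getOrder();                 // 100
+///    naptr.getPreference();            // 10
+///    naptr.getFlags();                 // "S"
+///    naptr.getServices();              // "SIP+D2U"
+///    naptr.getReplacement().toText();  // "_sip._udp.example.com."
+/// \endcode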
+class NAPTR : public Rdata { +public: + // BEGIN_COMMON_MEMBERS + // END_COMMON_MEMBERS + + // NAPTR specific methods + uint16_t getOrder() const; + uint16_t getPreference() const; + const std::string& getFlags() const; + const std::string& getServices() const; + const std::string& getRegexp() const; + const Name& getReplacement() const; +private: + /// Helper template function for toWire() + /// + /// \param outputer Where to write data in + template + void toWireHelper(T& outputer) const { + outputer.writeUint16(order_); + outputer.writeUint16(preference_); + + outputer.writeUint8(flags_.size()); + outputer.writeData(flags_.c_str(), flags_.size()); + + outputer.writeUint8(services_.size()); + outputer.writeData(services_.c_str(), services_.size()); + + outputer.writeUint8(regexp_.size()); + outputer.writeData(regexp_.c_str(), regexp_.size()); + + replacement_.toWire(outputer); + } + + uint16_t order_; + uint16_t preference_; + std::string flags_; + std::string services_; + std::string regexp_; + Name replacement_; +}; + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE +// END_HEADER_GUARD + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/generic/rp_17.cc b/src/lib/dns/rdata/generic/rp_17.cc index b8b2ba21b1..781b55d6bf 100644 --- a/src/lib/dns/rdata/generic/rp_17.cc +++ b/src/lib/dns/rdata/generic/rp_17.cc @@ -24,6 +24,7 @@ using namespace std; using namespace isc::dns; +using namespace isc::util; // BEGIN_ISC_NAMESPACE // BEGIN_RDATA_NAMESPACE diff --git a/src/lib/dns/rdata/generic/rrsig_46.cc b/src/lib/dns/rdata/generic/rrsig_46.cc index 0c82406895..59ff030541 100644 --- a/src/lib/dns/rdata/generic/rrsig_46.cc +++ b/src/lib/dns/rdata/generic/rrsig_46.cc @@ -243,5 +243,10 @@ RRSIG::compare(const Rdata& other) const { } } +const RRType& +RRSIG::typeCovered() const { + return (impl_->covered_); +} + // END_RDATA_NAMESPACE // END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/rrsig_46.h b/src/lib/dns/rdata/generic/rrsig_46.h index 19acc40c81..b32c17f86b 100644 --- a/src/lib/dns/rdata/generic/rrsig_46.h +++ b/src/lib/dns/rdata/generic/rrsig_46.h @@ -38,6 +38,9 @@ public: // END_COMMON_MEMBERS RRSIG& operator=(const RRSIG& source); ~RRSIG(); + + // specialized methods + const RRType& typeCovered() const; private: RRSIGImpl* impl_; }; diff --git a/src/lib/dns/rdata/generic/spf_99.cc b/src/lib/dns/rdata/generic/spf_99.cc new file mode 100644 index 0000000000..492de98551 --- /dev/null +++ b/src/lib/dns/rdata/generic/spf_99.cc @@ -0,0 +1,87 @@ +// Copyright (C) 2010 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
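+// Like TXT (see txt_16.cc below), SPF delegates all of its text, wire and
+// comparison handling to the shared TXTLikeImpl template in txt_like.h; the
+// class itself only manages the pimpl pointer.  A minimal usage sketch
+// (illustration only, not part of the patch; the include path is an
+// assumption and the record content is arbitrary):
+//
+//   #include <dns/rdataclass.h>
+//   #include <iostream>
+//
+//   void example() {
+//       const isc::dns::rdata::generic::SPF spf("v=spf1 +mx -all");
+//       // prints the single character-string surrounded by double quotes:
+//       // "v=spf1 +mx -all"
+//       std::cout << spf.toText() << std::endl;
+//   }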
+ +#include +#include + +#include +#include + +#include +#include +#include +#include +#include + +using namespace std; +using namespace isc::util; + +// BEGIN_ISC_NAMESPACE +// BEGIN_RDATA_NAMESPACE + +#include + +SPF& +SPF::operator=(const SPF& source) { + if (impl_ == source.impl_) { + return (*this); + } + + SPFImpl* newimpl = new SPFImpl(*source.impl_); + delete impl_; + impl_ = newimpl; + + return (*this); +} + +SPF::~SPF() { + delete impl_; +} + +SPF::SPF(InputBuffer& buffer, size_t rdata_len) : + impl_(new SPFImpl(buffer, rdata_len)) +{} + +SPF::SPF(const std::string& txtstr) : + impl_(new SPFImpl(txtstr)) +{} + +SPF::SPF(const SPF& other) : + Rdata(), impl_(new SPFImpl(*other.impl_)) +{} + +void +SPF::toWire(OutputBuffer& buffer) const { + impl_->toWire(buffer); +} + +void +SPF::toWire(AbstractMessageRenderer& renderer) const { + impl_->toWire(renderer); +} + +string +SPF::toText() const { + return (impl_->toText()); +} + +int +SPF::compare(const Rdata& other) const { + const SPF& other_txt = dynamic_cast(other); + + return (impl_->compare(*other_txt.impl_)); +} + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/spf_99.h b/src/lib/dns/rdata/generic/spf_99.h new file mode 100644 index 0000000000..956adb9d64 --- /dev/null +++ b/src/lib/dns/rdata/generic/spf_99.h @@ -0,0 +1,52 @@ +// Copyright (C) 2010 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// BEGIN_HEADER_GUARD + +#include + +#include +#include + +#include + +// BEGIN_ISC_NAMESPACE + +// BEGIN_COMMON_DECLARATIONS +// END_COMMON_DECLARATIONS + +// BEGIN_RDATA_NAMESPACE + +template class TXTLikeImpl; + +class SPF : public Rdata { +public: + // BEGIN_COMMON_MEMBERS + // END_COMMON_MEMBERS + + SPF& operator=(const SPF& source); + ~SPF(); + +private: + typedef TXTLikeImpl SPFImpl; + SPFImpl* impl_; +}; + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE +// END_HEADER_GUARD + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/generic/txt_16.cc b/src/lib/dns/rdata/generic/txt_16.cc index ac2ba8a9f0..418bc05fbc 100644 --- a/src/lib/dns/rdata/generic/txt_16.cc +++ b/src/lib/dns/rdata/generic/txt_16.cc @@ -30,130 +30,57 @@ using namespace isc::util; // BEGIN_ISC_NAMESPACE // BEGIN_RDATA_NAMESPACE -TXT::TXT(InputBuffer& buffer, size_t rdata_len) { - if (rdata_len > MAX_RDLENGTH) { - isc_throw(InvalidRdataLength, "RDLENGTH too large: " << rdata_len); +#include + +TXT& +TXT::operator=(const TXT& source) { + if (impl_ == source.impl_) { + return (*this); } - if (rdata_len == 0) { // note that this couldn't happen in the loop. 
- isc_throw(DNSMessageFORMERR, - "Error in parsing TXT RDATA: 0-length character string"); - } + TXTImpl* newimpl = new TXTImpl(*source.impl_); + delete impl_; + impl_ = newimpl; - do { - const uint8_t len = buffer.readUint8(); - if (rdata_len < len + 1) { - isc_throw(DNSMessageFORMERR, - "Error in parsing TXT RDATA: character string length " - "is too large: " << static_cast(len)); - } - vector data(len + 1); - data[0] = len; - buffer.readData(&data[0] + 1, len); - string_list_.push_back(data); - - rdata_len -= (len + 1); - } while (rdata_len > 0); + return (*this); } -TXT::TXT(const std::string& txtstr) { - // TBD: this is a simple, incomplete implementation that only supports - // a single character-string. - - size_t length = txtstr.size(); - size_t pos_begin = 0; - - if (length > 1 && txtstr[0] == '"' && txtstr[length - 1] == '"') { - pos_begin = 1; - length -= 2; - } - - if (length > MAX_CHARSTRING_LEN) { - isc_throw(CharStringTooLong, "TXT RDATA construction from text: " - "string length is too long: " << length); - } - - // TBD: right now, we don't support escaped characters - if (txtstr.find('\\') != string::npos) { - isc_throw(InvalidRdataText, "TXT RDATA from text: " - "escaped character is currently not supported: " << txtstr); - } - - vector data; - data.reserve(length + 1); - data.push_back(length); - data.insert(data.end(), txtstr.begin() + pos_begin, - txtstr.begin() + pos_begin + length); - string_list_.push_back(data); +TXT::~TXT() { + delete impl_; } +TXT::TXT(InputBuffer& buffer, size_t rdata_len) : + impl_(new TXTImpl(buffer, rdata_len)) +{} + +TXT::TXT(const std::string& txtstr) : + impl_(new TXTImpl(txtstr)) +{} + TXT::TXT(const TXT& other) : - Rdata(), string_list_(other.string_list_) + Rdata(), impl_(new TXTImpl(*other.impl_)) {} void TXT::toWire(OutputBuffer& buffer) const { - for (vector >::const_iterator it = string_list_.begin(); - it != string_list_.end(); - ++it) - { - buffer.writeData(&(*it)[0], (*it).size()); - } + impl_->toWire(buffer); } void TXT::toWire(AbstractMessageRenderer& renderer) const { - for (vector >::const_iterator it = string_list_.begin(); - it != string_list_.end(); - ++it) - { - renderer.writeData(&(*it)[0], (*it).size()); - } + impl_->toWire(renderer); } string TXT::toText() const { - string s; - - // XXX: this implementation is not entirely correct. for example, it - // should escape double-quotes if they appear in the character string. - for (vector >::const_iterator it = string_list_.begin(); - it != string_list_.end(); - ++it) - { - if (!s.empty()) { - s.push_back(' '); - } - s.push_back('"'); - s.insert(s.end(), (*it).begin() + 1, (*it).end()); - s.push_back('"'); - } - - return (s); + return (impl_->toText()); } int TXT::compare(const Rdata& other) const { const TXT& other_txt = dynamic_cast(other); - // This implementation is not efficient. Revisit this (TBD). - OutputBuffer this_buffer(0); - toWire(this_buffer); - size_t this_len = this_buffer.getLength(); - - OutputBuffer other_buffer(0); - other_txt.toWire(other_buffer); - const size_t other_len = other_buffer.getLength(); - - const size_t cmplen = min(this_len, other_len); - const int cmp = memcmp(this_buffer.getData(), other_buffer.getData(), - cmplen); - if (cmp != 0) { - return (cmp); - } else { - return ((this_len == other_len) ? 0 : - (this_len < other_len) ? 
-1 : 1); - } + return (impl_->compare(*other_txt.impl_)); } // END_RDATA_NAMESPACE diff --git a/src/lib/dns/rdata/generic/txt_16.h b/src/lib/dns/rdata/generic/txt_16.h index b4c791f6ae..d99d69b75d 100644 --- a/src/lib/dns/rdata/generic/txt_16.h +++ b/src/lib/dns/rdata/generic/txt_16.h @@ -28,14 +28,19 @@ // BEGIN_RDATA_NAMESPACE +template class TXTLikeImpl; + class TXT : public Rdata { public: // BEGIN_COMMON_MEMBERS // END_COMMON_MEMBERS + + TXT& operator=(const TXT& source); + ~TXT(); + private: - /// Note: this is a prototype version; we may reconsider - /// this representation later. - std::vector > string_list_; + typedef TXTLikeImpl TXTImpl; + TXTImpl* impl_; }; // END_RDATA_NAMESPACE diff --git a/src/lib/dns/rdata/in_1/dhcid_49.cc b/src/lib/dns/rdata/in_1/dhcid_49.cc new file mode 100644 index 0000000000..0a9a23c792 --- /dev/null +++ b/src/lib/dns/rdata/in_1/dhcid_49.cc @@ -0,0 +1,145 @@ +// Copyright (C) 2010 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include + +#include + +#include + +#include +#include +#include +#include +#include +#include + +using namespace std; +using namespace isc::util; + +// BEGIN_ISC_NAMESPACE +// BEGIN_RDATA_NAMESPACE + +/// \brief Constructor from string. +/// +/// \param dhcid_str A base-64 representation of the DHCID binary data. +/// The data is considered to be opaque, but a sanity check is performed. +/// +/// Exceptions +/// +/// \c dhcid_str must be a valid BASE-64 string, otherwise an exception +/// of class \c isc::BadValue will be thrown; +/// the binary data should consist of at leat of 3 octets as per RFC4701: +/// < 2 octets > Identifier type code +/// < 1 octet > Digest type code +/// < n octets > Digest (length depends on digest type) +/// If the data is less than 3 octets (i.e. it cannot contain id type code and +/// digest type code), an exception of class \c InvalidRdataLength is thrown. +DHCID::DHCID(const string& dhcid_str) { + istringstream iss(dhcid_str); + stringbuf digestbuf; + + iss >> &digestbuf; + isc::util::encode::decodeHex(digestbuf.str(), digest_); + + // RFC4701 states DNS software should consider the RDATA section to + // be opaque, but there must be at least three bytes in the data: + // < 2 octets > Identifier type code + // < 1 octet > Digest type code + if (digest_.size() < 3) { + isc_throw(InvalidRdataLength, "DHCID length " << digest_.size() << + " too short, need at least 3 bytes"); + } +} + +/// \brief Constructor from wire-format data. +/// +/// \param buffer A buffer storing the wire format data. 
+/// \param rdata_len The length of the RDATA in bytes +/// +/// Exceptions +/// \c InvalidRdataLength is thrown if \c rdata_len is than minimum of 3 octets +DHCID::DHCID(InputBuffer& buffer, size_t rdata_len) { + if (rdata_len < 3) { + isc_throw(InvalidRdataLength, "DHCID length " << rdata_len << + " too short, need at least 3 bytes"); + } + + digest_.resize(rdata_len); + buffer.readData(&digest_[0], rdata_len); +} + +/// \brief The copy constructor. +/// +/// This trivial copy constructor never throws an exception. +DHCID::DHCID(const DHCID& other) : Rdata(), digest_(other.digest_) +{} + +/// \brief Render the \c DHCID in the wire format. +/// +/// \param buffer An output buffer to store the wire data. +void +DHCID::toWire(OutputBuffer& buffer) const { + buffer.writeData(&digest_[0], digest_.size()); +} + +/// \brief Render the \c DHCID in the wire format into a +/// \c MessageRenderer object. +/// +/// \param renderer DNS message rendering context that encapsulates the +/// output buffer in which the \c DHCID is to be stored. +void +DHCID::toWire(AbstractMessageRenderer& renderer) const { + renderer.writeData(&digest_[0], digest_.size()); +} + +/// \brief Convert the \c DHCID to a string. +/// +/// This method returns a \c std::string object representing the \c DHCID. +/// +/// \return A string representation of \c DHCID. +string +DHCID::toText() const { + return (isc::util::encode::encodeHex(digest_)); +} + +/// \brief Compare two instances of \c DHCID RDATA. +/// +/// See documentation in \c Rdata. +int +DHCID::compare(const Rdata& other) const { + const DHCID& other_dhcid = dynamic_cast(other); + + size_t this_len = digest_.size(); + size_t other_len = other_dhcid.digest_.size(); + size_t cmplen = min(this_len, other_len); + int cmp = memcmp(&digest_[0], &other_dhcid.digest_[0], cmplen); + if (cmp != 0) { + return (cmp); + } else { + return ((this_len == other_len) ? 0 : (this_len < other_len) ? -1 : 1); + } +} + +/// \brief Accessor method to get the DHCID digest +/// +/// \return A reference to the binary DHCID data +const std::vector& +DHCID::getDigest() const { + return (digest_); +} + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/in_1/dhcid_49.h b/src/lib/dns/rdata/in_1/dhcid_49.h new file mode 100644 index 0000000000..919395fba1 --- /dev/null +++ b/src/lib/dns/rdata/in_1/dhcid_49.h @@ -0,0 +1,58 @@ +// Copyright (C) 2010 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// BEGIN_HEADER_GUARD + +#include +#include + +#include + +// BEGIN_ISC_NAMESPACE + +// BEGIN_COMMON_DECLARATIONS +// END_COMMON_DECLARATIONS + +// BEGIN_RDATA_NAMESPACE + +/// \brief \c rdata::DHCID class represents the DHCID RDATA as defined %in +/// RFC4701. 
+/// +/// This class implements the basic interfaces inherited from the abstract +/// \c rdata::Rdata class, and provides trivial accessors specific to the +/// DHCID RDATA. +class DHCID : public Rdata { +public: + // BEGIN_COMMON_MEMBERS + // END_COMMON_MEMBERS + + /// \brief Return the digest. + /// + /// This method never throws an exception. + const std::vector& getDigest() const; + +private: + /// \brief Private data representation + /// + /// Opaque data at least 3 octets long as per RFC4701. + /// + std::vector digest_; +}; +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE +// END_HEADER_GUARD + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/in_1/srv_33.cc b/src/lib/dns/rdata/in_1/srv_33.cc new file mode 100644 index 0000000000..93b5d4d60a --- /dev/null +++ b/src/lib/dns/rdata/in_1/srv_33.cc @@ -0,0 +1,245 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include + +#include + +#include +#include + +#include +#include +#include +#include + +using namespace std; +using namespace isc::util; +using namespace isc::util::str; + +// BEGIN_ISC_NAMESPACE +// BEGIN_RDATA_NAMESPACE + +struct SRVImpl { + // straightforward representation of SRV RDATA fields + SRVImpl(uint16_t priority, uint16_t weight, uint16_t port, + const Name& target) : + priority_(priority), weight_(weight), port_(port), + target_(target) + {} + + uint16_t priority_; + uint16_t weight_; + uint16_t port_; + Name target_; +}; + +/// \brief Constructor from string. +/// +/// \c srv_str must be formatted as follows: +/// \code +/// \endcode +/// where +/// - , , and are an unsigned 16-bit decimal +/// integer. +/// - is a valid textual representation of domain name. +/// +/// An example of valid string is: +/// \code "1 5 1500 example.com." \endcode +/// +/// Exceptions +/// +/// If is not a valid domain name, a corresponding exception from +/// the \c Name class will be thrown; +/// if %any of the other bullet points above is not met, an exception of +/// class \c InvalidRdataText will be thrown. +/// This constructor internally involves resource allocation, and if it fails +/// a corresponding standard exception will be thrown. 
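+///
+/// A brief usage sketch based on the example above (added for illustration):
+/// \code
+///    SRV srv("1 5 1500 example.com.");
+///    srv.getPriority();          // 1
+///    srv.getWeight();            // 5
+///    srv.getPort();              // 1500
+///    srv.getTarget().toText();   // "example.com."
+/// \endcode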
+SRV::SRV(const string& srv_str) : + impl_(NULL) +{ + istringstream iss(srv_str); + + try { + const int32_t priority = tokenToNum(getToken(iss)); + const int32_t weight = tokenToNum(getToken(iss)); + const int32_t port = tokenToNum(getToken(iss)); + const Name targetname(getToken(iss)); + + if (!iss.eof()) { + isc_throw(InvalidRdataText, "Unexpected input for SRV RDATA: " << + srv_str); + } + + impl_ = new SRVImpl(priority, weight, port, targetname); + } catch (const StringTokenError& ste) { + isc_throw(InvalidRdataText, "Invalid SRV text: " << + ste.what() << ": " << srv_str); + } +} + +/// \brief Constructor from wire-format data. +/// +/// When a read operation on \c buffer fails (e.g., due to a corrupted +/// message) a corresponding exception from the \c InputBuffer class will +/// be thrown. +/// If the wire-format data does not end with a valid domain name, +/// a corresponding exception from the \c Name class will be thrown. +/// In addition, this constructor internally involves resource allocation, +/// and if it fails a corresponding standard exception will be thrown. +/// +/// According to RFC2782, the Target field must be a non compressed form +/// of domain name. But this implementation accepts a %SRV RR even if that +/// field is compressed as suggested in RFC3597. +/// +/// \param buffer A buffer storing the wire format data. +/// \param rdata_len The length of the RDATA in bytes, normally expected +/// to be the value of the RDLENGTH field of the corresponding RR. +SRV::SRV(InputBuffer& buffer, size_t rdata_len) { + if (rdata_len < 6) { + isc_throw(InvalidRdataLength, "SRV too short"); + } + + uint16_t priority = buffer.readUint16(); + uint16_t weight = buffer.readUint16(); + uint16_t port = buffer.readUint16(); + const Name targetname(buffer); + + impl_ = new SRVImpl(priority, weight, port, targetname); +} + +/// \brief The copy constructor. +/// +/// It internally allocates a resource, and if it fails a corresponding +/// standard exception will be thrown. +/// This constructor never throws an exception otherwise. +SRV::SRV(const SRV& source) : + Rdata(), impl_(new SRVImpl(*source.impl_)) +{} + +SRV& +SRV::operator=(const SRV& source) { + if (impl_ == source.impl_) { + return (*this); + } + + SRVImpl* newimpl = new SRVImpl(*source.impl_); + delete impl_; + impl_ = newimpl; + + return (*this); +} + +SRV::~SRV() { + delete impl_; +} + +/// \brief Convert the \c SRV to a string. +/// +/// The output of this method is formatted as described in the "from string" +/// constructor (\c SRV(const std::string&))). +/// +/// If internal resource allocation fails, a corresponding +/// standard exception will be thrown. +/// +/// \return A \c string object that represents the \c SRV object. +string +SRV::toText() const { + using namespace boost; + return (lexical_cast(impl_->priority_) + + " " + lexical_cast(impl_->weight_) + + " " + lexical_cast(impl_->port_) + + " " + impl_->target_.toText()); +} + +/// \brief Render the \c SRV in the wire format without name compression. +/// +/// If internal resource allocation fails, a corresponding +/// standard exception will be thrown. +/// This method never throws an exception otherwise. +/// +/// \param buffer An output buffer to store the wire data. +void +SRV::toWire(OutputBuffer& buffer) const { + buffer.writeUint16(impl_->priority_); + buffer.writeUint16(impl_->weight_); + buffer.writeUint16(impl_->port_); + impl_->target_.toWire(buffer); +} + +/// \brief Render the \c SRV in the wire format with taking into account +/// compression. 
+/// +/// As specified in RFC2782, the Target field (a domain name) will not be +/// compressed. However, the domain name could be a target of compression +/// of other compressible names (though pretty unlikely), the offset +/// information of the algorithm name may be recorded in \c renderer. +/// +/// If internal resource allocation fails, a corresponding +/// standard exception will be thrown. +/// This method never throws an exception otherwise. +/// +/// \param renderer DNS message rendering context that encapsulates the +/// output buffer and name compression information. +void +SRV::toWire(AbstractMessageRenderer& renderer) const { + renderer.writeUint16(impl_->priority_); + renderer.writeUint16(impl_->weight_); + renderer.writeUint16(impl_->port_); + renderer.writeName(impl_->target_, false); +} + +/// \brief Compare two instances of \c SRV RDATA. +/// +/// See documentation in \c Rdata. +int +SRV::compare(const Rdata& other) const { + const SRV& other_srv = dynamic_cast(other); + + if (impl_->priority_ != other_srv.impl_->priority_) { + return (impl_->priority_ < other_srv.impl_->priority_ ? -1 : 1); + } + if (impl_->weight_ != other_srv.impl_->weight_) { + return (impl_->weight_ < other_srv.impl_->weight_ ? -1 : 1); + } + if (impl_->port_ != other_srv.impl_->port_) { + return (impl_->port_ < other_srv.impl_->port_ ? -1 : 1); + } + + return (compareNames(impl_->target_, other_srv.impl_->target_)); +} + +uint16_t +SRV::getPriority() const { + return (impl_->priority_); +} + +uint16_t +SRV::getWeight() const { + return (impl_->weight_); +} + +uint16_t +SRV::getPort() const { + return (impl_->port_); +} + +const Name& +SRV::getTarget() const { + return (impl_->target_); +} + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/in_1/srv_33.h b/src/lib/dns/rdata/in_1/srv_33.h new file mode 100644 index 0000000000..32b7dc07b8 --- /dev/null +++ b/src/lib/dns/rdata/in_1/srv_33.h @@ -0,0 +1,93 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// BEGIN_HEADER_GUARD + +#include + +#include +#include + +// BEGIN_ISC_NAMESPACE + +// BEGIN_COMMON_DECLARATIONS +// END_COMMON_DECLARATIONS + +// BEGIN_RDATA_NAMESPACE + +struct SRVImpl; + +/// \brief \c rdata::SRV class represents the SRV RDATA as defined %in +/// RFC2782. +/// +/// This class implements the basic interfaces inherited from the abstract +/// \c rdata::Rdata class, and provides trivial accessors specific to the +/// SRV RDATA. +class SRV : public Rdata { +public: + // BEGIN_COMMON_MEMBERS + // END_COMMON_MEMBERS + + /// \brief Assignment operator. + /// + /// It internally allocates a resource, and if it fails a corresponding + /// standard exception will be thrown. + /// This operator never throws an exception otherwise. 
+ /// + /// This operator provides the strong exception guarantee: When an + /// exception is thrown the content of the assignment target will be + /// intact. + SRV& operator=(const SRV& source); + + /// \brief The destructor. + ~SRV(); + + /// + /// Specialized methods + /// + + /// \brief Return the value of the priority field. + /// + /// This method never throws an exception. + uint16_t getPriority() const; + + /// \brief Return the value of the weight field. + /// + /// This method never throws an exception. + uint16_t getWeight() const; + + /// \brief Return the value of the port field. + /// + /// This method never throws an exception. + uint16_t getPort() const; + + /// \brief Return the value of the target field. + /// + /// \return A reference to a \c Name class object corresponding to the + /// internal target name. + /// + /// This method never throws an exception. + const Name& getTarget() const; + +private: + SRVImpl* impl_; +}; + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE +// END_HEADER_GUARD + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/rdata/template.cc b/src/lib/dns/rdata/template.cc index d9f08ee91b..e85f82c336 100644 --- a/src/lib/dns/rdata/template.cc +++ b/src/lib/dns/rdata/template.cc @@ -18,6 +18,7 @@ #include #include #include +#include using namespace std; using namespace isc::util; diff --git a/src/lib/dns/rrtype-placeholder.h b/src/lib/dns/rrtype-placeholder.h index 1cb028c177..dad1b2b5ab 100644 --- a/src/lib/dns/rrtype-placeholder.h +++ b/src/lib/dns/rrtype-placeholder.h @@ -22,6 +22,11 @@ #include +// Solaris x86 defines DS in , which gets pulled in by Boost +#if defined(__sun) && defined(DS) +# undef DS +#endif + namespace isc { namespace util { class InputBuffer; diff --git a/src/lib/dns/tests/Makefile.am b/src/lib/dns/tests/Makefile.am index 3a249c1768..37946782bd 100644 --- a/src/lib/dns/tests/Makefile.am +++ b/src/lib/dns/tests/Makefile.am @@ -32,16 +32,21 @@ run_unittests_SOURCES += rdata_ns_unittest.cc rdata_soa_unittest.cc run_unittests_SOURCES += rdata_txt_unittest.cc rdata_mx_unittest.cc run_unittests_SOURCES += rdata_ptr_unittest.cc rdata_cname_unittest.cc run_unittests_SOURCES += rdata_dname_unittest.cc +run_unittests_SOURCES += rdata_afsdb_unittest.cc run_unittests_SOURCES += rdata_opt_unittest.cc run_unittests_SOURCES += rdata_dnskey_unittest.cc -run_unittests_SOURCES += rdata_ds_unittest.cc +run_unittests_SOURCES += rdata_ds_like_unittest.cc run_unittests_SOURCES += rdata_nsec_unittest.cc run_unittests_SOURCES += rdata_nsec3_unittest.cc run_unittests_SOURCES += rdata_nsecbitmap_unittest.cc run_unittests_SOURCES += rdata_nsec3param_unittest.cc run_unittests_SOURCES += rdata_rrsig_unittest.cc run_unittests_SOURCES += rdata_rp_unittest.cc +run_unittests_SOURCES += rdata_srv_unittest.cc +run_unittests_SOURCES += rdata_minfo_unittest.cc run_unittests_SOURCES += rdata_tsig_unittest.cc +run_unittests_SOURCES += rdata_naptr_unittest.cc +run_unittests_SOURCES += rdata_hinfo_unittest.cc run_unittests_SOURCES += rrset_unittest.cc rrsetlist_unittest.cc run_unittests_SOURCES += question_unittest.cc run_unittests_SOURCES += rrparamregistry_unittest.cc @@ -51,6 +56,7 @@ run_unittests_SOURCES += tsig_unittest.cc run_unittests_SOURCES += tsigerror_unittest.cc run_unittests_SOURCES += tsigkey_unittest.cc run_unittests_SOURCES += tsigrecord_unittest.cc +run_unittests_SOURCES += character_string_unittest.cc run_unittests_SOURCES += run_unittests.cc run_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) # We shouldn't need to include 
BOTAN_LDFLAGS here, but there diff --git a/src/lib/dns/tests/character_string_unittest.cc b/src/lib/dns/tests/character_string_unittest.cc new file mode 100644 index 0000000000..5fed9eb0a3 --- /dev/null +++ b/src/lib/dns/tests/character_string_unittest.cc @@ -0,0 +1,92 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + + +#include + +#include +#include +#include + +using isc::UnitTestUtil; + +using namespace std; +using namespace isc; +using namespace isc::dns; +using namespace isc::dns::characterstr; +using namespace isc::dns::rdata; + +namespace { + +class CharacterString { +public: + CharacterString(const string& str){ + string::const_iterator it = str.begin(); + characterStr_ = getNextCharacterString(str, it); + } + const string& str() const { return characterStr_; } +private: + string characterStr_; +}; + +TEST(CharacterStringTest, testNormalCase) { + CharacterString cstr1("foo"); + EXPECT_EQ(string("foo"), cstr1.str()); + + // Test that separated by space + CharacterString cstr2("foo bar"); + EXPECT_EQ(string("foo"), cstr2.str()); + + // Test that separated by quotes + CharacterString cstr3("\"foo bar\""); + EXPECT_EQ(string("foo bar"), cstr3.str()); + + // Test that not separate by quotes but ended with quotes + CharacterString cstr4("foo\""); + EXPECT_EQ(string("foo\""), cstr4.str()); +} + +TEST(CharacterStringTest, testBadCase) { + // The that started with quotes should also be ended + // with quotes + EXPECT_THROW(CharacterString cstr("\"foo"), InvalidRdataText); + + // The string length cannot exceed 255 characters + string str; + for (int i = 0; i < 257; ++i) { + str += 'A'; + } + EXPECT_THROW(CharacterString cstr(str), CharStringTooLong); +} + +TEST(CharacterStringTest, testEscapeCharacter) { + CharacterString cstr1("foo\\bar"); + EXPECT_EQ(string("foobar"), cstr1.str()); + + CharacterString cstr2("foo\\\\bar"); + EXPECT_EQ(string("foo\\bar"), cstr2.str()); + + CharacterString cstr3("fo\\111bar"); + EXPECT_EQ(string("foobar"), cstr3.str()); + + CharacterString cstr4("fo\\1112bar"); + EXPECT_EQ(string("foo2bar"), cstr4.str()); + + // There must be at least 3 digits followed by '\' + EXPECT_THROW(CharacterString cstr("foo\\98ar"), InvalidRdataText); + EXPECT_THROW(CharacterString cstr("foo\\9ar"), InvalidRdataText); + EXPECT_THROW(CharacterString cstr("foo\\98"), InvalidRdataText); +} + +} // namespace diff --git a/src/lib/dns/tests/message_unittest.cc b/src/lib/dns/tests/message_unittest.cc index c79ea2c414..f068791d4c 100644 --- a/src/lib/dns/tests/message_unittest.cc +++ b/src/lib/dns/tests/message_unittest.cc @@ -62,7 +62,6 @@ using namespace isc::dns::rdata; // const uint16_t Message::DEFAULT_MAX_UDPSIZE; -const Name test_name("test.example.com"); namespace isc { namespace util { @@ -79,7 +78,8 @@ const uint16_t TSIGContext::DEFAULT_FUDGE; 
namespace { class MessageTest : public ::testing::Test { protected: - MessageTest() : obuffer(0), renderer(obuffer), + MessageTest() : test_name("test.example.com"), obuffer(0), + renderer(obuffer), message_parse(Message::PARSE), message_render(Message::RENDER), bogus_section(static_cast( @@ -103,8 +103,9 @@ protected: "FAKEFAKEFAKEFAKE")); rrset_aaaa->addRRsig(rrset_rrsig); } - + static Question factoryFromFile(const char* datafile); + const Name test_name; OutputBuffer obuffer; MessageRenderer renderer; Message message_parse; @@ -114,18 +115,23 @@ protected: RRsetPtr rrset_aaaa; // AAAA RRset with one RDATA with RRSIG RRsetPtr rrset_rrsig; // RRSIG for the AAAA RRset TSIGContext tsig_ctx; + vector received_data; vector expected_data; - static void factoryFromFile(Message& message, const char* datafile); + void factoryFromFile(Message& message, const char* datafile, + Message::ParseOptions options = + Message::PARSE_DEFAULT); }; void -MessageTest::factoryFromFile(Message& message, const char* datafile) { - std::vector data; - UnitTestUtil::readWireData(datafile, data); +MessageTest::factoryFromFile(Message& message, const char* datafile, + Message::ParseOptions options) +{ + received_data.clear(); + UnitTestUtil::readWireData(datafile, received_data); - InputBuffer buffer(&data[0], data.size()); - message.fromWire(buffer); + InputBuffer buffer(&received_data[0], received_data.size()); + message.fromWire(buffer, options); } TEST_F(MessageTest, headerFlag) { @@ -173,7 +179,6 @@ TEST_F(MessageTest, headerFlag) { EXPECT_THROW(message_parse.setHeaderFlag(Message::HEADERFLAG_QR), InvalidMessageOperation); } - TEST_F(MessageTest, getEDNS) { EXPECT_FALSE(message_parse.getEDNS()); // by default EDNS isn't set @@ -530,7 +535,46 @@ TEST_F(MessageTest, appendSection) { } +TEST_F(MessageTest, parseHeader) { + received_data.clear(); + UnitTestUtil::readWireData("message_fromWire1", received_data); + + // parseHeader() isn't allowed in the render mode. + InputBuffer buffer(&received_data[0], received_data.size()); + EXPECT_THROW(message_render.parseHeader(buffer), InvalidMessageOperation); + + message_parse.parseHeader(buffer); + EXPECT_EQ(0x1035, message_parse.getQid()); + EXPECT_EQ(Opcode::QUERY(), message_parse.getOpcode()); + EXPECT_EQ(Rcode::NOERROR(), message_parse.getRcode()); + EXPECT_TRUE(message_parse.getHeaderFlag(Message::HEADERFLAG_QR)); + EXPECT_TRUE(message_parse.getHeaderFlag(Message::HEADERFLAG_AA)); + EXPECT_FALSE(message_parse.getHeaderFlag(Message::HEADERFLAG_TC)); + EXPECT_TRUE(message_parse.getHeaderFlag(Message::HEADERFLAG_RD)); + EXPECT_FALSE(message_parse.getHeaderFlag(Message::HEADERFLAG_RA)); + EXPECT_FALSE(message_parse.getHeaderFlag(Message::HEADERFLAG_AD)); + EXPECT_FALSE(message_parse.getHeaderFlag(Message::HEADERFLAG_CD)); + EXPECT_EQ(1, message_parse.getRRCount(Message::SECTION_QUESTION)); + EXPECT_EQ(2, message_parse.getRRCount(Message::SECTION_ANSWER)); + EXPECT_EQ(0, message_parse.getRRCount(Message::SECTION_AUTHORITY)); + EXPECT_EQ(0, message_parse.getRRCount(Message::SECTION_ADDITIONAL)); + + // Only the header part should have been examined. 
+ EXPECT_EQ(12, buffer.getPosition()); // 12 = size of the header section + EXPECT_TRUE(message_parse.beginQuestion() == message_parse.endQuestion()); + EXPECT_TRUE(message_parse.beginSection(Message::SECTION_ANSWER) == + message_parse.endSection(Message::SECTION_ANSWER)); + EXPECT_TRUE(message_parse.beginSection(Message::SECTION_AUTHORITY) == + message_parse.endSection(Message::SECTION_AUTHORITY)); + EXPECT_TRUE(message_parse.beginSection(Message::SECTION_ADDITIONAL) == + message_parse.endSection(Message::SECTION_ADDITIONAL)); +} + TEST_F(MessageTest, fromWire) { + // fromWire() isn't allowed in the render mode. + EXPECT_THROW(factoryFromFile(message_render, "message_fromWire1"), + InvalidMessageOperation); + factoryFromFile(message_parse, "message_fromWire1"); EXPECT_EQ(0x1035, message_parse.getQid()); EXPECT_EQ(Opcode::QUERY(), message_parse.getOpcode()); @@ -562,6 +606,87 @@ TEST_F(MessageTest, fromWire) { EXPECT_TRUE(it->isLast()); } +TEST_F(MessageTest, fromWireShortBuffer) { + // We trim a valid message (ending with an SOA RR) for one byte. + // fromWire() should throw an exception while parsing the trimmed RR. + UnitTestUtil::readWireData("message_fromWire22.wire", received_data); + InputBuffer buffer(&received_data[0], received_data.size() - 1); + EXPECT_THROW(message_parse.fromWire(buffer), InvalidBufferPosition); +} + +TEST_F(MessageTest, fromWireCombineRRs) { + // This message contains 3 RRs in the answer section in the order of + // A, AAAA, A types. fromWire() should combine the two A RRs into a + // single RRset by default. + factoryFromFile(message_parse, "message_fromWire19.wire"); + + RRsetIterator it = message_parse.beginSection(Message::SECTION_ANSWER); + RRsetIterator it_end = message_parse.endSection(Message::SECTION_ANSWER); + ASSERT_TRUE(it != it_end); + EXPECT_EQ(RRType::A(), (*it)->getType()); + EXPECT_EQ(2, (*it)->getRdataCount()); + + ++it; + ASSERT_TRUE(it != it_end); + EXPECT_EQ(RRType::AAAA(), (*it)->getType()); + EXPECT_EQ(1, (*it)->getRdataCount()); +} + +// A helper function for a test pattern commonly used in several tests below. +void +preserveRRCheck(const Message& message, Message::Section section) { + RRsetIterator it = message.beginSection(section); + RRsetIterator it_end = message.endSection(section); + ASSERT_TRUE(it != it_end); + EXPECT_EQ(RRType::A(), (*it)->getType()); + EXPECT_EQ(1, (*it)->getRdataCount()); + EXPECT_EQ("192.0.2.1", (*it)->getRdataIterator()->getCurrent().toText()); + + ++it; + ASSERT_TRUE(it != it_end); + EXPECT_EQ(RRType::AAAA(), (*it)->getType()); + EXPECT_EQ(1, (*it)->getRdataCount()); + EXPECT_EQ("2001:db8::1", (*it)->getRdataIterator()->getCurrent().toText()); + + ++it; + ASSERT_TRUE(it != it_end); + EXPECT_EQ(RRType::A(), (*it)->getType()); + EXPECT_EQ(1, (*it)->getRdataCount()); + EXPECT_EQ("192.0.2.2", (*it)->getRdataIterator()->getCurrent().toText()); +} + +TEST_F(MessageTest, fromWirePreserveAnswer) { + // Using the same data as the previous test, but specify the PRESERVE_ORDER + // option. The received order of RRs should be preserved, and each RR + // should be stored in a single RRset. + factoryFromFile(message_parse, "message_fromWire19.wire", + Message::PRESERVE_ORDER); + { + SCOPED_TRACE("preserve answer RRs"); + preserveRRCheck(message_parse, Message::SECTION_ANSWER); + } +} + +TEST_F(MessageTest, fromWirePreserveAuthority) { + // Same for the previous test, but for the authority section. 
+ factoryFromFile(message_parse, "message_fromWire20.wire", + Message::PRESERVE_ORDER); + { + SCOPED_TRACE("preserve authority RRs"); + preserveRRCheck(message_parse, Message::SECTION_AUTHORITY); + } +} + +TEST_F(MessageTest, fromWirePreserveAdditional) { + // Same for the previous test, but for the additional section. + factoryFromFile(message_parse, "message_fromWire21.wire", + Message::PRESERVE_ORDER); + { + SCOPED_TRACE("preserve additional RRs"); + preserveRRCheck(message_parse, Message::SECTION_ADDITIONAL); + } +} + TEST_F(MessageTest, EDNS0ExtRcode) { // Extended Rcode = BADVERS factoryFromFile(message_parse, "message_fromWire10.wire"); @@ -618,15 +743,43 @@ testGetTime() { return (NOW); } +// bit-wise constant flags to configure DNS header flags for test +// messages. +const unsigned int QR_FLAG = 0x1; +const unsigned int AA_FLAG = 0x2; +const unsigned int RD_FLAG = 0x4; + void commonTSIGToWireCheck(Message& message, MessageRenderer& renderer, - TSIGContext& tsig_ctx, const char* const expected_file) + TSIGContext& tsig_ctx, const char* const expected_file, + unsigned int message_flags = RD_FLAG, + RRType qtype = RRType::A(), + const vector* answer_data = NULL) { message.setOpcode(Opcode::QUERY()); message.setRcode(Rcode::NOERROR()); - message.setHeaderFlag(Message::HEADERFLAG_RD, true); + if ((message_flags & QR_FLAG) != 0) { + message.setHeaderFlag(Message::HEADERFLAG_QR); + } + if ((message_flags & AA_FLAG) != 0) { + message.setHeaderFlag(Message::HEADERFLAG_AA); + } + if ((message_flags & RD_FLAG) != 0) { + message.setHeaderFlag(Message::HEADERFLAG_RD); + } message.addQuestion(Question(Name("www.example.com"), RRClass::IN(), - RRType::A())); + qtype)); + + if (answer_data != NULL) { + RRsetPtr ans_rrset(new RRset(Name("www.example.com"), RRClass::IN(), + qtype, RRTTL(86400))); + for (vector::const_iterator it = answer_data->begin(); + it != answer_data->end(); + ++it) { + ans_rrset->addRdata(createRdata(qtype, RRClass::IN(), *it)); + } + message.addRRset(Message::SECTION_ANSWER, ans_rrset); + } message.toWire(renderer, tsig_ctx); vector expected_data; @@ -670,6 +823,182 @@ TEST_F(MessageTest, toWireWithEDNSAndTSIG) { } } +// Some of the following tests involve truncation. We use the query name +// "www.example.com" and some TXT question/answers. The length of the +// header and question will be 33 bytes. If we also try to include a +// TSIG of the same key name (not compressed) with HMAC-MD5, the TSIG RR +// will be 85 bytes. + +// A long TXT RDATA. With a fully compressed owner name, the corresponding +// RR will be 268 bytes. +const char* const long_txt1 = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcde"; + +// With a fully compressed owner name, the corresponding RR will be 212 bytes. +// It should result in truncation even without TSIG (33 + 268 + 212 = 513) +const char* const long_txt2 = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456"; + +// With a fully compressed owner name, the corresponding RR will be 127 bytes. 
+// So, it can fit in the standard 512 bytes with txt1 and without TSIG, but +// adding a TSIG would result in truncation (33 + 268 + 127 + 85 = 513) +const char* const long_txt3 = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01"; + +// This is 1 byte shorter than txt3, which will result in a possible longest +// message containing answer RRs and TSIG. +const char* const long_txt4 = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0"; + +// Example output generated by +// "dig -y www.example.com:SFuWd/q99SzF8Yzd1QbB9g== www.example.com txt +// QID: 0x22c2 +// Time Signed: 0x00004e179212 +TEST_F(MessageTest, toWireTSIGTruncation) { + isc::util::detail::gettimeFunction = testGetTime<0x4e179212>; + + // Verify a validly signed query so that we can use the TSIG context + + factoryFromFile(message_parse, "message_fromWire17.wire"); + EXPECT_EQ(TSIGError::NOERROR(), + tsig_ctx.verify(message_parse.getTSIGRecord(), + &received_data[0], received_data.size())); + + message_render.setQid(0x22c2); + vector answer_data; + answer_data.push_back(long_txt1); + answer_data.push_back(long_txt2); + { + SCOPED_TRACE("Message sign with TSIG and TC bit on"); + commonTSIGToWireCheck(message_render, renderer, tsig_ctx, + "message_toWire4.wire", + QR_FLAG|AA_FLAG|RD_FLAG, + RRType::TXT(), &answer_data); + } +} + +TEST_F(MessageTest, toWireTSIGTruncation2) { + // Similar to the previous test, but without TSIG it wouldn't cause + // truncation. + isc::util::detail::gettimeFunction = testGetTime<0x4e179212>; + factoryFromFile(message_parse, "message_fromWire17.wire"); + EXPECT_EQ(TSIGError::NOERROR(), + tsig_ctx.verify(message_parse.getTSIGRecord(), + &received_data[0], received_data.size())); + + message_render.setQid(0x22c2); + vector answer_data; + answer_data.push_back(long_txt1); + answer_data.push_back(long_txt3); + { + SCOPED_TRACE("Message sign with TSIG and TC bit on (2)"); + commonTSIGToWireCheck(message_render, renderer, tsig_ctx, + "message_toWire4.wire", + QR_FLAG|AA_FLAG|RD_FLAG, + RRType::TXT(), &answer_data); + } +} + +TEST_F(MessageTest, toWireTSIGTruncation3) { + // Similar to previous ones, but truncation occurs due to too many + // Questions (very unusual, but not necessarily illegal). + + // We are going to create a message starting with a standard + // header (12 bytes) and multiple questions in the Question + // section of the same owner name (changing the RRType, just so + // that it would be the form that would be accepted by the BIND 9 + // parser). The first Question is 21 bytes in length, and the subsequent + // ones are 6 bytes. We'll also use a TSIG whose size is 85 bytes. + // Up to 66 questions can fit in the standard 512-byte buffer + // (12 + 21 + 6 * 65 + 85 = 508). If we try to add one more it would + // result in truncation. + message_render.setOpcode(Opcode::QUERY()); + message_render.setRcode(Rcode::NOERROR()); + for (int i = 1; i <= 67; ++i) { + message_render.addQuestion(Question(Name("www.example.com"), + RRClass::IN(), RRType(i))); + } + message_render.toWire(renderer, tsig_ctx); + + // Check the rendered data by parsing it. We only check it has the + // TC bit on, has the correct number of questions, and has a TSIG RR. + // Checking the signature wouldn't be necessary for this rare case + // scenario. 
+ InputBuffer buffer(renderer.getData(), renderer.getLength()); + message_parse.fromWire(buffer); + EXPECT_TRUE(message_parse.getHeaderFlag(Message::HEADERFLAG_TC)); + // Note that the number of questions are 66, not 67 as we tried to add. + EXPECT_EQ(66, message_parse.getRRCount(Message::SECTION_QUESTION)); + EXPECT_TRUE(message_parse.getTSIGRecord() != NULL); +} + +TEST_F(MessageTest, toWireTSIGNoTruncation) { + // A boundary case that shouldn't cause truncation: the resulting + // response message with a TSIG will be 512 bytes long. + isc::util::detail::gettimeFunction = testGetTime<0x4e17b38d>; + factoryFromFile(message_parse, "message_fromWire18.wire"); + EXPECT_EQ(TSIGError::NOERROR(), + tsig_ctx.verify(message_parse.getTSIGRecord(), + &received_data[0], received_data.size())); + + message_render.setQid(0xd6e2); + vector answer_data; + answer_data.push_back(long_txt1); + answer_data.push_back(long_txt4); + { + SCOPED_TRACE("Message sign with TSIG, no truncation"); + commonTSIGToWireCheck(message_render, renderer, tsig_ctx, + "message_toWire5.wire", + QR_FLAG|AA_FLAG|RD_FLAG, + RRType::TXT(), &answer_data); + } +} + +// This is a buggy renderer for testing. It behaves like the straightforward +// MessageRenderer, but once it has some data, its setLengthLimit() ignores +// the given parameter and resets the limit to the current length, making +// subsequent insertion result in truncation, which would make TSIG RR +// rendering fail unexpectedly in the test that follows. +class BadRenderer : public MessageRenderer { +public: + BadRenderer(isc::util::OutputBuffer& buffer) : + MessageRenderer(buffer) + {} + virtual void setLengthLimit(size_t len) { + if (getLength() > 0) { + MessageRenderer::setLengthLimit(getLength()); + } else { + MessageRenderer::setLengthLimit(len); + } + } +}; + +TEST_F(MessageTest, toWireTSIGLengthErrors) { + // specify an unusual short limit that wouldn't be able to hold + // the TSIG. + renderer.setLengthLimit(tsig_ctx.getTSIGLength() - 1); + // Use commonTSIGToWireCheck() only to call toWire() with otherwise valid + // conditions. The checks inside it don't matter because we expect an + // exception before any of the checks. + EXPECT_THROW(commonTSIGToWireCheck(message_render, renderer, tsig_ctx, + "message_toWire2.wire"), + InvalidParameter); + + // This one is large enough for TSIG, but the remaining limit isn't + // even enough for the Header section. + renderer.clear(); + message_render.clear(Message::RENDER); + renderer.setLengthLimit(tsig_ctx.getTSIGLength() + 1); + EXPECT_THROW(commonTSIGToWireCheck(message_render, renderer, tsig_ctx, + "message_toWire2.wire"), + InvalidParameter); + + // Trying to render a message with TSIG using a buggy renderer. 
+ obuffer.clear(); + BadRenderer bad_renderer(obuffer); + bad_renderer.setLengthLimit(512); + message_render.clear(Message::RENDER); + EXPECT_THROW(commonTSIGToWireCheck(message_render, bad_renderer, tsig_ctx, + "message_toWire2.wire"), + Unexpected); +} + TEST_F(MessageTest, toWireWithoutOpcode) { message_render.setRcode(Rcode::NOERROR()); EXPECT_THROW(message_render.toWire(renderer), InvalidMessageOperation); diff --git a/src/lib/dns/tests/question_unittest.cc b/src/lib/dns/tests/question_unittest.cc index 25fd75b4c6..1d483f2591 100644 --- a/src/lib/dns/tests/question_unittest.cc +++ b/src/lib/dns/tests/question_unittest.cc @@ -106,6 +106,22 @@ TEST_F(QuestionTest, toWireRenderer) { obuffer.getLength(), &wiredata[0], wiredata.size()); } +TEST_F(QuestionTest, toWireTruncated) { + // If the available length in the renderer is too small, it would require + // truncation. This won't happen in normal cases, but protocol wise it + // could still happen if and when we support some (possibly future) opcode + // that allows multiple questions. + + // Set the length limit to the qname length so that the whole question + // would request truncated + renderer.setLengthLimit(example_name1.getLength()); + + EXPECT_FALSE(renderer.isTruncated()); // check pre-render condition + EXPECT_EQ(0, test_question1.toWire(renderer)); + EXPECT_TRUE(renderer.isTruncated()); + EXPECT_EQ(0, renderer.getLength()); // renderer shouldn't have any data +} + // test operator<<. We simply confirm it appends the result of toText(). TEST_F(QuestionTest, LeftShiftOperator) { ostringstream oss; diff --git a/src/lib/dns/tests/rdata_afsdb_unittest.cc b/src/lib/dns/tests/rdata_afsdb_unittest.cc new file mode 100644 index 0000000000..7df8d83659 --- /dev/null +++ b/src/lib/dns/tests/rdata_afsdb_unittest.cc @@ -0,0 +1,210 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
+ +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +using isc::UnitTestUtil; +using namespace std; +using namespace isc::dns; +using namespace isc::util; +using namespace isc::dns::rdata; + +const char* const afsdb_text = "1 afsdb.example.com."; +const char* const afsdb_text2 = "0 root.example.com."; +const char* const too_long_label("012345678901234567890123456789" + "0123456789012345678901234567890123"); + +namespace { +class Rdata_AFSDB_Test : public RdataTest { +protected: + Rdata_AFSDB_Test() : + rdata_afsdb(string(afsdb_text)), rdata_afsdb2(string(afsdb_text2)) + {} + + const generic::AFSDB rdata_afsdb; + const generic::AFSDB rdata_afsdb2; + vector expected_wire; +}; + + +TEST_F(Rdata_AFSDB_Test, createFromText) { + EXPECT_EQ(1, rdata_afsdb.getSubtype()); + EXPECT_EQ(Name("afsdb.example.com."), rdata_afsdb.getServer()); + + EXPECT_EQ(0, rdata_afsdb2.getSubtype()); + EXPECT_EQ(Name("root.example.com."), rdata_afsdb2.getServer()); +} + +TEST_F(Rdata_AFSDB_Test, badText) { + // subtype is too large + EXPECT_THROW(const generic::AFSDB rdata_afsdb("99999999 afsdb.example.com."), + InvalidRdataText); + // incomplete text + EXPECT_THROW(const generic::AFSDB rdata_afsdb("10"), InvalidRdataText); + EXPECT_THROW(const generic::AFSDB rdata_afsdb("SPOON"), InvalidRdataText); + EXPECT_THROW(const generic::AFSDB rdata_afsdb("1root.example.com."), InvalidRdataText); + // number of fields (must be 2) is incorrect + EXPECT_THROW(const generic::AFSDB rdata_afsdb("10 afsdb. example.com."), + InvalidRdataText); + // bad name + EXPECT_THROW(const generic::AFSDB rdata_afsdb("1 afsdb.example.com." + + string(too_long_label)), TooLongLabel); +} + +TEST_F(Rdata_AFSDB_Test, assignment) { + generic::AFSDB copy((string(afsdb_text2))); + copy = rdata_afsdb; + EXPECT_EQ(0, copy.compare(rdata_afsdb)); + + // Check if the copied data is valid even after the original is deleted + generic::AFSDB* copy2 = new generic::AFSDB(rdata_afsdb); + generic::AFSDB copy3((string(afsdb_text2))); + copy3 = *copy2; + delete copy2; + EXPECT_EQ(0, copy3.compare(rdata_afsdb)); + + // Self assignment + copy = copy; + EXPECT_EQ(0, copy.compare(rdata_afsdb)); +} + +TEST_F(Rdata_AFSDB_Test, createFromWire) { + // uncompressed names + EXPECT_EQ(0, rdata_afsdb.compare( + *rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire1.wire"))); + // compressed name + EXPECT_EQ(0, rdata_afsdb.compare( + *rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire2.wire", 13))); + // RDLENGTH is too short + EXPECT_THROW(rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire3.wire"), + InvalidRdataLength); + // RDLENGTH is too long + EXPECT_THROW(rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire4.wire"), + InvalidRdataLength); + // bogus server name, the error should be detected in the name + // constructor + EXPECT_THROW(rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire5.wire"), + DNSMessageFORMERR); +} + +TEST_F(Rdata_AFSDB_Test, toWireBuffer) { + // construct actual data + rdata_afsdb.toWire(obuffer); + + // construct expected data + UnitTestUtil::readWireData("rdata_afsdb_toWire1.wire", expected_wire); + + // then compare them + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + obuffer.getData(), obuffer.getLength(), + &expected_wire[0], expected_wire.size()); + + // clear buffer for the next test + obuffer.clear(); + + // construct actual data + 
Name("example.com.").toWire(obuffer); + rdata_afsdb2.toWire(obuffer); + + // construct expected data + UnitTestUtil::readWireData("rdata_afsdb_toWire2.wire", expected_wire); + + // then compare them + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + obuffer.getData(), obuffer.getLength(), + &expected_wire[0], expected_wire.size()); +} + +TEST_F(Rdata_AFSDB_Test, toWireRenderer) { + // similar to toWireBuffer, but names in RDATA could be compressed due to + // preceding names. Actually they must not be compressed according to + // RFC3597, and this test checks that. + + // construct actual data + rdata_afsdb.toWire(renderer); + + // construct expected data + UnitTestUtil::readWireData("rdata_afsdb_toWire1.wire", expected_wire); + + // then compare them + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + renderer.getData(), renderer.getLength(), + &expected_wire[0], expected_wire.size()); + + // clear renderer for the next test + renderer.clear(); + + // construct actual data + Name("example.com.").toWire(obuffer); + rdata_afsdb2.toWire(renderer); + + // construct expected data + UnitTestUtil::readWireData("rdata_afsdb_toWire2.wire", expected_wire); + + // then compare them + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + renderer.getData(), renderer.getLength(), + &expected_wire[0], expected_wire.size()); +} + +TEST_F(Rdata_AFSDB_Test, toText) { + EXPECT_EQ(afsdb_text, rdata_afsdb.toText()); + EXPECT_EQ(afsdb_text2, rdata_afsdb2.toText()); +} + +TEST_F(Rdata_AFSDB_Test, compare) { + // check reflexivity + EXPECT_EQ(0, rdata_afsdb.compare(rdata_afsdb)); + + // name must be compared in case-insensitive manner + EXPECT_EQ(0, rdata_afsdb.compare(generic::AFSDB("1 " + "AFSDB.example.com."))); + + const generic::AFSDB small1("10 afsdb.example.com"); + const generic::AFSDB large1("65535 afsdb.example.com"); + const generic::AFSDB large2("256 afsdb.example.com"); + + // confirm these are compared as unsigned values + EXPECT_GT(0, rdata_afsdb.compare(large1)); + EXPECT_LT(0, large1.compare(rdata_afsdb)); + + // confirm these are compared in network byte order + EXPECT_GT(0, small1.compare(large2)); + EXPECT_LT(0, large2.compare(small1)); + + // another AFSDB whose server name is larger than that of rdata_afsdb. + const generic::AFSDB large3("256 zzzzz.example.com"); + EXPECT_GT(0, large2.compare(large3)); + EXPECT_LT(0, large3.compare(large2)); + + // comparison attempt between incompatible RR types should be rejected + EXPECT_THROW(rdata_afsdb.compare(*rdata_nomatch), bad_cast); +} +} diff --git a/src/lib/dns/tests/rdata_ds_like_unittest.cc b/src/lib/dns/tests/rdata_ds_like_unittest.cc new file mode 100644 index 0000000000..9b294460d6 --- /dev/null +++ b/src/lib/dns/tests/rdata_ds_like_unittest.cc @@ -0,0 +1,171 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include + +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +using isc::UnitTestUtil; +using namespace std; +using namespace isc::dns; +using namespace isc::util; +using namespace isc::dns::rdata; + +namespace { +// hacks to make templates work +template +class RRTYPE : public RRType { +public: + RRTYPE(); +}; + +template<> RRTYPE::RRTYPE() : RRType(RRType::DS()) {} +template<> RRTYPE::RRTYPE() : RRType(RRType::DLV()) {} + +template +class Rdata_DS_LIKE_Test : public RdataTest { +protected: + static DS_LIKE const rdata_ds_like; +}; + +string ds_like_txt("12892 5 2 F1E184C0E1D615D20EB3C223ACED3B03C773DD952D" + "5F0EB5C777586DE18DA6B5"); + +template +DS_LIKE const Rdata_DS_LIKE_Test::rdata_ds_like(ds_like_txt); + +// The list of types we want to test. +typedef testing::Types Implementations; + +TYPED_TEST_CASE(Rdata_DS_LIKE_Test, Implementations); + +TYPED_TEST(Rdata_DS_LIKE_Test, toText_DS_LIKE) { + EXPECT_EQ(ds_like_txt, this->rdata_ds_like.toText()); +} + +TYPED_TEST(Rdata_DS_LIKE_Test, badText_DS_LIKE) { + EXPECT_THROW(const TypeParam ds_like2("99999 5 2 BEEF"), InvalidRdataText); + EXPECT_THROW(const TypeParam ds_like2("11111 555 2 BEEF"), + InvalidRdataText); + EXPECT_THROW(const TypeParam ds_like2("11111 5 22222 BEEF"), + InvalidRdataText); + EXPECT_THROW(const TypeParam ds_like2("11111 5 2"), InvalidRdataText); + EXPECT_THROW(const TypeParam ds_like2("GARBAGE IN"), InvalidRdataText); + // no space between the digest type and the digest. 
+ EXPECT_THROW(const TypeParam ds_like2( + "12892 5 2F1E184C0E1D615D20EB3C223ACED3B03C773DD952D" + "5F0EB5C777586DE18DA6B5"), InvalidRdataText); +} + +TYPED_TEST(Rdata_DS_LIKE_Test, createFromWire_DS_LIKE) { + EXPECT_EQ(0, this->rdata_ds_like.compare( + *this->rdataFactoryFromFile(RRTYPE(), RRClass::IN(), + "rdata_ds_fromWire"))); +} + +TYPED_TEST(Rdata_DS_LIKE_Test, assignment_DS_LIKE) { + TypeParam copy((string(ds_like_txt))); + copy = this->rdata_ds_like; + EXPECT_EQ(0, copy.compare(this->rdata_ds_like)); + + // Check if the copied data is valid even after the original is deleted + TypeParam* copy2 = new TypeParam(this->rdata_ds_like); + TypeParam copy3((string(ds_like_txt))); + copy3 = *copy2; + delete copy2; + EXPECT_EQ(0, copy3.compare(this->rdata_ds_like)); + + // Self assignment + copy = copy; + EXPECT_EQ(0, copy.compare(this->rdata_ds_like)); +} + +TYPED_TEST(Rdata_DS_LIKE_Test, getTag_DS_LIKE) { + EXPECT_EQ(12892, this->rdata_ds_like.getTag()); +} + +TYPED_TEST(Rdata_DS_LIKE_Test, toWireRenderer) { + Rdata_DS_LIKE_Test::renderer.skip(2); + TypeParam rdata_ds_like(ds_like_txt); + rdata_ds_like.toWire(this->renderer); + + vector data; + UnitTestUtil::readWireData("rdata_ds_fromWire", data); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + static_cast + (this->obuffer.getData()) + 2, + this->obuffer.getLength() - 2, + &data[2], data.size() - 2); +} + +TYPED_TEST(Rdata_DS_LIKE_Test, toWireBuffer) { + TypeParam rdata_ds_like(ds_like_txt); + rdata_ds_like.toWire(this->obuffer); +} + +string ds_like_txt1("12892 5 2 F1E184C0E1D615D20EB3C223ACED3B03C773DD952D" + "5F0EB5C777586DE18DA6B5"); +// different tag +string ds_like_txt2("12893 5 2 F1E184C0E1D615D20EB3C223ACED3B03C773DD952D" + "5F0EB5C777586DE18DA6B5"); +// different algorithm +string ds_like_txt3("12892 6 2 F1E184C0E1D615D20EB3C223ACED3B03C773DD952D" + "5F0EB5C777586DE18DA6B5"); +// different digest type +string ds_like_txt4("12892 5 3 F1E184C0E1D615D20EB3C223ACED3B03C773DD952D" + "5F0EB5C777586DE18DA6B5"); +// different digest +string ds_like_txt5("12892 5 2 F2E184C0E1D615D20EB3C223ACED3B03C773DD952D" + "5F0EB5C777586DE18DA6B5"); +// different digest length +string ds_like_txt6("12892 5 2 F2E184C0E1D615D20EB3C223ACED3B03C773DD952D" + "5F0EB5C777586DE18DA6B555"); + +TYPED_TEST(Rdata_DS_LIKE_Test, compare) { + // trivial case: self equivalence + EXPECT_EQ(0, TypeParam(ds_like_txt).compare(TypeParam(ds_like_txt))); + + // non-equivalence tests + EXPECT_LT(TypeParam(ds_like_txt1).compare(TypeParam(ds_like_txt2)), 0); + EXPECT_GT(TypeParam(ds_like_txt2).compare(TypeParam(ds_like_txt1)), 0); + + EXPECT_LT(TypeParam(ds_like_txt1).compare(TypeParam(ds_like_txt3)), 0); + EXPECT_GT(TypeParam(ds_like_txt3).compare(TypeParam(ds_like_txt1)), 0); + + EXPECT_LT(TypeParam(ds_like_txt1).compare(TypeParam(ds_like_txt4)), 0); + EXPECT_GT(TypeParam(ds_like_txt4).compare(TypeParam(ds_like_txt1)), 0); + + EXPECT_LT(TypeParam(ds_like_txt1).compare(TypeParam(ds_like_txt5)), 0); + EXPECT_GT(TypeParam(ds_like_txt5).compare(TypeParam(ds_like_txt1)), 0); + + EXPECT_LT(TypeParam(ds_like_txt1).compare(TypeParam(ds_like_txt6)), 0); + EXPECT_GT(TypeParam(ds_like_txt6).compare(TypeParam(ds_like_txt1)), 0); + + // comparison attempt between incompatible RR types should be rejected + EXPECT_THROW(this->rdata_ds_like.compare(*this->rdata_nomatch), + bad_cast); +} + +} diff --git a/src/lib/dns/tests/rdata_ds_unittest.cc b/src/lib/dns/tests/rdata_ds_unittest.cc deleted file mode 100644 index 59886208cf..0000000000 --- a/src/lib/dns/tests/rdata_ds_unittest.cc +++ 
/dev/null @@ -1,99 +0,0 @@ -// Copyright (C) 2010 Internet Systems Consortium, Inc. ("ISC") -// -// Permission to use, copy, modify, and/or distribute this software for any -// purpose with or without fee is hereby granted, provided that the above -// copyright notice and this permission notice appear in all copies. -// -// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH -// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY -// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, -// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM -// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE -// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR -// PERFORMANCE OF THIS SOFTWARE. - -#include - -#include -#include -#include -#include -#include -#include - -#include - -#include -#include - -using isc::UnitTestUtil; -using namespace std; -using namespace isc::dns; -using namespace isc::util; -using namespace isc::dns::rdata; - -namespace { -class Rdata_DS_Test : public RdataTest { - // there's nothing to specialize -}; - -string ds_txt("12892 5 2 F1E184C0E1D615D20EB3C223ACED3B03C773DD952D" - "5F0EB5C777586DE18DA6B5"); -const generic::DS rdata_ds(ds_txt); - -TEST_F(Rdata_DS_Test, toText_DS) { - EXPECT_EQ(ds_txt, rdata_ds.toText()); -} - -TEST_F(Rdata_DS_Test, badText_DS) { - EXPECT_THROW(const generic::DS ds2("99999 5 2 BEEF"), InvalidRdataText); - EXPECT_THROW(const generic::DS ds2("11111 555 2 BEEF"), InvalidRdataText); - EXPECT_THROW(const generic::DS ds2("11111 5 22222 BEEF"), InvalidRdataText); - EXPECT_THROW(const generic::DS ds2("11111 5 2"), InvalidRdataText); - EXPECT_THROW(const generic::DS ds2("GARBAGE IN"), InvalidRdataText); -} - -// this test currently fails; we must fix it, and then migrate the test to -// badText_DS -TEST_F(Rdata_DS_Test, DISABLED_badText_DS) { - // no space between the digest type and the digest. - EXPECT_THROW(const generic::DS ds2( - "12892 5 2F1E184C0E1D615D20EB3C223ACED3B03C773DD952D" - "5F0EB5C777586DE18DA6B5"), InvalidRdataText); -} - -TEST_F(Rdata_DS_Test, createFromWire_DS) { - EXPECT_EQ(0, rdata_ds.compare( - *rdataFactoryFromFile(RRType::DS(), RRClass::IN(), - "rdata_ds_fromWire"))); -} - -TEST_F(Rdata_DS_Test, getTag_DS) { - EXPECT_EQ(12892, rdata_ds.getTag()); -} - -TEST_F(Rdata_DS_Test, toWireRenderer) { - renderer.skip(2); - generic::DS rdata_ds(ds_txt); - rdata_ds.toWire(renderer); - - vector data; - UnitTestUtil::readWireData("rdata_ds_fromWire", data); - EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, - static_cast(obuffer.getData()) + 2, - obuffer.getLength() - 2, &data[2], data.size() - 2); -} - -TEST_F(Rdata_DS_Test, toWireBuffer) { - generic::DS rdata_ds(ds_txt); - rdata_ds.toWire(obuffer); -} - -TEST_F(Rdata_DS_Test, compare) { - // trivial case: self equivalence - EXPECT_EQ(0, generic::DS(ds_txt).compare(generic::DS(ds_txt))); - - // TODO: need more tests -} - -} diff --git a/src/lib/dns/tests/rdata_hinfo_unittest.cc b/src/lib/dns/tests/rdata_hinfo_unittest.cc new file mode 100644 index 0000000000..c52b2a05ed --- /dev/null +++ b/src/lib/dns/tests/rdata_hinfo_unittest.cc @@ -0,0 +1,115 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. 
+// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +using isc::UnitTestUtil; +using namespace std; +using namespace isc::dns; +using namespace isc::util; +using namespace isc::dns::rdata; +using namespace isc::dns::rdata::generic; + +namespace { +class Rdata_HINFO_Test : public RdataTest { +}; + +static uint8_t hinfo_rdata[] = {0x07,0x50,0x65,0x6e,0x74,0x69,0x75,0x6d,0x05, + 0x4c,0x69,0x6e,0x75,0x78}; +static const char *hinfo_str = "\"Pentium\" \"Linux\""; +static const char *hinfo_str1 = "\"Pen\\\"tium\" \"Linux\""; + +static const char *hinfo_str_small1 = "\"Lentium\" \"Linux\""; +static const char *hinfo_str_small2 = "\"Pentium\" \"Kinux\""; +static const char *hinfo_str_large1 = "\"Qentium\" \"Linux\""; +static const char *hinfo_str_large2 = "\"Pentium\" \"UNIX\""; + +TEST_F(Rdata_HINFO_Test, createFromText) { + HINFO hinfo(hinfo_str); + EXPECT_EQ(string("Pentium"), hinfo.getCPU()); + EXPECT_EQ(string("Linux"), hinfo.getOS()); + + // Test the text with double quotes in the middle of string + HINFO hinfo1(hinfo_str1); + EXPECT_EQ(string("Pen\"tium"), hinfo1.getCPU()); +} + +TEST_F(Rdata_HINFO_Test, badText) { + // Fields must be seperated by spaces + EXPECT_THROW(const HINFO hinfo("\"Pentium\"\"Linux\""), InvalidRdataText); + // Field cannot be missing + EXPECT_THROW(const HINFO hinfo("Pentium"), InvalidRdataText); + // The cannot exceed 255 characters + string hinfo_str; + for (int i = 0; i < 257; ++i) { + hinfo_str += 'A'; + } + hinfo_str += " Linux"; + EXPECT_THROW(const HINFO hinfo(hinfo_str), CharStringTooLong); +} + +TEST_F(Rdata_HINFO_Test, createFromWire) { + InputBuffer input_buffer(hinfo_rdata, sizeof(hinfo_rdata)); + HINFO hinfo(input_buffer, sizeof(hinfo_rdata)); + EXPECT_EQ(string("Pentium"), hinfo.getCPU()); + EXPECT_EQ(string("Linux"), hinfo.getOS()); +} + +TEST_F(Rdata_HINFO_Test, toText) { + HINFO hinfo(hinfo_str); + EXPECT_EQ(hinfo_str, hinfo.toText()); +} + +TEST_F(Rdata_HINFO_Test, toWire) { + HINFO hinfo(hinfo_str); + hinfo.toWire(obuffer); + + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, obuffer.getData(), + obuffer.getLength(), hinfo_rdata, sizeof(hinfo_rdata)); +} + +TEST_F(Rdata_HINFO_Test, toWireRenderer) { + HINFO hinfo(hinfo_str); + + hinfo.toWire(renderer); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, obuffer.getData(), + obuffer.getLength(), hinfo_rdata, sizeof(hinfo_rdata)); +} + +TEST_F(Rdata_HINFO_Test, compare) { + HINFO hinfo(hinfo_str); + HINFO hinfo_small1(hinfo_str_small1); + HINFO hinfo_small2(hinfo_str_small2); + HINFO hinfo_large1(hinfo_str_large1); + HINFO hinfo_large2(hinfo_str_large2); + + EXPECT_EQ(0, hinfo.compare(HINFO(hinfo_str))); + EXPECT_EQ(1, hinfo.compare(HINFO(hinfo_str_small1))); + EXPECT_EQ(1, hinfo.compare(HINFO(hinfo_str_small2))); + EXPECT_EQ(-1, hinfo.compare(HINFO(hinfo_str_large1))); + EXPECT_EQ(-1, hinfo.compare(HINFO(hinfo_str_large2))); +} + +} diff --git a/src/lib/dns/tests/rdata_minfo_unittest.cc b/src/lib/dns/tests/rdata_minfo_unittest.cc new file mode 100644 
index 0000000000..30c7c3945f --- /dev/null +++ b/src/lib/dns/tests/rdata_minfo_unittest.cc @@ -0,0 +1,184 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for generic +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +using isc::UnitTestUtil; +using namespace std; +using namespace isc::dns; +using namespace isc::util; +using namespace isc::dns::rdata; + +// minfo text +const char* const minfo_txt = "rmailbox.example.com. emailbox.example.com."; +const char* const minfo_txt2 = "root.example.com. emailbox.example.com."; +const char* const too_long_label = "01234567890123456789012345678901234567" + "89012345678901234567890123"; + +namespace { +class Rdata_MINFO_Test : public RdataTest { +public: + Rdata_MINFO_Test(): + rdata_minfo(string(minfo_txt)), rdata_minfo2(string(minfo_txt2)) {} + + const generic::MINFO rdata_minfo; + const generic::MINFO rdata_minfo2; +}; + + +TEST_F(Rdata_MINFO_Test, createFromText) { + EXPECT_EQ(Name("rmailbox.example.com."), rdata_minfo.getRmailbox()); + EXPECT_EQ(Name("emailbox.example.com."), rdata_minfo.getEmailbox()); + + EXPECT_EQ(Name("root.example.com."), rdata_minfo2.getRmailbox()); + EXPECT_EQ(Name("emailbox.example.com."), rdata_minfo2.getEmailbox()); +} + +TEST_F(Rdata_MINFO_Test, badText) { + // incomplete text + EXPECT_THROW(generic::MINFO("root.example.com."), + InvalidRdataText); + // number of fields (must be 2) is incorrect + EXPECT_THROW(generic::MINFO("root.example.com emailbox.example.com. " + "example.com."), + InvalidRdataText); + // bad rmailbox name + EXPECT_THROW(generic::MINFO("root.example.com. emailbox.example.com." + + string(too_long_label)), + TooLongLabel); + // bad emailbox name + EXPECT_THROW(generic::MINFO("root.example.com." 
+ + string(too_long_label) + " emailbox.example.com."), + TooLongLabel); +} + +TEST_F(Rdata_MINFO_Test, createFromWire) { + // uncompressed names + EXPECT_EQ(0, rdata_minfo.compare( + *rdataFactoryFromFile(RRType::MINFO(), RRClass::IN(), + "rdata_minfo_fromWire1.wire"))); + // compressed names + EXPECT_EQ(0, rdata_minfo.compare( + *rdataFactoryFromFile(RRType::MINFO(), RRClass::IN(), + "rdata_minfo_fromWire2.wire", 15))); + // RDLENGTH is too short + EXPECT_THROW(rdataFactoryFromFile(RRType::MINFO(), RRClass::IN(), + "rdata_minfo_fromWire3.wire"), + InvalidRdataLength); + // RDLENGTH is too long + EXPECT_THROW(rdataFactoryFromFile(RRType::MINFO(), RRClass::IN(), + "rdata_minfo_fromWire4.wire"), + InvalidRdataLength); + // bogus rmailbox name, the error should be detected in the name + // constructor + EXPECT_THROW(rdataFactoryFromFile(RRType::MINFO(), RRClass::IN(), + "rdata_minfo_fromWire5.wire"), + DNSMessageFORMERR); + // bogus emailbox name, the error should be detected in the name + // constructor + EXPECT_THROW(rdataFactoryFromFile(RRType::MINFO(), RRClass::IN(), + "rdata_minfo_fromWire6.wire"), + DNSMessageFORMERR); +} + +TEST_F(Rdata_MINFO_Test, assignment) { + generic::MINFO copy((string(minfo_txt2))); + copy = rdata_minfo; + EXPECT_EQ(0, copy.compare(rdata_minfo)); + + // Check if the copied data is valid even after the original is deleted + generic::MINFO* copy2 = new generic::MINFO(rdata_minfo); + generic::MINFO copy3((string(minfo_txt2))); + copy3 = *copy2; + delete copy2; + EXPECT_EQ(0, copy3.compare(rdata_minfo)); + + // Self assignment + copy = copy; + EXPECT_EQ(0, copy.compare(rdata_minfo)); +} + +TEST_F(Rdata_MINFO_Test, toWireBuffer) { + rdata_minfo.toWire(obuffer); + vector data; + UnitTestUtil::readWireData("rdata_minfo_toWireUncompressed1.wire", data); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + static_cast(obuffer.getData()), + obuffer.getLength(), &data[0], data.size()); + + obuffer.clear(); + rdata_minfo2.toWire(obuffer); + vector data2; + UnitTestUtil::readWireData("rdata_minfo_toWireUncompressed2.wire", data2); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + static_cast(obuffer.getData()), + obuffer.getLength(), &data2[0], data2.size()); +} + +TEST_F(Rdata_MINFO_Test, toWireRenderer) { + rdata_minfo.toWire(renderer); + vector data; + UnitTestUtil::readWireData("rdata_minfo_toWire1.wire", data); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + static_cast(obuffer.getData()), + obuffer.getLength(), &data[0], data.size()); + renderer.clear(); + rdata_minfo2.toWire(renderer); + vector data2; + UnitTestUtil::readWireData("rdata_minfo_toWire2.wire", data2); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + static_cast(obuffer.getData()), + obuffer.getLength(), &data2[0], data2.size()); +} + +TEST_F(Rdata_MINFO_Test, toText) { + EXPECT_EQ(minfo_txt, rdata_minfo.toText()); + EXPECT_EQ(minfo_txt2, rdata_minfo2.toText()); +} + +TEST_F(Rdata_MINFO_Test, compare) { + // check reflexivity + EXPECT_EQ(0, rdata_minfo.compare(rdata_minfo)); + + // names must be compared in case-insensitive manner + EXPECT_EQ(0, rdata_minfo.compare(generic::MINFO("RMAILBOX.example.com. " + "emailbox.EXAMPLE.com."))); + + // another MINFO whose rmailbox name is larger than that of rdata_minfo. + const generic::MINFO large1_minfo("zzzzzzzz.example.com. " + "emailbox.example.com."); + EXPECT_GT(0, rdata_minfo.compare(large1_minfo)); + EXPECT_LT(0, large1_minfo.compare(rdata_minfo)); + + // another MINFO whose emailbox name is larger than that of rdata_minfo. 
+ const generic::MINFO large2_minfo("rmailbox.example.com. " + "zzzzzzzzzzz.example.com."); + EXPECT_GT(0, rdata_minfo.compare(large2_minfo)); + EXPECT_LT(0, large2_minfo.compare(rdata_minfo)); + + // comparison attempt between incompatible RR types should be rejected + EXPECT_THROW(rdata_minfo.compare(*RdataTest::rdata_nomatch), bad_cast); +} +} diff --git a/src/lib/dns/tests/rdata_naptr_unittest.cc b/src/lib/dns/tests/rdata_naptr_unittest.cc new file mode 100644 index 0000000000..f905943ec5 --- /dev/null +++ b/src/lib/dns/tests/rdata_naptr_unittest.cc @@ -0,0 +1,178 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +using isc::UnitTestUtil; +using namespace std; +using namespace isc::dns; +using namespace isc::util; +using namespace isc::dns::rdata; +using namespace isc::dns::rdata::generic; + +namespace { +class Rdata_NAPTR_Test : public RdataTest { +}; + +// 10 100 "S" "SIP+D2U" "" _sip._udp.example.com. +static uint8_t naptr_rdata[] = {0x00,0x0a,0x00,0x64,0x01,0x53,0x07,0x53,0x49, + 0x50,0x2b,0x44,0x32,0x55,0x00,0x04,0x5f,0x73,0x69,0x70,0x04,0x5f,0x75,0x64, + 0x70,0x07,0x65,0x78,0x61,0x6d,0x70,0x6c,0x65,0x03,0x63,0x6f,0x6d,0x00}; + +static const char *naptr_str = + "10 100 \"S\" \"SIP+D2U\" \"\" _sip._udp.example.com."; +static const char *naptr_str2 = + "10 100 S SIP+D2U \"\" _sip._udp.example.com."; + +static const char *naptr_str_small1 = + "9 100 \"S\" \"SIP+D2U\" \"\" _sip._udp.example.com."; +static const char *naptr_str_small2 = + "10 90 \"S\" \"SIP+D2U\" \"\" _sip._udp.example.com."; +static const char *naptr_str_small3 = + "10 100 \"R\" \"SIP+D2U\" \"\" _sip._udp.example.com."; +static const char *naptr_str_small4 = + "10 100 \"S\" \"SIP+C2U\" \"\" _sip._udp.example.com."; +static const char *naptr_str_small5 = + "10 100 \"S\" \"SIP+D2U\" \"\" _rip._udp.example.com."; + +static const char *naptr_str_large1 = + "11 100 \"S\" \"SIP+D2U\" \"\" _sip._udp.example.com."; +static const char *naptr_str_large2 = + "10 110 \"S\" \"SIP+D2U\" \"\" _sip._udp.example.com."; +static const char *naptr_str_large3 = + "10 100 \"T\" \"SIP+D2U\" \"\" _sip._udp.example.com."; +static const char *naptr_str_large4 = + "10 100 \"S\" \"SIP+E2U\" \"\" _sip._udp.example.com."; +static const char *naptr_str_large5 = + "10 100 \"S\" \"SIP+D2U\" \"\" _tip._udp.example.com."; + +TEST_F(Rdata_NAPTR_Test, createFromText) { + NAPTR naptr(naptr_str); + EXPECT_EQ(10, naptr.getOrder()); + EXPECT_EQ(100, naptr.getPreference()); + EXPECT_EQ(string("S"), naptr.getFlags()); + EXPECT_EQ(string("SIP+D2U"), naptr.getServices()); + EXPECT_EQ(string(""), naptr.getRegexp()); + EXPECT_EQ(Name("_sip._udp.example.com."), naptr.getReplacement()); + + // Test that separated by space + 
NAPTR naptr2(naptr_str2); + EXPECT_EQ(string("S"), naptr2.getFlags()); + EXPECT_EQ(string("SIP+D2U"), naptr2.getServices()); +} + +TEST_F(Rdata_NAPTR_Test, badText) { + // Order number cannot exceed 65535 + EXPECT_THROW(const NAPTR naptr("65536 10 S SIP \"\" _sip._udp.example.com."), + InvalidRdataText); + // Preference number cannot exceed 65535 + EXPECT_THROW(const NAPTR naptr("100 65536 S SIP \"\" _sip._udp.example.com."), + InvalidRdataText); + // No regexp given + EXPECT_THROW(const NAPTR naptr("100 10 S SIP _sip._udp.example.com."), + InvalidRdataText); + // The double quotes seperator must match + EXPECT_THROW(const NAPTR naptr("100 10 \"S SIP \"\" _sip._udp.example.com."), + InvalidRdataText); + // Order or preference cannot be missed + EXPECT_THROW(const NAPTR naptr("10 \"S\" SIP \"\" _sip._udp.example.com."), + InvalidRdataText); + // Fields must be seperated by spaces + EXPECT_THROW(const NAPTR naptr("100 10S SIP \"\" _sip._udp.example.com."), + InvalidRdataText); + EXPECT_THROW(const NAPTR naptr("100 10 \"S\"\"SIP\" \"\" _sip._udp.example.com."), + InvalidRdataText); + // Field cannot be missing + EXPECT_THROW(const NAPTR naptr("100 10 \"S\""), InvalidRdataText); + + // The cannot exceed 255 characters + string naptr_str; + naptr_str += "100 10 "; + for (int i = 0; i < 257; ++i) { + naptr_str += 'A'; + } + naptr_str += " SIP \"\" _sip._udp.example.com."; + EXPECT_THROW(const NAPTR naptr(naptr_str), CharStringTooLong); +} + +TEST_F(Rdata_NAPTR_Test, createFromWire) { + InputBuffer input_buffer(naptr_rdata, sizeof(naptr_rdata)); + NAPTR naptr(input_buffer, sizeof(naptr_rdata)); + EXPECT_EQ(10, naptr.getOrder()); + EXPECT_EQ(100, naptr.getPreference()); + EXPECT_EQ(string("S"), naptr.getFlags()); + EXPECT_EQ(string("SIP+D2U"), naptr.getServices()); + EXPECT_EQ(string(""), naptr.getRegexp()); + EXPECT_EQ(Name("_sip._udp.example.com."), naptr.getReplacement()); +} + +TEST_F(Rdata_NAPTR_Test, toWire) { + NAPTR naptr(naptr_str); + naptr.toWire(obuffer); + + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, obuffer.getData(), + obuffer.getLength(), naptr_rdata, sizeof(naptr_rdata)); +} + +TEST_F(Rdata_NAPTR_Test, toWireRenderer) { + NAPTR naptr(naptr_str); + + naptr.toWire(renderer); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, obuffer.getData(), + obuffer.getLength(), naptr_rdata, sizeof(naptr_rdata)); +} + +TEST_F(Rdata_NAPTR_Test, toText) { + NAPTR naptr(naptr_str); + EXPECT_EQ(naptr_str, naptr.toText()); +} + +TEST_F(Rdata_NAPTR_Test, compare) { + NAPTR naptr(naptr_str); + NAPTR naptr_small1(naptr_str_small1); + NAPTR naptr_small2(naptr_str_small2); + NAPTR naptr_small3(naptr_str_small3); + NAPTR naptr_small4(naptr_str_small4); + NAPTR naptr_small5(naptr_str_small5); + NAPTR naptr_large1(naptr_str_large1); + NAPTR naptr_large2(naptr_str_large2); + NAPTR naptr_large3(naptr_str_large3); + NAPTR naptr_large4(naptr_str_large4); + NAPTR naptr_large5(naptr_str_large5); + + EXPECT_EQ(0, naptr.compare(NAPTR(naptr_str))); + EXPECT_EQ(1, naptr.compare(NAPTR(naptr_str_small1))); + EXPECT_EQ(1, naptr.compare(NAPTR(naptr_str_small2))); + EXPECT_EQ(1, naptr.compare(NAPTR(naptr_str_small3))); + EXPECT_EQ(1, naptr.compare(NAPTR(naptr_str_small4))); + EXPECT_EQ(1, naptr.compare(NAPTR(naptr_str_small5))); + EXPECT_EQ(-1, naptr.compare(NAPTR(naptr_str_large1))); + EXPECT_EQ(-1, naptr.compare(NAPTR(naptr_str_large2))); + EXPECT_EQ(-1, naptr.compare(NAPTR(naptr_str_large3))); + EXPECT_EQ(-1, naptr.compare(NAPTR(naptr_str_large4))); + EXPECT_EQ(-1, naptr.compare(NAPTR(naptr_str_large5))); +} + +} 
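
[Editorial note, not part of the patch] The character_string, HINFO and NAPTR tests above all exercise the RFC 1035 <character-string> rules: a string may be quoted (so it can contain spaces), "\X" escapes the next character, "\DDD" with exactly three decimal digits encodes one octet, and the result may not exceed 255 octets. As a rough, standalone illustration of those rules only -- this is an assumption-laden sketch, not the patch's actual isc::dns::characterstr::getNextCharacterString() implementation, and parseCharacterString() plus its exceptions are hypothetical names -- such a parser might look like this:

    // Minimal sketch of RFC 1035 <character-string> parsing (illustrative only;
    // not the library code added by this patch).
    #include <cctype>
    #include <iostream>
    #include <stdexcept>
    #include <string>

    std::string
    parseCharacterString(const std::string& input) {
        std::string result;
        std::string::size_type pos = 0;
        const bool quoted = !input.empty() && input[0] == '"';
        if (quoted) {
            ++pos;                      // skip the opening '"'
        }
        bool closed = !quoted;          // unquoted strings need no closing '"'
        while (pos < input.size()) {
            const char c = input[pos];
            if (quoted && c == '"') {   // a closing quote ends the string
                closed = true;
                break;
            }
            if (!quoted && c == ' ') {  // an unquoted string ends at a space
                break;
            }
            if (c == '\\') {            // "\X" or "\DDD" escape
                if (pos + 1 < input.size() &&
                    !std::isdigit(static_cast<unsigned char>(input[pos + 1]))) {
                    result.push_back(input[pos + 1]);   // e.g. "\\" gives '\'
                    pos += 2;
                } else if (pos + 3 < input.size() &&
                           std::isdigit(static_cast<unsigned char>(input[pos + 1])) &&
                           std::isdigit(static_cast<unsigned char>(input[pos + 2])) &&
                           std::isdigit(static_cast<unsigned char>(input[pos + 3]))) {
                    // "\DDD": exactly three decimal digits encoding one octet
                    const int code = (input[pos + 1] - '0') * 100 +
                        (input[pos + 2] - '0') * 10 + (input[pos + 3] - '0');
                    if (code > 255) {
                        throw std::runtime_error("\\DDD value out of range");
                    }
                    result.push_back(static_cast<char>(code));
                    pos += 4;
                } else {
                    throw std::runtime_error("bad escape in character-string");
                }
                continue;
            }
            result.push_back(c);
            ++pos;
        }
        if (!closed) {
            throw std::runtime_error("unbalanced quotes in character-string");
        }
        if (result.size() > 255) {
            throw std::runtime_error("character-string is too long");
        }
        return (result);
    }

    int main() {
        std::cout << parseCharacterString("\"foo bar\" baz") << std::endl; // foo bar
        std::cout << parseCharacterString("fo\\111bar") << std::endl;      // foobar
        return (0);
    }

Under those rules, "fo\111bar" yields "foobar" (decimal 111 is 'o') and an input longer than 255 octets is rejected, which matches what the new unit tests above assert; the real library code additionally reports these conditions through InvalidRdataText and CharStringTooLong.
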
diff --git a/src/lib/dns/tests/rdata_rrsig_unittest.cc b/src/lib/dns/tests/rdata_rrsig_unittest.cc index 903021fb5e..3324b99de1 100644 --- a/src/lib/dns/tests/rdata_rrsig_unittest.cc +++ b/src/lib/dns/tests/rdata_rrsig_unittest.cc @@ -47,7 +47,7 @@ TEST_F(Rdata_RRSIG_Test, fromText) { "f49t+sXKPzbipN9g+s1ZPiIyofc="); generic::RRSIG rdata_rrsig(rrsig_txt); EXPECT_EQ(rrsig_txt, rdata_rrsig.toText()); - + EXPECT_EQ(isc::dns::RRType::A(), rdata_rrsig.typeCovered()); } TEST_F(Rdata_RRSIG_Test, badText) { diff --git a/src/lib/dns/tests/rdata_srv_unittest.cc b/src/lib/dns/tests/rdata_srv_unittest.cc new file mode 100644 index 0000000000..3394f43aef --- /dev/null +++ b/src/lib/dns/tests/rdata_srv_unittest.cc @@ -0,0 +1,173 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for generic +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +using isc::UnitTestUtil; +using namespace std; +using namespace isc::dns; +using namespace isc::util; +using namespace isc::dns::rdata; + +namespace { +class Rdata_SRV_Test : public RdataTest { + // there's nothing to specialize +}; + +string srv_txt("1 5 1500 a.example.com."); +string srv_txt2("1 5 1400 example.com."); +string too_long_label("012345678901234567890123456789" + "0123456789012345678901234567890123"); + +// 1 5 1500 a.example.com. +const uint8_t wiredata_srv[] = { + 0x00, 0x01, 0x00, 0x05, 0x05, 0xdc, 0x01, 0x61, 0x07, 0x65, 0x78, + 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x03, 0x63, 0x6f, 0x6d, 0x00}; +// 1 5 1400 example.com. +const uint8_t wiredata_srv2[] = { + 0x00, 0x01, 0x00, 0x05, 0x05, 0x78, 0x07, 0x65, 0x78, 0x61, 0x6d, + 0x70, 0x6c, 0x65, 0x03, 0x63, 0x6f, 0x6d, 0x00}; + +const in::SRV rdata_srv(srv_txt); +const in::SRV rdata_srv2(srv_txt2); + +TEST_F(Rdata_SRV_Test, createFromText) { + EXPECT_EQ(1, rdata_srv.getPriority()); + EXPECT_EQ(5, rdata_srv.getWeight()); + EXPECT_EQ(1500, rdata_srv.getPort()); + EXPECT_EQ(Name("a.example.com."), rdata_srv.getTarget()); +} + +TEST_F(Rdata_SRV_Test, badText) { + // priority is too large (2814...6 is 2^48) + EXPECT_THROW(in::SRV("281474976710656 5 1500 a.example.com."), + InvalidRdataText); + // weight is too large + EXPECT_THROW(in::SRV("1 281474976710656 1500 a.example.com."), + InvalidRdataText); + // port is too large + EXPECT_THROW(in::SRV("1 5 281474976710656 a.example.com."), + InvalidRdataText); + // incomplete text + EXPECT_THROW(in::SRV("1 5 a.example.com."), + InvalidRdataText); + EXPECT_THROW(in::SRV("1 5 1500a.example.com."), + InvalidRdataText); + // bad name + EXPECT_THROW(in::SRV("1 5 1500 a.example.com." 
+ too_long_label), + TooLongLabel); +} + +TEST_F(Rdata_SRV_Test, assignment) { + in::SRV copy((string(srv_txt2))); + copy = rdata_srv; + EXPECT_EQ(0, copy.compare(rdata_srv)); + + // Check if the copied data is valid even after the original is deleted + in::SRV* copy2 = new in::SRV(rdata_srv); + in::SRV copy3((string(srv_txt2))); + copy3 = *copy2; + delete copy2; + EXPECT_EQ(0, copy3.compare(rdata_srv)); + + // Self assignment + copy = copy; + EXPECT_EQ(0, copy.compare(rdata_srv)); +} + +TEST_F(Rdata_SRV_Test, createFromWire) { + EXPECT_EQ(0, rdata_srv.compare( + *rdataFactoryFromFile(RRType("SRV"), RRClass("IN"), + "rdata_srv_fromWire"))); + // RDLENGTH is too short + EXPECT_THROW(rdataFactoryFromFile(RRType("SRV"), RRClass("IN"), + "rdata_srv_fromWire", 23), + InvalidRdataLength); + // RDLENGTH is too long + EXPECT_THROW(rdataFactoryFromFile(RRType("SRV"), RRClass("IN"), + "rdata_srv_fromWire", 46), + InvalidRdataLength); + // incomplete name. the error should be detected in the name constructor + EXPECT_THROW(rdataFactoryFromFile(RRType("SRV"), RRClass("IN"), + "rdata_cname_fromWire", 69), + DNSMessageFORMERR); + // parse compressed target name + EXPECT_EQ(0, rdata_srv.compare( + *rdataFactoryFromFile(RRType("SRV"), RRClass("IN"), + "rdata_srv_fromWire", 89))); +} + +TEST_F(Rdata_SRV_Test, toWireBuffer) { + rdata_srv.toWire(obuffer); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + obuffer.getData(), obuffer.getLength(), + wiredata_srv, sizeof(wiredata_srv)); + obuffer.clear(); + rdata_srv2.toWire(obuffer); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + obuffer.getData(), obuffer.getLength(), + wiredata_srv2, sizeof(wiredata_srv2)); +} + +TEST_F(Rdata_SRV_Test, toWireRenderer) { + rdata_srv.toWire(renderer); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + obuffer.getData(), obuffer.getLength(), + wiredata_srv, sizeof(wiredata_srv)); + renderer.clear(); + rdata_srv2.toWire(renderer); + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + obuffer.getData(), obuffer.getLength(), + wiredata_srv2, sizeof(wiredata_srv2)); +} + +TEST_F(Rdata_SRV_Test, toText) { + EXPECT_EQ(srv_txt, rdata_srv.toText()); + EXPECT_EQ(srv_txt2, rdata_srv2.toText()); +} + +TEST_F(Rdata_SRV_Test, compare) { + // test RDATAs, sorted in the ascendent order. 
+ vector compare_set; + compare_set.push_back(in::SRV("1 5 1500 a.example.com.")); + compare_set.push_back(in::SRV("2 5 1500 a.example.com.")); + compare_set.push_back(in::SRV("2 6 1500 a.example.com.")); + compare_set.push_back(in::SRV("2 6 1600 a.example.com.")); + compare_set.push_back(in::SRV("2 6 1600 example.com.")); + + EXPECT_EQ(0, compare_set[0].compare( + in::SRV("1 5 1500 a.example.com."))); + + vector::const_iterator it; + vector::const_iterator it_end = compare_set.end(); + for (it = compare_set.begin(); it != it_end - 1; ++it) { + EXPECT_GT(0, (*it).compare(*(it + 1))); + EXPECT_LT(0, (*(it + 1)).compare(*it)); + } + + // comparison attempt between incompatible RR types should be rejected + EXPECT_THROW(rdata_srv.compare(*RdataTest::rdata_nomatch), bad_cast); +} +} diff --git a/src/lib/dns/tests/testdata/Makefile.am b/src/lib/dns/tests/testdata/Makefile.am index cb1bb1cad6..d8f0d1c298 100644 --- a/src/lib/dns/tests/testdata/Makefile.am +++ b/src/lib/dns/tests/testdata/Makefile.am @@ -5,8 +5,12 @@ BUILT_SOURCES += edns_toWire4.wire BUILT_SOURCES += message_fromWire10.wire message_fromWire11.wire BUILT_SOURCES += message_fromWire12.wire message_fromWire13.wire BUILT_SOURCES += message_fromWire14.wire message_fromWire15.wire -BUILT_SOURCES += message_fromWire16.wire +BUILT_SOURCES += message_fromWire16.wire message_fromWire17.wire +BUILT_SOURCES += message_fromWire18.wire message_fromWire19.wire +BUILT_SOURCES += message_fromWire20.wire message_fromWire21.wire +BUILT_SOURCES += message_fromWire22.wire BUILT_SOURCES += message_toWire2.wire message_toWire3.wire +BUILT_SOURCES += message_toWire4.wire message_toWire5.wire BUILT_SOURCES += message_toText1.wire message_toText2.wire BUILT_SOURCES += message_toText3.wire BUILT_SOURCES += name_toWire5.wire name_toWire6.wire @@ -24,10 +28,20 @@ BUILT_SOURCES += rdata_nsec3_fromWire10.wire rdata_nsec3_fromWire11.wire BUILT_SOURCES += rdata_nsec3_fromWire12.wire rdata_nsec3_fromWire13.wire BUILT_SOURCES += rdata_nsec3_fromWire14.wire rdata_nsec3_fromWire15.wire BUILT_SOURCES += rdata_rrsig_fromWire2.wire +BUILT_SOURCES += rdata_minfo_fromWire1.wire rdata_minfo_fromWire2.wire +BUILT_SOURCES += rdata_minfo_fromWire3.wire rdata_minfo_fromWire4.wire +BUILT_SOURCES += rdata_minfo_fromWire5.wire rdata_minfo_fromWire6.wire +BUILT_SOURCES += rdata_minfo_toWire1.wire rdata_minfo_toWire2.wire +BUILT_SOURCES += rdata_minfo_toWireUncompressed1.wire +BUILT_SOURCES += rdata_minfo_toWireUncompressed2.wire BUILT_SOURCES += rdata_rp_fromWire1.wire rdata_rp_fromWire2.wire BUILT_SOURCES += rdata_rp_fromWire3.wire rdata_rp_fromWire4.wire BUILT_SOURCES += rdata_rp_fromWire5.wire rdata_rp_fromWire6.wire BUILT_SOURCES += rdata_rp_toWire1.wire rdata_rp_toWire2.wire +BUILT_SOURCES += rdata_afsdb_fromWire1.wire rdata_afsdb_fromWire2.wire +BUILT_SOURCES += rdata_afsdb_fromWire3.wire rdata_afsdb_fromWire4.wire +BUILT_SOURCES += rdata_afsdb_fromWire5.wire +BUILT_SOURCES += rdata_afsdb_toWire1.wire rdata_afsdb_toWire2.wire BUILT_SOURCES += rdata_soa_toWireUncompressed.wire BUILT_SOURCES += rdata_txt_fromWire2.wire rdata_txt_fromWire3.wire BUILT_SOURCES += rdata_txt_fromWire4.wire rdata_txt_fromWire5.wire @@ -47,8 +61,7 @@ BUILT_SOURCES += tsig_verify10.wire # NOTE: keep this in sync with real file listing # so is included in tarball -EXTRA_DIST = gen-wiredata.py.in -EXTRA_DIST += edns_toWire1.spec edns_toWire2.spec +EXTRA_DIST = edns_toWire1.spec edns_toWire2.spec EXTRA_DIST += edns_toWire3.spec edns_toWire4.spec EXTRA_DIST += masterload.txt EXTRA_DIST += 
message_fromWire1 message_fromWire2 @@ -59,7 +72,11 @@ EXTRA_DIST += message_fromWire9 message_fromWire10.spec EXTRA_DIST += message_fromWire11.spec message_fromWire12.spec EXTRA_DIST += message_fromWire13.spec message_fromWire14.spec EXTRA_DIST += message_fromWire15.spec message_fromWire16.spec +EXTRA_DIST += message_fromWire17.spec message_fromWire18.spec +EXTRA_DIST += message_fromWire19.spec message_fromWire20.spec +EXTRA_DIST += message_fromWire21.spec message_fromWire22.spec EXTRA_DIST += message_toWire1 message_toWire2.spec message_toWire3.spec +EXTRA_DIST += message_toWire4.spec message_toWire5.spec EXTRA_DIST += message_toText1.txt message_toText1.spec EXTRA_DIST += message_toText2.txt message_toText2.spec EXTRA_DIST += message_toText3.txt message_toText3.spec @@ -96,7 +113,18 @@ EXTRA_DIST += rdata_rp_fromWire1.spec rdata_rp_fromWire2.spec EXTRA_DIST += rdata_rp_fromWire3.spec rdata_rp_fromWire4.spec EXTRA_DIST += rdata_rp_fromWire5.spec rdata_rp_fromWire6.spec EXTRA_DIST += rdata_rp_toWire1.spec rdata_rp_toWire2.spec +EXTRA_DIST += rdata_afsdb_fromWire1.spec rdata_afsdb_fromWire2.spec +EXTRA_DIST += rdata_afsdb_fromWire3.spec rdata_afsdb_fromWire4.spec +EXTRA_DIST += rdata_afsdb_fromWire5.spec +EXTRA_DIST += rdata_afsdb_toWire1.spec rdata_afsdb_toWire2.spec EXTRA_DIST += rdata_soa_fromWire rdata_soa_toWireUncompressed.spec +EXTRA_DIST += rdata_srv_fromWire +EXTRA_DIST += rdata_minfo_fromWire1.spec rdata_minfo_fromWire2.spec +EXTRA_DIST += rdata_minfo_fromWire3.spec rdata_minfo_fromWire4.spec +EXTRA_DIST += rdata_minfo_fromWire5.spec rdata_minfo_fromWire6.spec +EXTRA_DIST += rdata_minfo_toWire1.spec rdata_minfo_toWire2.spec +EXTRA_DIST += rdata_minfo_toWireUncompressed1.spec +EXTRA_DIST += rdata_minfo_toWireUncompressed2.spec EXTRA_DIST += rdata_txt_fromWire1 rdata_txt_fromWire2.spec EXTRA_DIST += rdata_txt_fromWire3.spec rdata_txt_fromWire4.spec EXTRA_DIST += rdata_txt_fromWire5.spec rdata_unknown_fromWire @@ -118,4 +146,4 @@ EXTRA_DIST += tsig_verify7.spec tsig_verify8.spec tsig_verify9.spec EXTRA_DIST += tsig_verify10.spec .spec.wire: - ./gen-wiredata.py -o $@ $< + $(PYTHON) $(top_builddir)/src/lib/util/python/gen_wiredata.py -o $@ $< diff --git a/src/lib/dns/tests/testdata/gen-wiredata.py.in b/src/lib/dns/tests/testdata/gen-wiredata.py.in deleted file mode 100755 index fd98c6eb4b..0000000000 --- a/src/lib/dns/tests/testdata/gen-wiredata.py.in +++ /dev/null @@ -1,612 +0,0 @@ -#!@PYTHON@ - -# Copyright (C) 2010 Internet Systems Consortium. -# -# Permission to use, copy, modify, and distribute this software for any -# purpose with or without fee is hereby granted, provided that the above -# copyright notice and this permission notice appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM -# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL -# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, -# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING -# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, -# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION -# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -import configparser, re, time, socket, sys -from datetime import datetime -from optparse import OptionParser - -re_hex = re.compile(r'^0x[0-9a-fA-F]+') -re_decimal = re.compile(r'^\d+$') -re_string = re.compile(r"\'(.*)\'$") - -dnssec_timefmt = '%Y%m%d%H%M%S' - -dict_qr = { 'query' : 0, 'response' : 1 } -dict_opcode = { 'query' : 0, 'iquery' : 1, 'status' : 2, 'notify' : 4, - 'update' : 5 } -rdict_opcode = dict([(dict_opcode[k], k.upper()) for k in dict_opcode.keys()]) -dict_rcode = { 'noerror' : 0, 'formerr' : 1, 'servfail' : 2, 'nxdomain' : 3, - 'notimp' : 4, 'refused' : 5, 'yxdomain' : 6, 'yxrrset' : 7, - 'nxrrset' : 8, 'notauth' : 9, 'notzone' : 10 } -rdict_rcode = dict([(dict_rcode[k], k.upper()) for k in dict_rcode.keys()]) -dict_rrtype = { 'none' : 0, 'a' : 1, 'ns' : 2, 'md' : 3, 'mf' : 4, 'cname' : 5, - 'soa' : 6, 'mb' : 7, 'mg' : 8, 'mr' : 9, 'null' : 10, - 'wks' : 11, 'ptr' : 12, 'hinfo' : 13, 'minfo' : 14, 'mx' : 15, - 'txt' : 16, 'rp' : 17, 'afsdb' : 18, 'x25' : 19, 'isdn' : 20, - 'rt' : 21, 'nsap' : 22, 'nsap_tr' : 23, 'sig' : 24, 'key' : 25, - 'px' : 26, 'gpos' : 27, 'aaaa' : 28, 'loc' : 29, 'nxt' : 30, - 'srv' : 33, 'naptr' : 35, 'kx' : 36, 'cert' : 37, 'a6' : 38, - 'dname' : 39, 'opt' : 41, 'apl' : 42, 'ds' : 43, 'sshfp' : 44, - 'ipseckey' : 45, 'rrsig' : 46, 'nsec' : 47, 'dnskey' : 48, - 'dhcid' : 49, 'nsec3' : 50, 'nsec3param' : 51, 'hip' : 55, - 'spf' : 99, 'unspec' : 103, 'tkey' : 249, 'tsig' : 250, - 'dlv' : 32769, 'ixfr' : 251, 'axfr' : 252, 'mailb' : 253, - 'maila' : 254, 'any' : 255 } -rdict_rrtype = dict([(dict_rrtype[k], k.upper()) for k in dict_rrtype.keys()]) -dict_rrclass = { 'in' : 1, 'ch' : 3, 'hs' : 4, 'any' : 255 } -rdict_rrclass = dict([(dict_rrclass[k], k.upper()) for k in \ - dict_rrclass.keys()]) -dict_algorithm = { 'rsamd5' : 1, 'dh' : 2, 'dsa' : 3, 'ecc' : 4, - 'rsasha1' : 5 } -dict_nsec3_algorithm = { 'reserved' : 0, 'sha1' : 1 } -rdict_algorithm = dict([(dict_algorithm[k], k.upper()) for k in \ - dict_algorithm.keys()]) -rdict_nsec3_algorithm = dict([(dict_nsec3_algorithm[k], k.upper()) for k in \ - dict_nsec3_algorithm.keys()]) - -header_xtables = { 'qr' : dict_qr, 'opcode' : dict_opcode, - 'rcode' : dict_rcode } -question_xtables = { 'rrtype' : dict_rrtype, 'rrclass' : dict_rrclass } -rrsig_xtables = { 'algorithm' : dict_algorithm } - -def parse_value(value, xtable = {}): - if re.search(re_hex, value): - return int(value, 16) - if re.search(re_decimal, value): - return int(value) - m = re.match(re_string, value) - if m: - return m.group(1) - lovalue = value.lower() - if lovalue in xtable: - return xtable[lovalue] - return value - -def code_totext(code, dict): - if code in dict.keys(): - return dict[code] + '(' + str(code) + ')' - return str(code) - -def encode_name(name, absolute=True): - # make sure the name is dot-terminated. duplicate dots will be ignored - # below. - name += '.' 
- labels = name.split('.') - wire = '' - for l in labels: - if len(l) > 4 and l[0:4] == 'ptr=': - # special meta-syntax for compression pointer - wire += '%04x' % (0xc000 | int(l[4:])) - break - if absolute or len(l) > 0: - wire += '%02x' % len(l) - wire += ''.join(['%02x' % ord(ch) for ch in l]) - if len(l) == 0: - break - return wire - -def encode_string(name, len=None): - if type(name) is int and len is not None: - return '%0.*x' % (len * 2, name) - return ''.join(['%02x' % ord(ch) for ch in name]) - -def count_namelabels(name): - if name == '.': # special case - return 0 - m = re.match('^(.*)\.$', name) - if m: - name = m.group(1) - return len(name.split('.')) - -def get_config(config, section, configobj, xtables = {}): - try: - for field in config.options(section): - value = config.get(section, field) - if field in xtables.keys(): - xtable = xtables[field] - else: - xtable = {} - configobj.__dict__[field] = parse_value(value, xtable) - except configparser.NoSectionError: - return False - return True - -def print_header(f, input_file): - f.write('''### -### This data file was auto-generated from ''' + input_file + ''' -### -''') - -class Name: - name = 'example.com' - pointer = None # no compression by default - def dump(self, f): - name = self.name - if self.pointer is not None: - if len(name) > 0 and name[-1] != '.': - name += '.' - name += 'ptr=%d' % self.pointer - name_wire = encode_name(name) - f.write('\n# DNS Name: %s' % self.name) - if self.pointer is not None: - f.write(' + compression pointer: %d' % self.pointer) - f.write('\n') - f.write('%s' % name_wire) - f.write('\n') - -class DNSHeader: - id = 0x1035 - (qr, aa, tc, rd, ra, ad, cd) = 0, 0, 0, 0, 0, 0, 0 - mbz = 0 - rcode = 0 # noerror - opcode = 0 # query - (qdcount, ancount, nscount, arcount) = 1, 0, 0, 0 - def dump(self, f): - f.write('\n# Header Section\n') - f.write('# ID=' + str(self.id)) - f.write(' QR=' + ('Response' if self.qr else 'Query')) - f.write(' Opcode=' + code_totext(self.opcode, rdict_opcode)) - f.write(' Rcode=' + code_totext(self.rcode, rdict_rcode)) - f.write('%s' % (' AA' if self.aa else '')) - f.write('%s' % (' TC' if self.tc else '')) - f.write('%s' % (' RD' if self.rd else '')) - f.write('%s' % (' AD' if self.ad else '')) - f.write('%s' % (' CD' if self.cd else '')) - f.write('\n') - f.write('%04x ' % self.id) - flag_and_code = 0 - flag_and_code |= (self.qr << 15 | self.opcode << 14 | self.aa << 10 | - self.tc << 9 | self.rd << 8 | self.ra << 7 | - self.mbz << 6 | self.ad << 5 | self.cd << 4 | - self.rcode) - f.write('%04x\n' % flag_and_code) - f.write('# QDCNT=%d, ANCNT=%d, NSCNT=%d, ARCNT=%d\n' % - (self.qdcount, self.ancount, self.nscount, self.arcount)) - f.write('%04x %04x %04x %04x\n' % (self.qdcount, self.ancount, - self.nscount, self.arcount)) - -class DNSQuestion: - name = 'example.com.' - rrtype = parse_value('A', dict_rrtype) - rrclass = parse_value('IN', dict_rrclass) - def dump(self, f): - f.write('\n# Question Section\n') - f.write('# QNAME=%s QTYPE=%s QCLASS=%s\n' % - (self.name, - code_totext(self.rrtype, rdict_rrtype), - code_totext(self.rrclass, rdict_rrclass))) - f.write(encode_name(self.name)) - f.write(' %04x %04x\n' % (self.rrtype, self.rrclass)) - -class EDNS: - name = '.' 
- udpsize = 4096 - extrcode = 0 - version = 0 - do = 0 - mbz = 0 - rdlen = 0 - def dump(self, f): - f.write('\n# EDNS OPT RR\n') - f.write('# NAME=%s TYPE=%s UDPSize=%d ExtRcode=%s Version=%s DO=%d\n' % - (self.name, code_totext(dict_rrtype['opt'], rdict_rrtype), - self.udpsize, self.extrcode, self.version, - 1 if self.do else 0)) - - code_vers = (self.extrcode << 8) | (self.version & 0x00ff) - extflags = (self.do << 15) | (self.mbz & 0x8000) - f.write('%s %04x %04x %04x %04x\n' % - (encode_name(self.name), dict_rrtype['opt'], self.udpsize, - code_vers, extflags)) - f.write('# RDLEN=%d\n' % self.rdlen) - f.write('%04x\n' % self.rdlen) - -class RR: - '''This is a base class for various types of RR test data. - For each RR type (A, AAAA, NS, etc), we define a derived class of RR - to dump type specific RDATA parameters. This class defines parameters - common to all types of RDATA, namely the owner name, RR class and TTL. - The dump() method of derived classes are expected to call dump_header(), - whose default implementation is provided in this class. This method - decides whether to dump the test data as an RR (with name, type, class) - or only as RDATA (with its length), and dumps the corresponding data - via the specified file object. - - By convention we assume derived classes are named after the common - standard mnemonic of the corresponding RR types. For example, the - derived class for the RR type SOA should be named "SOA". - - Configurable parameters are as follows: - - as_rr (bool): Whether or not the data is to be dumped as an RR. False - by default. - - rr_class (string): The RR class of the data. Only meaningful when the - data is dumped as an RR. Default is 'IN'. - - rr_ttl (integer): The TTL value of the RR. Only meaningful when the - data is dumped as an RR. Default is 86400 (1 day). 
- ''' - - def __init__(self): - self.as_rr = False - # only when as_rr is True, same for class/TTL: - self.rr_name = 'example.com' - self.rr_class = 'IN' - self.rr_ttl = 86400 - def dump_header(self, f, rdlen): - type_txt = self.__class__.__name__ - type_code = parse_value(type_txt, dict_rrtype) - if self.as_rr: - rrclass = parse_value(self.rr_class, dict_rrclass) - f.write('\n# %s RR (QNAME=%s Class=%s TTL=%d RDLEN=%d)\n' % - (type_txt, self.rr_name, - code_totext(rrclass, rdict_rrclass), self.rr_ttl, rdlen)) - f.write('%s %04x %04x %08x %04x\n' % - (encode_name(self.rr_name), type_code, rrclass, - self.rr_ttl, rdlen)) - else: - f.write('\n# %s RDATA (RDLEN=%d)\n' % (type_txt, rdlen)) - f.write('%04x\n' % rdlen) - -class A(RR): - rdlen = 4 # fixed by default - address = '192.0.2.1' - - def dump(self, f): - self.dump_header(f, self.rdlen) - f.write('# Address=%s\n' % (self.address)) - bin_address = socket.inet_aton(self.address) - f.write('%02x%02x%02x%02x\n' % (bin_address[0], bin_address[1], - bin_address[2], bin_address[3])) - -class NS(RR): - rdlen = None # auto calculate - nsname = 'ns.example.com' - - def dump(self, f): - nsname_wire = encode_name(self.nsname) - if self.rdlen is None: - self.rdlen = len(nsname_wire) / 2 - self.dump_header(f, self.rdlen) - f.write('# NS name=%s\n' % (self.nsname)) - f.write('%s\n' % nsname_wire) - -class SOA(RR): - rdlen = None # auto-calculate - mname = 'ns.example.com' - rname = 'root.example.com' - serial = 2010012601 - refresh = 3600 - retry = 300 - expire = 3600000 - minimum = 1200 - def dump(self, f): - mname_wire = encode_name(self.mname) - rname_wire = encode_name(self.rname) - if self.rdlen is None: - self.rdlen = int(20 + len(mname_wire) / 2 + len(str(rname_wire)) / 2) - self.dump_header(f, self.rdlen) - f.write('# NNAME=%s RNAME=%s\n' % (self.mname, self.rname)) - f.write('%s %s\n' % (mname_wire, rname_wire)) - f.write('# SERIAL(%d) REFRESH(%d) RETRY(%d) EXPIRE(%d) MINIMUM(%d)\n' % - (self.serial, self.refresh, self.retry, self.expire, - self.minimum)) - f.write('%08x %08x %08x %08x %08x\n' % (self.serial, self.refresh, - self.retry, self.expire, - self.minimum)) - -class TXT: - rdlen = -1 # auto-calculate - nstring = 1 # number of character-strings - stringlen = -1 # default string length, auto-calculate - string = 'Test String' # default string - def dump(self, f): - stringlen_list = [] - string_list = [] - wirestring_list = [] - for i in range(0, self.nstring): - key_string = 'string' + str(i) - if key_string in self.__dict__: - string_list.append(self.__dict__[key_string]) - else: - string_list.append(self.string) - wirestring_list.append(encode_string(string_list[-1])) - key_stringlen = 'stringlen' + str(i) - if key_stringlen in self.__dict__: - stringlen_list.append(self.__dict__[key_stringlen]) - else: - stringlen_list.append(self.stringlen) - if stringlen_list[-1] < 0: - stringlen_list[-1] = int(len(wirestring_list[-1]) / 2) - rdlen = self.rdlen - if rdlen < 0: - rdlen = int(len(''.join(wirestring_list)) / 2) + self.nstring - f.write('\n# TXT RDATA (RDLEN=%d)\n' % rdlen) - f.write('%04x\n' % rdlen); - for i in range(0, self.nstring): - f.write('# String Len=%d, String=\"%s\"\n' % - (stringlen_list[i], string_list[i])) - f.write('%02x%s%s\n' % (stringlen_list[i], - ' ' if len(wirestring_list[i]) > 0 else '', - wirestring_list[i])) - -class RP: - '''Implements rendering RP RDATA in the wire format. - Configurable parameters are as follows: - - rdlen: 16-bit RDATA length. 
If omitted, the accurate value is auto - calculated and used; if negative, the RDLEN field will be omitted from - the output data. - - mailbox: The mailbox field. - - text: The text field. - All of these parameters have the default values and can be omitted. - ''' - rdlen = None # auto-calculate - mailbox = 'root.example.com' - text = 'rp-text.example.com' - def dump(self, f): - mailbox_wire = encode_name(self.mailbox) - text_wire = encode_name(self.text) - if self.rdlen is None: - self.rdlen = (len(mailbox_wire) + len(text_wire)) / 2 - else: - self.rdlen = int(self.rdlen) - if self.rdlen >= 0: - f.write('\n# RP RDATA (RDLEN=%d)\n' % self.rdlen) - f.write('%04x\n' % self.rdlen) - else: - f.write('\n# RP RDATA (RDLEN omitted)\n') - f.write('# MAILBOX=%s TEXT=%s\n' % (self.mailbox, self.text)) - f.write('%s %s\n' % (mailbox_wire, text_wire)) - -class NSECBASE: - '''Implements rendering NSEC/NSEC3 type bitmaps commonly used for - these RRs. The NSEC and NSEC3 classes will be inherited from this - class.''' - nbitmap = 1 # number of bitmaps - block = 0 - maplen = None # default bitmap length, auto-calculate - bitmap = '040000000003' # an arbtrarily chosen bitmap sample - def dump(self, f): - # first, construct the bitmpa data - block_list = [] - maplen_list = [] - bitmap_list = [] - for i in range(0, self.nbitmap): - key_bitmap = 'bitmap' + str(i) - if key_bitmap in self.__dict__: - bitmap_list.append(self.__dict__[key_bitmap]) - else: - bitmap_list.append(self.bitmap) - key_maplen = 'maplen' + str(i) - if key_maplen in self.__dict__: - maplen_list.append(self.__dict__[key_maplen]) - else: - maplen_list.append(self.maplen) - if maplen_list[-1] is None: # calculate it if not specified - maplen_list[-1] = int(len(bitmap_list[-1]) / 2) - key_block = 'block' + str(i) - if key_block in self.__dict__: - block_list.append(self.__dict__[key_block]) - else: - block_list.append(self.block) - - # dump RR-type specific part (NSEC or NSEC3) - self.dump_fixedpart(f, 2 * self.nbitmap + \ - int(len(''.join(bitmap_list)) / 2)) - - # dump the bitmap - for i in range(0, self.nbitmap): - f.write('# Bitmap: Block=%d, Length=%d\n' % - (block_list[i], maplen_list[i])) - f.write('%02x %02x %s\n' % - (block_list[i], maplen_list[i], bitmap_list[i])) - -class NSEC(NSECBASE): - rdlen = None # auto-calculate - nextname = 'next.example.com' - def dump_fixedpart(self, f, bitmap_totallen): - name_wire = encode_name(self.nextname) - if self.rdlen is None: - # if rdlen needs to be calculated, it must be based on the bitmap - # length, because the configured maplen can be fake. - self.rdlen = int(len(name_wire) / 2) + bitmap_totallen - f.write('\n# NSEC RDATA (RDLEN=%d)\n' % self.rdlen) - f.write('%04x\n' % self.rdlen); - f.write('# Next Name=%s (%d bytes)\n' % (self.nextname, - int(len(name_wire) / 2))) - f.write('%s\n' % name_wire) - -class NSEC3(NSECBASE): - rdlen = None # auto-calculate - hashalg = 1 # SHA-1 - optout = False # opt-out flag - mbz = 0 # other flag fields (none defined yet) - iterations = 1 - saltlen = 5 - salt = 's' * saltlen - hashlen = 20 - hash = 'h' * hashlen - def dump_fixedpart(self, f, bitmap_totallen): - if self.rdlen is None: - # if rdlen needs to be calculated, it must be based on the bitmap - # length, because the configured maplen can be fake. 
- self.rdlen = 4 + 1 + len(self.salt) + 1 + len(self.hash) \ - + bitmap_totallen - f.write('\n# NSEC3 RDATA (RDLEN=%d)\n' % self.rdlen) - f.write('%04x\n' % self.rdlen) - optout_val = 1 if self.optout else 0 - f.write('# Hash Alg=%s, Opt-Out=%d, Other Flags=%0x, Iterations=%d\n' % - (code_totext(self.hashalg, rdict_nsec3_algorithm), - optout_val, self.mbz, self.iterations)) - f.write('%02x %02x %04x\n' % - (self.hashalg, (self.mbz << 1) | optout_val, self.iterations)) - f.write("# Salt Len=%d, Salt='%s'\n" % (self.saltlen, self.salt)) - f.write('%02x%s%s\n' % (self.saltlen, - ' ' if len(self.salt) > 0 else '', - encode_string(self.salt))) - f.write("# Hash Len=%d, Hash='%s'\n" % (self.hashlen, self.hash)) - f.write('%02x%s%s\n' % (self.hashlen, - ' ' if len(self.hash) > 0 else '', - encode_string(self.hash))) - -class RRSIG: - rdlen = -1 # auto-calculate - covered = 1 # A - algorithm = 5 # RSA-SHA1 - labels = -1 # auto-calculate (#labels of signer) - originalttl = 3600 - expiration = int(time.mktime(datetime.strptime('20100131120000', - dnssec_timefmt).timetuple())) - inception = int(time.mktime(datetime.strptime('20100101120000', - dnssec_timefmt).timetuple())) - tag = 0x1035 - signer = 'example.com' - signature = 0x123456789abcdef123456789abcdef - def dump(self, f): - name_wire = encode_name(self.signer) - sig_wire = '%x' % self.signature - rdlen = self.rdlen - if rdlen < 0: - rdlen = int(18 + len(name_wire) / 2 + len(str(sig_wire)) / 2) - labels = self.labels - if labels < 0: - labels = count_namelabels(self.signer) - f.write('\n# RRSIG RDATA (RDLEN=%d)\n' % rdlen) - f.write('%04x\n' % rdlen); - f.write('# Covered=%s Algorithm=%s Labels=%d OrigTTL=%d\n' % - (code_totext(self.covered, rdict_rrtype), - code_totext(self.algorithm, rdict_algorithm), labels, - self.originalttl)) - f.write('%04x %02x %02x %08x\n' % (self.covered, self.algorithm, - labels, self.originalttl)) - f.write('# Expiration=%s, Inception=%s\n' % - (str(self.expiration), str(self.inception))) - f.write('%08x %08x\n' % (self.expiration, self.inception)) - f.write('# Tag=%d Signer=%s and Signature\n' % (self.tag, self.signer)) - f.write('%04x %s %s\n' % (self.tag, name_wire, sig_wire)) - -class TSIG(RR): - rdlen = None # auto-calculate - algorithm = 'hmac-sha256' - time_signed = 1286978795 # arbitrarily chosen default - fudge = 300 - mac_size = None # use a common value for the algorithm - mac = None # use 'x' * mac_size - original_id = 2845 # arbitrarily chosen default - error = 0 - other_len = None # 6 if error is BADTIME; otherwise 0 - other_data = None # use time_signed + fudge + 1 for BADTIME - dict_macsize = { 'hmac-md5' : 16, 'hmac-sha1' : 20, 'hmac-sha256' : 32 } - - # TSIG has some special defaults - def __init__(self): - super().__init__() - self.rr_class = 'ANY' - self.rr_ttl = 0 - - def dump(self, f): - if str(self.algorithm) == 'hmac-md5': - name_wire = encode_name('hmac-md5.sig-alg.reg.int') - else: - name_wire = encode_name(self.algorithm) - mac_size = self.mac_size - if mac_size is None: - if self.algorithm in self.dict_macsize.keys(): - mac_size = self.dict_macsize[self.algorithm] - else: - raise RuntimeError('TSIG Mac Size cannot be determined') - mac = encode_string('x' * mac_size) if self.mac is None else \ - encode_string(self.mac, mac_size) - other_len = self.other_len - if other_len is None: - # 18 = BADTIME - other_len = 6 if self.error == 18 else 0 - other_data = self.other_data - if other_data is None: - other_data = '%012x' % (self.time_signed + self.fudge + 1) \ - if self.error == 18 else '' - 
else: - other_data = encode_string(self.other_data, other_len) - if self.rdlen is None: - self.rdlen = int(len(name_wire) / 2 + 16 + len(mac) / 2 + \ - len(other_data) / 2) - self.dump_header(f, self.rdlen) - f.write('# Algorithm=%s Time-Signed=%d Fudge=%d\n' % - (self.algorithm, self.time_signed, self.fudge)) - f.write('%s %012x %04x\n' % (name_wire, self.time_signed, self.fudge)) - f.write('# MAC Size=%d MAC=(see hex)\n' % mac_size) - f.write('%04x%s\n' % (mac_size, ' ' + mac if len(mac) > 0 else '')) - f.write('# Original-ID=%d Error=%d\n' % (self.original_id, self.error)) - f.write('%04x %04x\n' % (self.original_id, self.error)) - f.write('# Other-Len=%d Other-Data=(see hex)\n' % other_len) - f.write('%04x%s\n' % (other_len, - ' ' + other_data if len(other_data) > 0 else '')) - -def get_config_param(section): - config_param = {'name' : (Name, {}), - 'header' : (DNSHeader, header_xtables), - 'question' : (DNSQuestion, question_xtables), - 'edns' : (EDNS, {}), 'a' : (A, {}), 'ns' : (NS, {}), - 'soa' : (SOA, {}), 'txt' : (TXT, {}), - 'rp' : (RP, {}), 'rrsig' : (RRSIG, {}), - 'nsec' : (NSEC, {}), 'nsec3' : (NSEC3, {}), - 'tsig' : (TSIG, {}) } - s = section - m = re.match('^([^:]+)/\d+$', section) - if m: - s = m.group(1) - return config_param[s] - -usage = '''usage: %prog [options] input_file''' - -if __name__ == "__main__": - parser = OptionParser(usage=usage) - parser.add_option('-o', '--output', action='store', dest='output', - default=None, metavar='FILE', - help='output file name [default: prefix of input_file]') - (options, args) = parser.parse_args() - - if len(args) == 0: - parser.error('input file is missing') - configfile = args[0] - - outputfile = options.output - if not outputfile: - m = re.match('(.*)\.[^.]+$', configfile) - if m: - outputfile = m.group(1) - else: - raise ValueError('output file is not specified and input file is not in the form of "output_file.suffix"') - - config = configparser.SafeConfigParser() - config.read(configfile) - - output = open(outputfile, 'w') - - print_header(output, configfile) - - # First try the 'custom' mode; if it fails assume the standard mode. - try: - sections = config.get('custom', 'sections').split(':') - except configparser.NoSectionError: - sections = ['header', 'question', 'edns'] - - for s in sections: - section_param = get_config_param(s) - (obj, xtables) = (section_param[0](), section_param[1]) - if get_config(config, s, obj, xtables): - obj.dump(output) - - output.close() diff --git a/src/lib/dns/tests/testdata/message_fromWire17.spec b/src/lib/dns/tests/testdata/message_fromWire17.spec new file mode 100644 index 0000000000..366cf051f1 --- /dev/null +++ b/src/lib/dns/tests/testdata/message_fromWire17.spec @@ -0,0 +1,22 @@ +# +# A simple DNS query message with TSIG signed +# + +[custom] +sections: header:question:tsig +[header] +id: 0x22c2 +rd: 1 +arcount: 1 +[question] +name: www.example.com +rrtype: TXT +[tsig] +as_rr: True +# TSIG QNAME won't be compressed +rr_name: www.example.com +algorithm: hmac-md5 +time_signed: 0x4e179212 +mac_size: 16 +mac: 0x8214b04634e32323d651ac60b08e6388 +original_id: 0x22c2 diff --git a/src/lib/dns/tests/testdata/message_fromWire18.spec b/src/lib/dns/tests/testdata/message_fromWire18.spec new file mode 100644 index 0000000000..0b2592a46b --- /dev/null +++ b/src/lib/dns/tests/testdata/message_fromWire18.spec @@ -0,0 +1,23 @@ +# +# Another simple DNS query message with TSIG signed. Only ID and time signed +# (and MAC as a result) are different. 
+# + +[custom] +sections: header:question:tsig +[header] +id: 0xd6e2 +rd: 1 +arcount: 1 +[question] +name: www.example.com +rrtype: TXT +[tsig] +as_rr: True +# TSIG QNAME won't be compressed +rr_name: www.example.com +algorithm: hmac-md5 +time_signed: 0x4e17b38d +mac_size: 16 +mac: 0x903b5b194a799b03a37718820c2404f2 +original_id: 0xd6e2 diff --git a/src/lib/dns/tests/testdata/message_fromWire19.spec b/src/lib/dns/tests/testdata/message_fromWire19.spec new file mode 100644 index 0000000000..8212dbfa9f --- /dev/null +++ b/src/lib/dns/tests/testdata/message_fromWire19.spec @@ -0,0 +1,20 @@ +# +# A non realistic DNS response message containing mixed types of RRs in the +# answer section in a mixed order. +# + +[custom] +sections: header:question:a/1:aaaa:a/2 +[header] +qr: 1 +ancount: 3 +[question] +name: www.example.com +rrtype: A +[a/1] +as_rr: True +[aaaa] +as_rr: True +[a/2] +as_rr: True +address: 192.0.2.2 diff --git a/src/lib/dns/tests/testdata/message_fromWire20.spec b/src/lib/dns/tests/testdata/message_fromWire20.spec new file mode 100644 index 0000000000..91986e4818 --- /dev/null +++ b/src/lib/dns/tests/testdata/message_fromWire20.spec @@ -0,0 +1,20 @@ +# +# A non realistic DNS response message containing mixed types of RRs in the +# authority section in a mixed order. +# + +[custom] +sections: header:question:a/1:aaaa:a/2 +[header] +qr: 1 +nscount: 3 +[question] +name: www.example.com +rrtype: A +[a/1] +as_rr: True +[aaaa] +as_rr: True +[a/2] +as_rr: True +address: 192.0.2.2 diff --git a/src/lib/dns/tests/testdata/message_fromWire21.spec b/src/lib/dns/tests/testdata/message_fromWire21.spec new file mode 100644 index 0000000000..cd6aac9b42 --- /dev/null +++ b/src/lib/dns/tests/testdata/message_fromWire21.spec @@ -0,0 +1,20 @@ +# +# A non realistic DNS response message containing mixed types of RRs in the +# additional section in a mixed order. +# + +[custom] +sections: header:question:a/1:aaaa:a/2 +[header] +qr: 1 +arcount: 3 +[question] +name: www.example.com +rrtype: A +[a/1] +as_rr: True +[aaaa] +as_rr: True +[a/2] +as_rr: True +address: 192.0.2.2 diff --git a/src/lib/dns/tests/testdata/message_fromWire22.spec b/src/lib/dns/tests/testdata/message_fromWire22.spec new file mode 100644 index 0000000000..a52523b1ab --- /dev/null +++ b/src/lib/dns/tests/testdata/message_fromWire22.spec @@ -0,0 +1,14 @@ +# +# A simple DNS message containing one SOA RR in the answer section. This is +# intended to be trimmed to emulate a bogus message. 
+# + +[custom] +sections: header:question:soa +[header] +qr: 1 +ancount: 1 +[question] +rrtype: SOA +[soa] +as_rr: True diff --git a/src/lib/dns/tests/testdata/message_toWire4.spec b/src/lib/dns/tests/testdata/message_toWire4.spec new file mode 100644 index 0000000000..aab7e10813 --- /dev/null +++ b/src/lib/dns/tests/testdata/message_toWire4.spec @@ -0,0 +1,27 @@ +# +# Truncated DNS response with TSIG signed +# This is expected to be a response to "fromWire17" +# + +[custom] +sections: header:question:tsig +[header] +id: 0x22c2 +rd: 1 +qr: 1 +aa: 1 +# It's "truncated": +tc: 1 +arcount: 1 +[question] +name: www.example.com +rrtype: TXT +[tsig] +as_rr: True +# TSIG QNAME won't be compressed +rr_name: www.example.com +algorithm: hmac-md5 +time_signed: 0x4e179212 +mac_size: 16 +mac: 0x88adc3811d1d6bec7c684438906fc694 +original_id: 0x22c2 diff --git a/src/lib/dns/tests/testdata/message_toWire5.spec b/src/lib/dns/tests/testdata/message_toWire5.spec new file mode 100644 index 0000000000..e97fb43ce0 --- /dev/null +++ b/src/lib/dns/tests/testdata/message_toWire5.spec @@ -0,0 +1,36 @@ +# +# A longest possible (without EDNS) DNS response with TSIG, i.e. totatl +# length should be 512 bytes. +# + +[custom] +sections: header:question:txt/1:txt/2:tsig +[header] +id: 0xd6e2 +rd: 1 +qr: 1 +aa: 1 +ancount: 2 +arcount: 1 +[question] +name: www.example.com +rrtype: TXT +[txt/1] +as_rr: True +# QNAME is fully compressed +rr_name: ptr=12 +string: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcde +[txt/2] +as_rr: True +# QNAME is fully compressed +rr_name: ptr=12 +string: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0 +[tsig] +as_rr: True +# TSIG QNAME won't be compressed +rr_name: www.example.com +algorithm: hmac-md5 +time_signed: 0x4e17b38d +mac_size: 16 +mac: 0xbe2ba477373d2496891e2fda240ee4ec +original_id: 0xd6e2 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire1.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire1.spec new file mode 100644 index 0000000000..f831313827 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire1.spec @@ -0,0 +1,3 @@ +[custom] +sections: afsdb +[afsdb] diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire2.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire2.spec new file mode 100644 index 0000000000..f33e768589 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire2.spec @@ -0,0 +1,6 @@ +[custom] +sections: name:afsdb +[name] +name: example.com +[afsdb] +server: afsdb.ptr=0 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire3.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire3.spec new file mode 100644 index 0000000000..993032f605 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire3.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +rdlen: 3 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire4.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire4.spec new file mode 100644 index 0000000000..37abf134c5 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire4.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +rdlen: 80 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire5.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire5.spec new file mode 100644 index 0000000000..0ea79dd173 --- 
/dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire5.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +server: "01234567890123456789012345678901234567890123456789012345678901234" diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_toWire1.spec b/src/lib/dns/tests/testdata/rdata_afsdb_toWire1.spec new file mode 100644 index 0000000000..19464589e1 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_toWire1.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +rdlen: -1 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_toWire2.spec b/src/lib/dns/tests/testdata/rdata_afsdb_toWire2.spec new file mode 100644 index 0000000000..c80011a488 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_toWire2.spec @@ -0,0 +1,8 @@ +[custom] +sections: name:afsdb +[name] +name: example.com. +[afsdb] +subtype: 0 +server: root.example.com +rdlen: -1 diff --git a/src/lib/dns/tests/testdata/rdata_minfo_fromWire1.spec b/src/lib/dns/tests/testdata/rdata_minfo_fromWire1.spec new file mode 100644 index 0000000000..2c43db0727 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_minfo_fromWire1.spec @@ -0,0 +1,3 @@ +[custom] +sections: minfo +[minfo] diff --git a/src/lib/dns/tests/testdata/rdata_minfo_fromWire2.spec b/src/lib/dns/tests/testdata/rdata_minfo_fromWire2.spec new file mode 100644 index 0000000000..d781cac71d --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_minfo_fromWire2.spec @@ -0,0 +1,7 @@ +[custom] +sections: name:minfo +[name] +name: a.example.com. +[minfo] +rmailbox: rmailbox.ptr=02 +emailbox: emailbox.ptr=02 diff --git a/src/lib/dns/tests/testdata/rdata_minfo_fromWire3.spec b/src/lib/dns/tests/testdata/rdata_minfo_fromWire3.spec new file mode 100644 index 0000000000..a1d4b769d9 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_minfo_fromWire3.spec @@ -0,0 +1,6 @@ +[custom] +sections: minfo +# rdlength too short +[minfo] +emailbox: emailbox.ptr=11 +rdlen: 3 diff --git a/src/lib/dns/tests/testdata/rdata_minfo_fromWire4.spec b/src/lib/dns/tests/testdata/rdata_minfo_fromWire4.spec new file mode 100644 index 0000000000..269a6ce7e2 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_minfo_fromWire4.spec @@ -0,0 +1,6 @@ +[custom] +sections: minfo +# rdlength too long +[minfo] +emailbox: emailbox.ptr=11 +rdlen: 80 diff --git a/src/lib/dns/tests/testdata/rdata_minfo_fromWire5.spec b/src/lib/dns/tests/testdata/rdata_minfo_fromWire5.spec new file mode 100644 index 0000000000..3a888e3c20 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_minfo_fromWire5.spec @@ -0,0 +1,5 @@ +[custom] +sections: minfo +# bogus rmailbox name +[minfo] +rmailbox: "01234567890123456789012345678901234567890123456789012345678901234" diff --git a/src/lib/dns/tests/testdata/rdata_minfo_fromWire6.spec b/src/lib/dns/tests/testdata/rdata_minfo_fromWire6.spec new file mode 100644 index 0000000000..c75ed8e214 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_minfo_fromWire6.spec @@ -0,0 +1,5 @@ +[custom] +sections: minfo +# bogus emailbox name +[minfo] +emailbox: "01234567890123456789012345678901234567890123456789012345678901234" diff --git a/src/lib/dns/tests/testdata/rdata_minfo_toWire1.spec b/src/lib/dns/tests/testdata/rdata_minfo_toWire1.spec new file mode 100644 index 0000000000..7b340a3904 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_minfo_toWire1.spec @@ -0,0 +1,5 @@ +[custom] +sections: minfo +[minfo] +emailbox: emailbox.ptr=09 +rdlen: -1 diff --git a/src/lib/dns/tests/testdata/rdata_minfo_toWire2.spec b/src/lib/dns/tests/testdata/rdata_minfo_toWire2.spec new file mode 100644 
index 0000000000..132f11839f --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_minfo_toWire2.spec @@ -0,0 +1,6 @@ +[custom] +sections: minfo +[minfo] +rmailbox: root.example.com. +emailbox: emailbox.ptr=05 +rdlen: -1 diff --git a/src/lib/dns/tests/testdata/rdata_minfo_toWireUncompressed1.spec b/src/lib/dns/tests/testdata/rdata_minfo_toWireUncompressed1.spec new file mode 100644 index 0000000000..d99a3813ca --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_minfo_toWireUncompressed1.spec @@ -0,0 +1,7 @@ +# +# A simplest form of MINFO: all default parameters +# +[custom] +sections: minfo +[minfo] +rdlen: -1 diff --git a/src/lib/dns/tests/testdata/rdata_minfo_toWireUncompressed2.spec b/src/lib/dns/tests/testdata/rdata_minfo_toWireUncompressed2.spec new file mode 100644 index 0000000000..0f78fcc63b --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_minfo_toWireUncompressed2.spec @@ -0,0 +1,8 @@ +# +# A simplest form of MINFO: custom rmailbox and default emailbox +# +[custom] +sections: minfo +[minfo] +rmailbox: root.example.com. +rdlen: -1 diff --git a/src/lib/dns/tests/testdata/rdata_srv_fromWire b/src/lib/dns/tests/testdata/rdata_srv_fromWire new file mode 100644 index 0000000000..dac87e9144 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_srv_fromWire @@ -0,0 +1,36 @@ +# +# various kinds of SRV RDATA stored in an input buffer +# +# RDLENGHT=21 bytes +# 0 1 + 00 15 +# 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 20 1 2(bytes) + 00 01 00 05 05 dc 01 61 07 65 78 61 6d 70 6c 65 03 63 6f 6d 00 +# +# short length +# 3 4 + 00 12 +# 5 6 7 8 9 30 1 2 3 4 5 6 7 8 9 40 1 2 3 4 5 + 00 01 00 05 05 dc 01 61 07 65 78 61 6d 70 6c 65 03 63 6f 6d 00 +# +# length too long +# 6 7 + 00 19 +# +# 8 9 50 1 2 3 4 5 6 7 8 9 60 1 2 3 4 5 6 7 8 + 00 01 00 05 05 dc 01 61 07 65 78 61 6d 70 6c 65 03 63 6f 6d 00 +# +# +# incomplete target name +# 9 70 + 00 12 +# 1 2 3 4 5 6 7 8 9 80 1 2 3 4 5 6 7 8 + 00 01 00 05 05 dc 01 61 07 65 78 61 6d 70 6c 65 03 63 +# +# +# Valid compressed target name: 'a' + pointer +# 9 90 + 00 0a +# +# 1 2 3 4 5 6 7 8 9 100 + 00 01 00 05 05 dc 01 61 c0 0a diff --git a/src/lib/dns/tests/tsig_unittest.cc b/src/lib/dns/tests/tsig_unittest.cc index ba17e70b27..7944b2939e 100644 --- a/src/lib/dns/tests/tsig_unittest.cc +++ b/src/lib/dns/tests/tsig_unittest.cc @@ -440,7 +440,7 @@ TEST_F(TSIGTest, signUsingHMACSHA224) { 0xef, 0x33, 0xa2, 0xda, 0xa1, 0x48, 0x71, 0xd3 }; { - SCOPED_TRACE("Sign test using HMAC-SHA1"); + SCOPED_TRACE("Sign test using HMAC-SHA224"); commonSignChecks(createMessageAndSign(sha1_qid, test_name, &sha1_ctx), sha1_qid, 0x4dae7d5f, expected_mac, sizeof(expected_mac), 0, 0, NULL, @@ -927,4 +927,76 @@ TEST_F(TSIGTest, tooShortMAC) { } } +TEST_F(TSIGTest, getTSIGLength) { + // Check for the most common case with various algorithms + // See the comment in TSIGContext::getTSIGLength() for calculation and + // parameter notation. 
+ // The key name (www.example.com) is the same for most cases, where n1=17 + + // hmac-md5.sig-alg.reg.int.: n2=26, x=16 + EXPECT_EQ(85, tsig_ctx->getTSIGLength()); + + // hmac-sha1: n2=11, x=20 + tsig_ctx.reset(new TSIGContext(TSIGKey(test_name, TSIGKey::HMACSHA1_NAME(), + &dummy_data[0], 20))); + EXPECT_EQ(74, tsig_ctx->getTSIGLength()); + + // hmac-sha256: n2=13, x=32 + tsig_ctx.reset(new TSIGContext(TSIGKey(test_name, + TSIGKey::HMACSHA256_NAME(), + &dummy_data[0], 32))); + EXPECT_EQ(88, tsig_ctx->getTSIGLength()); + + // hmac-sha224: n2=13, x=28 + tsig_ctx.reset(new TSIGContext(TSIGKey(test_name, + TSIGKey::HMACSHA224_NAME(), + &dummy_data[0], 28))); + EXPECT_EQ(84, tsig_ctx->getTSIGLength()); + + // hmac-sha384: n2=13, x=48 + tsig_ctx.reset(new TSIGContext(TSIGKey(test_name, + TSIGKey::HMACSHA384_NAME(), + &dummy_data[0], 48))); + EXPECT_EQ(104, tsig_ctx->getTSIGLength()); + + // hmac-sha512: n2=13, x=64 + tsig_ctx.reset(new TSIGContext(TSIGKey(test_name, + TSIGKey::HMACSHA512_NAME(), + &dummy_data[0], 64))); + EXPECT_EQ(120, tsig_ctx->getTSIGLength()); + + // bad key case: n1=len(badkey.example.com)=20, n2=26, x=0 + tsig_ctx.reset(new TSIGContext(badkey_name, TSIGKey::HMACMD5_NAME(), + keyring)); + EXPECT_EQ(72, tsig_ctx->getTSIGLength()); + + // bad sig case: n1=17, n2=26, x=0 + isc::util::detail::gettimeFunction = testGetTime<0x4da8877a>; + createMessageFromFile("message_toWire2.wire"); + tsig_ctx.reset(new TSIGContext(TSIGKey(test_name, TSIGKey::HMACMD5_NAME(), + &dummy_data[0], + dummy_data.size()))); + { + SCOPED_TRACE("Verify resulting in BADSIG"); + commonVerifyChecks(*tsig_ctx, message.getTSIGRecord(), + &received_data[0], received_data.size(), + TSIGError::BAD_SIG(), TSIGContext::RECEIVED_REQUEST); + } + EXPECT_EQ(69, tsig_ctx->getTSIGLength()); + + // bad time case: n1=17, n2=26, x=16, y=6 + isc::util::detail::gettimeFunction = testGetTime<0x4da8877a - 1000>; + tsig_ctx.reset(new TSIGContext(TSIGKey(test_name, TSIGKey::HMACMD5_NAME(), + &dummy_data[0], + dummy_data.size()))); + { + SCOPED_TRACE("Verify resulting in BADTIME"); + commonVerifyChecks(*tsig_ctx, message.getTSIGRecord(), + &received_data[0], received_data.size(), + TSIGError::BAD_TIME(), + TSIGContext::RECEIVED_REQUEST); + } + EXPECT_EQ(91, tsig_ctx->getTSIGLength()); +} + } // end namespace diff --git a/src/lib/dns/tsig.cc b/src/lib/dns/tsig.cc index 714b2a596e..1bda02105a 100644 --- a/src/lib/dns/tsig.cc +++ b/src/lib/dns/tsig.cc @@ -58,10 +58,32 @@ getTSIGTime() { } struct TSIGContext::TSIGContextImpl { - TSIGContextImpl(const TSIGKey& key) : - state_(INIT), key_(key), error_(Rcode::NOERROR()), - previous_timesigned_(0) - {} + TSIGContextImpl(const TSIGKey& key, + TSIGError error = TSIGError::NOERROR()) : + state_(INIT), key_(key), error_(error), + previous_timesigned_(0), digest_len_(0) + { + if (error == TSIGError::NOERROR()) { + // In normal (NOERROR) case, the key should be valid, and we + // should be able to pre-create a corresponding HMAC object, + // which will be likely to be used for sign or verify later. + // We do this in the constructor so that we can know the expected + // digest length in advance. The creation should normally succeed, + // but the key information could be still broken, which could + // trigger an exception inside the cryptolink module. We ignore + // it at this moment; a subsequent sign/verify operation will try + // to create the HMAC, which would also fail. 
+ try { + hmac_.reset(CryptoLink::getCryptoLink().createHMAC( + key_.getSecret(), key_.getSecretLength(), + key_.getAlgorithm()), + deleteHMAC); + } catch (const Exception&) { + return; + } + digest_len_ = hmac_->getOutputLength(); + } + } // This helper method is used from verify(). It's expected to be called // just before verify() returns. It updates internal state based on @@ -85,6 +107,23 @@ struct TSIGContext::TSIGContextImpl { return (error); } + // A shortcut method to create an HMAC object for sign/verify. If one + // has been successfully created in the constructor, return it; otherwise + // create a new one and return it. In the former case, the ownership is + // transferred to the caller; the stored HMAC will be reset after the + // call. + HMACPtr createHMAC() { + if (hmac_) { + HMACPtr ret = HMACPtr(); + ret.swap(hmac_); + return (ret); + } + return (HMACPtr(CryptoLink::getCryptoLink().createHMAC( + key_.getSecret(), key_.getSecretLength(), + key_.getAlgorithm()), + deleteHMAC)); + } + // The following three are helper methods to compute the digest for // TSIG sign/verify in order to unify the common code logic for sign() // and verify() and to keep these callers concise. @@ -111,6 +150,8 @@ struct TSIGContext::TSIGContextImpl { vector previous_digest_; TSIGError error_; uint64_t previous_timesigned_; // only meaningful for response with BADTIME + size_t digest_len_; + HMACPtr hmac_; }; void @@ -221,8 +262,7 @@ TSIGContext::TSIGContext(const Name& key_name, const Name& algorithm_name, // be used in subsequent response with a TSIG indicating a BADKEY // error. impl_ = new TSIGContextImpl(TSIGKey(key_name, algorithm_name, - NULL, 0)); - impl_->error_ = TSIGError::BAD_KEY(); + NULL, 0), TSIGError::BAD_KEY()); } else { impl_ = new TSIGContextImpl(*result.key); } @@ -232,6 +272,45 @@ TSIGContext::~TSIGContext() { delete impl_; } +size_t +TSIGContext::getTSIGLength() const { + // + // The space required for an TSIG record is: + // + // n1 bytes for the (key) name + // 2 bytes for the type + // 2 bytes for the class + // 4 bytes for the ttl + // 2 bytes for the rdlength + // n2 bytes for the algorithm name + // 6 bytes for the time signed + // 2 bytes for the fudge + // 2 bytes for the MAC size + // x bytes for the MAC + // 2 bytes for the original id + // 2 bytes for the error + // 2 bytes for the other data length + // y bytes for the other data (at most) + // --------------------------------- + // 26 + n1 + n2 + x + y bytes + // + + // Normally the digest length ("x") is the length of the underlying + // hash output. If a key related error occurred, however, the + // corresponding TSIG will be "unsigned", and the digest length will be 0. + const size_t digest_len = + (impl_->error_ == TSIGError::BAD_KEY() || + impl_->error_ == TSIGError::BAD_SIG()) ? 0 : impl_->digest_len_; + + // Other Len ("y") is normally 0; if BAD_TIME error occurred, the + // subsequent TSIG will contain 48 bits of the server current time. + const size_t other_len = (impl_->error_ == TSIGError::BAD_TIME()) ? 
6 : 0; + + return (26 + impl_->key_.getKeyName().getLength() + + impl_->key_.getAlgorithmName().getLength() + + digest_len + other_len); +} + TSIGContext::State TSIGContext::getState() const { return (impl_->state_); @@ -276,11 +355,7 @@ TSIGContext::sign(const uint16_t qid, const void* const data, return (tsig); } - HMACPtr hmac(CryptoLink::getCryptoLink().createHMAC( - impl_->key_.getSecret(), - impl_->key_.getSecretLength(), - impl_->key_.getAlgorithm()), - deleteHMAC); + HMACPtr hmac(impl_->createHMAC()); // If the context has previous MAC (either the Request MAC or its own // previous MAC), digest it. @@ -406,11 +481,7 @@ TSIGContext::verify(const TSIGRecord* const record, const void* const data, return (impl_->postVerifyUpdate(error, NULL, 0)); } - HMACPtr hmac(CryptoLink::getCryptoLink().createHMAC( - impl_->key_.getSecret(), - impl_->key_.getSecretLength(), - impl_->key_.getAlgorithm()), - deleteHMAC); + HMACPtr hmac(impl_->createHMAC()); // If the context has previous MAC (either the Request MAC or its own // previous MAC), digest it. diff --git a/src/lib/dns/tsig.h b/src/lib/dns/tsig.h index bceec25295..028d29586c 100644 --- a/src/lib/dns/tsig.h +++ b/src/lib/dns/tsig.h @@ -353,6 +353,27 @@ public: TSIGError verify(const TSIGRecord* const record, const void* const data, const size_t data_len); + /// Return the expected length of the TSIG RR after \c sign() + /// + /// This method returns the length of the TSIG RR that would be + /// produced as a result of \c sign() with the state of the context + /// at the time of the call. The expected length can be decided + /// from the key and the algorithm (which determines the MAC size if + /// included) and the recorded TSIG error. Specifically, if a key + /// related error has been identified, the MAC will be excluded; if + /// a time error has occurred, the TSIG will include "other data". + /// + /// This method is provided mainly for the convenience of the Message + /// class, which needs to know the expected TSIG length in rendering a + /// signed DNS message so that it can handle truncated messages with TSIG + /// correctly. Normal applications wouldn't need this method. The Python + /// binding for this method won't be provided for the same reason. + /// + /// \exception None + /// + /// \return The expected TSIG RR length in bytes + size_t getTSIGLength() const; + /// Return the current state of the context /// /// \note diff --git a/src/lib/exceptions/exceptions.h b/src/lib/exceptions/exceptions.h index d0f1d74748..433bb7ddcd 100644 --- a/src/lib/exceptions/exceptions.h +++ b/src/lib/exceptions/exceptions.h @@ -136,6 +136,18 @@ public: isc::Exception(file, line, what) {} }; +/// +/// \brief A generic exception that is thrown when a function is +/// not implemented. +/// +/// This may be due to unfinished implementation or in case the +/// function isn't even planned to be provided for that situation. +class NotImplemented : public Exception { +public: + NotImplemented(const char* file, size_t line, const char* what) : + isc::Exception(file, line, what) {} +}; + /// /// A shortcut macro to insert known values into exception arguments.
/// diff --git a/src/lib/log/Makefile.am b/src/lib/log/Makefile.am index 63b1dfbb70..9f5272469c 100644 --- a/src/lib/log/Makefile.am +++ b/src/lib/log/Makefile.am @@ -20,6 +20,7 @@ liblog_la_SOURCES += logger_manager_impl.cc logger_manager_impl.h liblog_la_SOURCES += logger_name.cc logger_name.h liblog_la_SOURCES += logger_specification.h liblog_la_SOURCES += logger_support.cc logger_support.h +liblog_la_SOURCES += logger_unittest_support.cc logger_unittest_support.h liblog_la_SOURCES += macros.h liblog_la_SOURCES += log_messages.cc log_messages.h liblog_la_SOURCES += message_dictionary.cc message_dictionary.h diff --git a/src/lib/log/README b/src/lib/log/README index d854dce0ba..3747cb1dcf 100644 --- a/src/lib/log/README +++ b/src/lib/log/README @@ -142,13 +142,19 @@ Points to note: the error originated from the logging library and the "WRITE_ERROR" indicates that there was a problem in a write operation. - * The replacement tokens are the strings "%1", "%2" etc. When a message - is logged, these are replaced with the arguments passed to the logging - call: %1 refers to the first argument, %2 to the second etc. Within the - message text, the placeholders can appear in any order and placeholders - can be repeated. - -* Remaining lines indicate an explanation for the preceding message. These + * The rest of the line - from the first non-space character to the + last non-space character - is taken exactly for the text + of the message. There are no restrictions on what characters may + be in this text, other than that they be printable. (This means that + both single-quote (') and double-quote (") characters are allowed.) + The message text may include replacement tokens (the strings "%1", + "%2" etc.). When a message is logged, these are replaced with the + arguments passed to the logging call: %1 refers to the first argument, + %2 to the second etc. Within the message text, the placeholders + can appear in any order and placeholders can be repeated. Otherwise, + the message is printed unmodified. + +* Remaining lines indicate an explanation for the preceding message. These are intended to be processed by a separate program and used to generate an error messages manual. They are ignored by the message compiler. @@ -232,8 +238,8 @@ Using the Logging - C++ ======================= 1. Build message header file and source file as describe above. -2. The main program unit should include a call to isc::log::initLogger() - (defined in logger_support.h) to set the logging severity, debug log +2. The main program unit must include a call to isc::log::initLogger() + (described in more detail below) to set the logging severity, debug log level, and external message file: a) The logging severity is one of the enum defined in logger.h, i.e. @@ -279,9 +285,9 @@ Using the Logging - Python ========================== 1. Build message module as describe above. -2. The main program unit should include a call to isc.log.init() to - set the to set the logging severity, debug log level, and external - message file: +2.
The main program unit must include a call to isc.log.init() + (described in more detail below) to set the logging + severity, debug log level, and external message file: a) The logging severity is one of the strings: @@ -316,6 +322,91 @@ Using the Logging - Python logger.error(LOG_WRITE_ERROR, "output.txt"); +Logging Initialization +====================== +In all cases, if an attempt is made to use a logging method before the logging +has been initialized, the program will terminate with a LoggingNotInitialized +exception. + +C++ +--- +Logging initialization is carried out by calling initLogger(). There are two +variants to the call, one for use by production programs and one for use by +unit tests. + +Variant #1, Used by Production Programs +--------------------------------------- +void isc::log::initLogger(const std::string& root, + isc::log::Severity severity = isc::log::INFO, + int dbglevel = 0, const char* file = NULL); + +This is the call that should be used by production programs: + +root +Name of the program (e.g. "b10-auth"). This is also the name of the root +logger and is used when configuring logging. + +severity +Default severity that the program will start logging with. Although this may +be overridden when the program obtains its configuration from the configuration +database, this is the severity that it uses until then. (This may be set by +a command-line parameter.) + +dbglevel +The debug level used if "severity" is set to isc::log::DEBUG. + +file +The name of a local message file. This will be read and its definitions used +to replace the compiled-in text of the messages. + + +Variant #2, Used by Unit Tests +------------------------------ + void isc::log::initLogger() + +This is the call that should be used by unit tests. In this variant, all the +options are supplied by environment variables. (It should not be used for +production programs to avoid the chance that the program operation is affected +by inadvertently-defined environment variables.) + +The environment variables are: + +B10_LOGGER_ROOT +Sets the "root" for the unit test. If not defined, the name "bind10" is used. + +B10_LOGGER_SEVERITY +The severity to set for the root logger in the unit test. Valid values are +"DEBUG", "INFO", "WARN", "ERROR", "FATAL" and "NONE". If not defined, "INFO" +is used. + +B10_LOGGER_DBGLEVEL +If B10_LOGGER_SEVERITY is set to "DEBUG", the debug level. This can be a +number between 0 and 99, and defaults to 0. + +B10_LOGGER_LOCALMSG +If defined, points to a local message file. The default is not to use a local +message file. + +B10_LOGGER_DESTINATION +The location to which log messages are written. This can be one of: + + stdout Messages are written to stdout + stderr Messages are written to stderr + syslog[:facility] Messages are written to syslog. If the optional + "facility" is used, the messages are written using + that facility. (This defaults to "local0" if not + specified.) + Anything else Interpreted as the name of a file to which output + is appended. If the file does not exist, a new one + is opened. + +In the case of "stdout", "stderr" and "syslog", they must be written exactly +as is - no leading or trailing spaces, and in lower-case.
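For illustration, a minimal sketch of how a production program might use variant #1 (the include path and the surrounding main() are assumptions, not part of this change; the program name "b10-auth" is taken from the description above):

    #include <log/logger_support.h>

    int main() {
        // Start logging as the root logger "b10-auth" at INFO severity,
        // debug level 0, with no local message file; the configuration
        // database may later override the severity.
        isc::log::initLogger("b10-auth", isc::log::INFO, 0, NULL);
        return (0);
    }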
+ +Python +------ +To be supplied + Severity Guidelines =================== diff --git a/src/lib/log/compiler/message.cc b/src/lib/log/compiler/message.cc index 68335dc35a..f74020a762 100644 --- a/src/lib/log/compiler/message.cc +++ b/src/lib/log/compiler/message.cc @@ -43,6 +43,7 @@ using namespace isc::util; static const char* VERSION = "1.0-0"; +/// \file log/compiler/message.cc /// \brief Message Compiler /// /// \b Overview
@@ -55,13 +56,16 @@ static const char* VERSION = "1.0-0"; /// \b Invocation
/// The program is invoked with the command: /// -/// message [-v | -h | \] +/// message [-v | -h | -p | -d | ] /// -/// It reads the message file and writes out two files of the same name in the -/// default directory but with extensions of .h and .cc. +/// It reads the message file and writes out two files of the same +/// name in the current working directory (unless -d is used) but +/// with extensions of .h and .cc, or .py if -p is used. /// -/// \-v causes it to print the version number and exit. \-h prints a help -/// message (and exits). +/// -v causes it to print the version number and exit. -h prints a help +/// message (and exits). -p sets the output to python. -d will make +/// it write the output file(s) to dir instead of current working +/// directory /// \brief Print Version @@ -80,11 +84,12 @@ version() { void usage() { cout << - "Usage: message [-h] [-v] [-p] \n" << + "Usage: message [-h] [-v] [-p] [-d dir] \n" << "\n" << "-h Print this message and exit\n" << "-v Print the program version and exit\n" << "-p Output python source instead of C++ ones\n" << + "-d Place output files in given directory\n" << "\n" << " is the name of the input message file.\n"; } @@ -106,7 +111,7 @@ currentTime() { // Convert to string and strip out the trailing newline string current_time = buffer; - return isc::util::str::trim(current_time); + return (isc::util::str::trim(current_time)); } @@ -127,7 +132,7 @@ sentinel(Filename& file) { string ext = file.extension(); string sentinel_text = "__" + name + "_" + ext.substr(1); isc::util::str::uppercase(sentinel_text); - return sentinel_text; + return (sentinel_text); } @@ -154,7 +159,7 @@ quoteString(const string& instring) { outstring += instring[i]; } - return outstring; + return (outstring); } @@ -177,7 +182,7 @@ sortedIdentifiers(MessageDictionary& dictionary) { } sort(ident.begin(), ident.end()); - return ident; + return (ident); } @@ -207,7 +212,7 @@ splitNamespace(string ns) { // ... and return the vector of namespace components split on the single // colon. - return isc::util::str::tokens(ns, ":"); + return (isc::util::str::tokens(ns, ":")); } @@ -249,14 +254,22 @@ writeClosingNamespace(ostream& output, const vector& ns) { /// \param file Name of the message file. The source code is written to a file /// file of the same name but with a .py suffix. /// \param dictionary The dictionary holding the message definitions. +/// \param output_directory if not null NULL, output files are written +/// to the given directory. If NULL, they are written to the current +/// working directory. /// /// \note We don't use the namespace as in C++. We don't need it, because /// python file/module works as implicit namespace as well. void -writePythonFile(const string& file, MessageDictionary& dictionary) { +writePythonFile(const string& file, MessageDictionary& dictionary, + const char* output_directory) +{ Filename message_file(file); Filename python_file(Filename(message_file.name()).useAsDefault(".py")); + if (output_directory != NULL) { + python_file.setDirectory(output_directory); + } // Open the file for writing ofstream pyfile(python_file.fullName().c_str()); @@ -291,13 +304,19 @@ writePythonFile(const string& file, MessageDictionary& dictionary) { /// \param ns Namespace in which the definitions are to be placed. An empty /// string indicates no namespace. /// \param dictionary Dictionary holding the message definitions. +/// \param output_directory if not null NULL, output files are written +/// to the given directory. 
If NULL, they are written to the current +/// working directory. void writeHeaderFile(const string& file, const vector& ns_components, - MessageDictionary& dictionary) + MessageDictionary& dictionary, const char* output_directory) { Filename message_file(file); Filename header_file(Filename(message_file.name()).useAsDefault(".h")); + if (output_directory != NULL) { + header_file.setDirectory(output_directory); + } // Text to use as the sentinels. string sentinel_text = sentinel(header_file); @@ -382,13 +401,25 @@ replaceNonAlphaNum(char c) { /// optimisation is done at link-time, not compiler-time. In this it _may_ /// decide to remove the initializer object because of a lack of references /// to it. But until BIND-10 is ported to Windows, we won't know. - +/// +/// \param file Name of the message file. The header file is written to a +/// file of the same name but with a .h suffix. +/// \param ns Namespace in which the definitions are to be placed. An empty +/// string indicates no namespace. +/// \param dictionary Dictionary holding the message definitions. +/// \param output_directory if not null NULL, output files are written +/// to the given directory. If NULL, they are written to the current +/// working directory. void writeProgramFile(const string& file, const vector& ns_components, - MessageDictionary& dictionary) + MessageDictionary& dictionary, + const char* output_directory) { Filename message_file(file); Filename program_file(Filename(message_file.name()).useAsDefault(".cc")); + if (output_directory) { + program_file.setDirectory(output_directory); + } // Open the output file for writing ofstream ccfile(program_file.fullName().c_str()); @@ -496,30 +527,35 @@ warnDuplicates(MessageReader& reader) { int main(int argc, char* argv[]) { - const char* soptions = "hvp"; // Short options + const char* soptions = "hvpd:"; // Short options optind = 1; // Ensure we start a new scan int opt; // Value of the option bool doPython = false; + const char *output_directory = NULL; while ((opt = getopt(argc, argv, soptions)) != -1) { switch (opt) { + case 'd': + output_directory = optarg; + break; + case 'p': doPython = true; break; case 'h': usage(); - return 0; + return (0); case 'v': version(); - return 0; + return (0); default: // A message will have already been output about the error. - return 1; + return (1); } } @@ -527,11 +563,11 @@ main(int argc, char* argv[]) { if (optind < (argc - 1)) { cout << "Error: excess arguments in command line\n"; usage(); - return 1; + return (1); } else if (optind >= argc) { cout << "Error: missing message file\n"; usage(); - return 1; + return (1); } string message_file = argv[optind]; @@ -552,7 +588,7 @@ main(int argc, char* argv[]) { } // Write the whole python file - writePythonFile(message_file, dictionary); + writePythonFile(message_file, dictionary, output_directory); } else { // Get the namespace into which the message definitions will be put and // split it into components. @@ -560,16 +596,18 @@ main(int argc, char* argv[]) { splitNamespace(reader.getNamespace()); // Write the header file. - writeHeaderFile(message_file, ns_components, dictionary); + writeHeaderFile(message_file, ns_components, dictionary, + output_directory); // Write the file that defines the message symbols and text - writeProgramFile(message_file, ns_components, dictionary); + writeProgramFile(message_file, ns_components, dictionary, + output_directory); } // Finally, warn of any duplicates encountered. 
warnDuplicates(reader); } - catch (MessageException& e) { + catch (const MessageException& e) { // Create an error message from the ID and the text MessageDictionary& global = MessageDictionary::globalDictionary(); string text = e.id(); @@ -583,9 +621,9 @@ main(int argc, char* argv[]) { cerr << text << "\n"; - return 1; + return (1); } - return 0; + return (0); } diff --git a/src/lib/log/log_formatter.h b/src/lib/log/log_formatter.h index c81d4ea21a..ca23844f49 100644 --- a/src/lib/log/log_formatter.h +++ b/src/lib/log/log_formatter.h @@ -18,12 +18,28 @@ #include #include #include + +#include #include #include namespace isc { namespace log { +/// \brief Format Failure +/// +/// This exception is used to wrap a bad_lexical_cast exception thrown during +/// formatting an argument. + +class FormatFailure : public isc::Exception { +public: + FormatFailure(const char* file, size_t line, const char* what) : + isc::Exception(file, line, what) + {} +}; + + +/// /// \brief The internal replacement routine /// /// This is used internally by the Formatter. Replaces a placeholder @@ -156,13 +172,29 @@ public: /// \param arg The argument to place into the placeholder. template Formatter& arg(const Arg& value) { if (logger_) { - return (arg(boost::lexical_cast(value))); + try { + return (arg(boost::lexical_cast(value))); + } catch (const boost::bad_lexical_cast& ex) { + + // A bad_lexical_cast during a conversion to a string is + // *extremely* unlikely to fail. However, there is nothing + // in the documentation that rules it out, so we need to handle + // it. As it is a potentially very serious problem, throw the + // exception detailing the problem with as much information as + // we can. (Note that this does not include 'value' - + // boost::lexical_cast failed to convert it to a string, so an + // attempt to do so here would probably fail as well.) + isc_throw(FormatFailure, "bad_lexical_cast in call to " + "Formatter::arg(): " << ex.what()); + } } else { return (*this); } } /// \brief String version of arg. + /// + /// \param arg The text to place into the placeholder. Formatter& arg(const std::string& arg) { if (logger_) { // Note that this method does a replacement and returns the @@ -179,7 +211,6 @@ public: } return (*this); } - }; } diff --git a/src/lib/log/log_messages.cc b/src/lib/log/log_messages.cc index a515959769..f60898cd85 100644 --- a/src/lib/log/log_messages.cc +++ b/src/lib/log/log_messages.cc @@ -1,4 +1,4 @@ -// File created from log_messages.mes on Wed Jun 22 11:54:57 2011 +// File created from log_messages.mes on Thu Jul 7 15:32:06 2011 #include #include diff --git a/src/lib/log/log_messages.h b/src/lib/log/log_messages.h index 476f68601b..10e150196b 100644 --- a/src/lib/log/log_messages.h +++ b/src/lib/log/log_messages.h @@ -1,4 +1,4 @@ -// File created from log_messages.mes on Wed Jun 22 11:54:57 2011 +// File created from log_messages.mes on Thu Jul 7 15:32:06 2011 #ifndef __LOG_MESSAGES_H #define __LOG_MESSAGES_H diff --git a/src/lib/log/log_messages.mes b/src/lib/log/log_messages.mes index 697ac924ef..f150f3965f 100644 --- a/src/lib/log/log_messages.mes +++ b/src/lib/log/log_messages.mes @@ -28,23 +28,23 @@ destination should be one of "console", "file", or "syslog". % LOG_BAD_SEVERITY unrecognized log severity: %1 A logger severity value was given that was not recognized. The severity -should be one of "DEBUG", "INFO", "WARN", "ERROR", or "FATAL". +should be one of "DEBUG", "INFO", "WARN", "ERROR", "FATAL" or "NONE". 
% LOG_BAD_STREAM bad log console output stream: %1 -A log console output stream was given that was not recognized. The output -stream should be one of "stdout", or "stderr" +Logging has been configured so that output is written to the terminal +(console) but the stream on which it is to be written is not recognised. +Allowed values are "stdout" and "stderr". % LOG_DUPLICATE_MESSAGE_ID duplicate message ID (%1) in compiled code -During start-up, BIND10 detected that the given message identification had -been defined multiple times in the BIND10 code. - -This has no ill-effects other than the possibility that an erronous -message may be logged. However, as it is indicative of a programming -error, please log a bug report. +During start-up, BIND 10 detected that the given message identification +had been defined multiple times in the BIND 10 code. This indicates a +programming error; please submit a bug report. % LOG_DUPLICATE_NAMESPACE line %1: duplicate $NAMESPACE directive found When reading a message file, more than one $NAMESPACE directive was found. -Such a condition is regarded as an error and the read will be abandoned. +(This directive is used to set a C++ namespace when generating header +files during software development.) Such a condition is regarded as an +error and the read will be abandoned. % LOG_INPUT_OPEN_FAIL unable to open message file %1 for input: %2 The program was not able to open the specified input message file for @@ -99,10 +99,10 @@ There may be several reasons why this message may appear: - The program outputting the message may not use that particular message (e.g. it originates in a module not used by the program.) -- The local file was written for an earlier version of the BIND10 software +- The local file was written for an earlier version of the BIND 10 software and the later version no longer generates that message. -Whatever the reason, there is no impact on the operation of BIND10. +Whatever the reason, there is no impact on the operation of BIND 10. % LOG_OPEN_OUTPUT_FAIL unable to open %1 for output: %2 Originating within the logging code, the program was not able to open @@ -115,7 +115,7 @@ This error is generated when the compiler finds a $PREFIX directive with more than one argument. Note: the $PREFIX directive is deprecated and will be removed in a future -version of BIND10. +version of BIND 10. % LOG_PREFIX_INVALID_ARG line %1: $PREFIX directive has an invalid argument ('%2') Within a message file, the $PREFIX directive takes a single argument, @@ -123,13 +123,13 @@ a prefix to be added to the symbol names when a C++ file is created. As such, it must adhere to restrictions on C++ symbol names (e.g. may only contain alphanumeric characters or underscores, and may not start with a digit). A $PREFIX directive was found with an argument (given -in the message) that violates those restictions. +in the message) that violates those restrictions. Note: the $PREFIX directive is deprecated and will be removed in a future -version of BIND10. +version of BIND 10. % LOG_READING_LOCAL_FILE reading local message file %1 -This is an informational message output by BIND10 when it starts to read +This is an informational message output by BIND 10 when it starts to read a local message file. (A local message file may replace the text of one or more messages; the ID of the message will not be changed though.) 
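For illustration only (the replacement wording below is invented), a local
message file uses the same "% <identifier> <text>" layout as the compiled-in
message file, one line per message whose text is to be replaced:

    % LOG_WRITE_ERROR could not write to %1 (error %2)
    % LOG_READING_LOCAL_FILE now reading site-local message file %1

Only the text of a message is replaced, never its ID; an entry whose ID does
not match any compiled-in message is reported but has no impact on operation,
as described above.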
diff --git a/src/lib/log/logger_support.cc b/src/lib/log/logger_support.cc index 73323a03f7..2097136228 100644 --- a/src/lib/log/logger_support.cc +++ b/src/lib/log/logger_support.cc @@ -12,26 +12,9 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE -/// \brief Temporary Logger Support -/// -/// Performs run-time initialization of the logger system. In particular, it -/// is passed information from the command line and: -/// -/// a) Sets the severity of the messages being logged (and debug level if -/// appropriate). -/// b) Reads in the local message file is one has been supplied. -/// -/// These functions will be replaced once the code has been written to obtain -/// the logging parameters from the configuration database. - -#include -#include -#include #include - -#include -#include #include +#include using namespace std; @@ -67,60 +50,5 @@ initLogger(const string& root, isc::log::Severity severity, int dbglevel, LoggerManager::init(root, severity, dbglevel, file); } -// Logger Run-Time Initialization via Environment Variables -void initLogger(isc::log::Severity severity, int dbglevel) { - - // Root logger name is defined by the environment variable B10_LOGGER_ROOT. - // If not present, the name is "bind10". - const char* DEFAULT_ROOT = "bind10"; - const char* root = getenv("B10_LOGGER_ROOT"); - if (! root) { - root = DEFAULT_ROOT; - } - - // Set the logging severity. The environment variable is - // B10_LOGGER_SEVERITY, and can be one of "DEBUG", "INFO", "WARN", "ERROR" - // of "FATAL". Note that the string must be in upper case with no leading - // of trailing blanks. - const char* sev_char = getenv("B10_LOGGER_SEVERITY"); - if (sev_char) { - severity = isc::log::getSeverity(sev_char); - } - - // If the severity is debug, get the debug level (environment variable - // B10_LOGGER_DBGLEVEL), which should be in the range 0 to 99. - if (severity == isc::log::DEBUG) { - const char* dbg_char = getenv("B10_LOGGER_DBGLEVEL"); - if (dbg_char) { - int level = 0; - try { - level = boost::lexical_cast(dbg_char); - if (level < MIN_DEBUG_LEVEL) { - cerr << "**ERROR** debug level of " << level - << " is invalid - a value of " << MIN_DEBUG_LEVEL - << " will be used\n"; - level = MIN_DEBUG_LEVEL; - } else if (level > MAX_DEBUG_LEVEL) { - cerr << "**ERROR** debug level of " << level - << " is invalid - a value of " << MAX_DEBUG_LEVEL - << " will be used\n"; - level = MAX_DEBUG_LEVEL; - } - } catch (...) { - // Error, but not fatal to the test - cerr << "**ERROR** Unable to translate " - "B10_LOGGER_DBGLEVEL - a value of 0 will be used\n"; - } - dbglevel = level; - } - } - - /// Set the local message file - const char* localfile = getenv("B10_LOGGER_LOCALMSG"); - - // Initialize logging - initLogger(root, severity, dbglevel, localfile); -} - } // namespace log } // namespace isc diff --git a/src/lib/log/logger_support.h b/src/lib/log/logger_support.h index 4bc8acc195..4ce3cedcd0 100644 --- a/src/lib/log/logger_support.h +++ b/src/lib/log/logger_support.h @@ -19,6 +19,13 @@ #include #include +#include + +/// \file +/// \brief Logging initialization functions +/// +/// Contains a set of functions relating to logging initialization that are +/// used by the production code. namespace isc { namespace log { @@ -33,17 +40,13 @@ namespace log { /// \return true if logging has been initialized, false if not bool isLoggingInitialized(); -/// \brief Set "logging initialized" flag -/// -/// Sets the state of the "logging initialized" flag. 
+/// \brief Set state of "logging initialized" flag /// /// \param state State to set the flag to. (This is expected to be "true" - the /// default - for all code apart from specific unit tests.) void setLoggingInitialized(bool state = true); - - -/// \brief Run-Time Initialization +/// \brief Run-time initialization /// /// Performs run-time initialization of the logger in particular supplying: /// @@ -62,43 +65,7 @@ void initLogger(const std::string& root, isc::log::Severity severity = isc::log::INFO, int dbglevel = 0, const char* file = NULL); - -/// \brief Run-Time Initialization from Environment -/// -/// Performs run-time initialization of the logger via the setting of -/// environment variables. These are: -/// -/// B10_LOGGER_ROOT -/// Name of the root logger. If not given, the string "bind10" will be used. -/// -/// B10_LOGGER_SEVERITY -/// Severity of messages that will be logged. This must be one of the strings -/// "DEBUG", "INFO", "WARN", "ERROR", "FATAL" or "NONE". (Must be upper case -/// and must not contain leading or trailing spaces.) If not specified (or if -/// specified but incorrect), the default passed as argument to this function -/// (currently INFO) will be used. -/// -/// B10_LOGGER_DBGLEVEL -/// Ignored if the level is not DEBUG, this should be a number between 0 and -/// 99 indicating the logging severity. The default is 0. If outside these -/// limits or if not a number, The value passed to this function (default -/// of 0) is used. -/// -/// B10_LOGGER_LOCALMSG -/// If defined, the path specification of a file that contains message -/// definitions replacing ones in the default dictionary. -/// -/// Any errors in the settings cause messages to be output to stderr. -/// -/// This function is aimed at test programs, allowing the default settings to -/// be overridden by the tester. It is not intended for use in production -/// code. - -void initLogger(isc::log::Severity severity = isc::log::INFO, - int dbglevel = 0); - } // namespace log } // namespace isc - #endif // __LOGGER_SUPPORT_H diff --git a/src/lib/log/logger_unittest_support.cc b/src/lib/log/logger_unittest_support.cc new file mode 100644 index 0000000000..a0969be6bc --- /dev/null +++ b/src/lib/log/logger_unittest_support.cc @@ -0,0 +1,175 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +using namespace std; + +namespace isc { +namespace log { + +// Get the logging severity. This is defined by the environment variable +// B10_LOGGER_SEVERITY, and can be one of "DEBUG", "INFO", "WARN", "ERROR" +// of "FATAL". (Note that the string must be in upper case with no leading +// of trailing blanks.) 
If not present, the default severity passed to the +// function is returned. +isc::log::Severity +b10LoggerSeverity(isc::log::Severity defseverity) { + const char* sev_char = getenv("B10_LOGGER_SEVERITY"); + if (sev_char) { + return (isc::log::getSeverity(sev_char)); + } + return (defseverity); +} + +// Get the debug level. This is defined by the envornment variable +// B10_LOGGER_DBGLEVEL. If not defined, a default value passed to the function +// is returned. +int +b10LoggerDbglevel(int defdbglevel) { + const char* dbg_char = getenv("B10_LOGGER_DBGLEVEL"); + if (dbg_char) { + int level = 0; + try { + level = boost::lexical_cast(dbg_char); + if (level < MIN_DEBUG_LEVEL) { + std::cerr << "**ERROR** debug level of " << level + << " is invalid - a value of " << MIN_DEBUG_LEVEL + << " will be used\n"; + level = MIN_DEBUG_LEVEL; + } else if (level > MAX_DEBUG_LEVEL) { + std::cerr << "**ERROR** debug level of " << level + << " is invalid - a value of " << MAX_DEBUG_LEVEL + << " will be used\n"; + level = MAX_DEBUG_LEVEL; + } + } catch (...) { + // Error, but not fatal to the test + std::cerr << "**ERROR** Unable to translate " + "B10_LOGGER_DBGLEVEL - a value of 0 will be used\n"; + } + return (level); + } + + return (defdbglevel); +} + + +// Reset characteristics of the root logger to that set by the environment +// variables B10_LOGGER_SEVERITY, B10_LOGGER_DBGLEVEL and B10_LOGGER_DESTINATION. + +void +resetUnitTestRootLogger() { + + using namespace isc::log; + + // Constants: not declared static as this is function is expected to be + // called once only + const string DEVNULL = "/dev/null"; + const string STDOUT = "stdout"; + const string STDERR = "stderr"; + const string SYSLOG = "syslog"; + const string SYSLOG_COLON = "syslog:"; + + // Get the destination. If not specified, assume /dev/null. (The default + // severity for unit tests is DEBUG, which generates a lot of output. + // Routing the logging to /dev/null will suppress that, whilst still + // ensuring that the code paths are tested.) + const char* destination = getenv("B10_LOGGER_DESTINATION"); + const string dest((destination == NULL) ? DEVNULL : destination); + + // Prepare the objects to define the logging specification + LoggerSpecification spec(getRootLoggerName(), + b10LoggerSeverity(isc::log::DEBUG), + b10LoggerDbglevel(isc::log::MAX_DEBUG_LEVEL)); + OutputOption option; + + // Set up output option according to destination specification + if (dest == STDOUT) { + option.destination = OutputOption::DEST_CONSOLE; + option.stream = OutputOption::STR_STDOUT; + + } else if (dest == STDERR) { + option.destination = OutputOption::DEST_CONSOLE; + option.stream = OutputOption::STR_STDERR; + + } else if (dest == SYSLOG) { + option.destination = OutputOption::DEST_SYSLOG; + // Use default specified in OutputOption constructor for the + // syslog destination + + } else if (dest.find(SYSLOG_COLON) == 0) { + option.destination = OutputOption::DEST_SYSLOG; + // Must take account of the string actually being "syslog:" + if (dest == SYSLOG_COLON) { + cerr << "**ERROR** value for B10_LOGGER_DESTINATION of " << + SYSLOG_COLON << " is invalid, " << SYSLOG << + " will be used instead\n"; + // Use default for logging facility + + } else { + // Everything else in the string is the facility name + option.facility = dest.substr(SYSLOG_COLON.size()); + } + + } else { + // Not a recognised destination, assume a file. + option.destination = OutputOption::DEST_FILE; + option.filename = dest; + } + + // ... 
and set the destination + spec.addOutputOption(option); + LoggerManager manager; + manager.process(spec); +} + + +// Logger Run-Time Initialization via Environment Variables +void initLogger(isc::log::Severity severity, int dbglevel) { + + // Root logger name is defined by the environment variable B10_LOGGER_ROOT. + // If not present, the name is "bind10". + const char* DEFAULT_ROOT = "bind10"; + const char* root = getenv("B10_LOGGER_ROOT"); + if (! root) { + root = DEFAULT_ROOT; + } + + // Set the local message file + const char* localfile = getenv("B10_LOGGER_LOCALMSG"); + + // Initialize logging + initLogger(root, isc::log::DEBUG, isc::log::MAX_DEBUG_LEVEL, localfile); + + // Now reset the output destination of the root logger, overriding + // the default severity, debug level and destination with those specified + // in the environment variables. (The two-step approach is used as the + // resetUnitTestRootLogger() function is used in several + // places in the BIND 10 tests, and it avoids duplicating code.) + resetUnitTestRootLogger(); +} + +} // namespace log +} // namespace isc diff --git a/src/lib/log/logger_unittest_support.h b/src/lib/log/logger_unittest_support.h new file mode 100644 index 0000000000..ce9121b486 --- /dev/null +++ b/src/lib/log/logger_unittest_support.h @@ -0,0 +1,126 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __LOGGER_UNITTEST_SUPPORT_H +#define __LOGGER_UNITTEST_SUPPORT_H + +#include +#include + +/// \file +/// \brief Miscellaneous logging functions used by the unit tests. +/// +/// As the configuration database is usually unavailable during unit tests, +/// the functions defined here allow a limited amount of logging configuration +/// through the use of environment variables. + +namespace isc { +namespace log { + +/// \brief Run-Time Initialization for Unit Tests from Environment +/// +/// Performs run-time initialization of the logger via the setting of +/// environment variables. These are: +/// +/// - B10_LOGGER_ROOT\n +/// Name of the root logger. If not given, the string "bind10" will be used. +/// +/// - B10_LOGGER_SEVERITY\n +/// Severity of messages that will be logged. This must be one of the strings +/// "DEBUG", "INFO", "WARN", "ERROR", "FATAL" or "NONE". (Must be upper case +/// and must not contain leading or trailing spaces.) If not specified (or if +/// specified but incorrect), the default passed as argument to this function +/// (currently DEBUG) will be used. +/// +/// - B10_LOGGER_DBGLEVEL\n +/// Ignored if the level is not DEBUG, this should be a number between 0 and +/// 99 indicating the debug level. The default is 0. If outside these +/// limits or if not a number, the value passed to this function (default +/// of MAX_DEBUG_LEVEL) is used. 
+/// +/// - B10_LOGGER_LOCALMSG\n +/// If defined, the path specification of a file that contains message +/// definitions replacing ones in the default dictionary. +/// +/// - B10_LOGGER_DESTINATION\n +/// If defined, the destination of the logging output. This can be one of: +/// - \c stdout Send output to stdout. +/// - \c stderr Send output to stderr +/// - \c syslog Send output to syslog using the facility local0. +/// - \c syslog:xxx Send output to syslog, using the facility xxx. ("xxx" +/// should be one of the syslog facilities such as "local0".) There must +/// be a colon between "syslog" and "xxx +/// - \c other Anything else is interpreted as the name of a file to which +/// output is appended. If the file does not exist, it is created. +/// +/// Any errors in the settings cause messages to be output to stderr. +/// +/// This function is aimed at test programs, allowing the default settings to +/// be overridden by the tester. It is not intended for use in production +/// code. +/// +/// TODO: Rename. This function overloads the initLogger() function that can +/// be used to initialize production programs. This may lead to confusion. +void initLogger(isc::log::Severity severity = isc::log::DEBUG, + int dbglevel = isc::log::MAX_DEBUG_LEVEL); + + +/// \brief Obtains logging severity from B10_LOGGER_SEVERITY +/// +/// Support function called by the unit test logging initialization code. +/// It returns the logging severity defined by B10_LOGGER_SEVERITY. If +/// not defined it returns the default passed to it. +/// +/// \param defseverity Default severity used if B10_LOGGER_SEVERITY is not +// defined. +/// +/// \return Severity to use for the logging. +isc::log::Severity b10LoggerSeverity(isc::log::Severity defseverity); + + +/// \brief Obtains logging debug level from B10_LOGGER_DBGLEVEL +/// +/// Support function called by the unit test logging initialization code. +/// It returns the logging debug level defined by B10_LOGGER_DBGLEVEL. If +/// not defined, it returns the default passed to it. +/// +/// N.B. If there is an error, a message is written to stderr and a value +/// related to the error is used. (This is because (a) logging is not yet +/// initialized, hence only the error stream is known to exist, and (b) this +/// function is only used in unit test logging initialization, so incorrect +/// selection of a level is not really an issue.) +/// +/// \param defdbglevel Default debug level to be used if B10_LOGGER_DBGLEVEL +/// is not defined. +/// +/// \return Debug level to use. +int b10LoggerDbglevel(int defdbglevel); + + +/// \brief Reset root logger characteristics +/// +/// This is a simplified interface into the resetting of the characteristics +/// of the root logger. It is aimed for use in unit tests and resets the +/// characteristics of the root logger to use a severity, debug level and +/// destination set by the environment variables B10_LOGGER_SEVERITY, +/// B10_LOGGER_DBGLEVEL and B10_LOGGER_DESTINATION. 
+void +resetUnitTestRootLogger(); + +} // namespace log +} // namespace isc + + + +#endif // __LOGGER_UNITTEST_SUPPORT_H diff --git a/src/lib/log/tests/Makefile.am b/src/lib/log/tests/Makefile.am index cd2ae2920f..069a7b4218 100644 --- a/src/lib/log/tests/Makefile.am +++ b/src/lib/log/tests/Makefile.am @@ -51,13 +51,26 @@ logger_example_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) logger_example_LDFLAGS = $(AM_LDFLAGS) $(LOG4CPLUS_LDFLAGS) logger_example_LDADD = $(top_builddir)/src/lib/log/liblog.la logger_example_LDADD += $(top_builddir)/src/lib/util/libutil.la +logger_example_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la + +check_PROGRAMS += init_logger_test +init_logger_test_SOURCES = init_logger_test.cc +init_logger_test_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) +init_logger_test_LDFLAGS = $(AM_LDFLAGS) $(LOG4CPLUS_LDFLAGS) +init_logger_test_LDADD = $(top_builddir)/src/lib/log/liblog.la +init_logger_test_LDADD += $(top_builddir)/src/lib/util/libutil.la +init_logger_test_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la noinst_PROGRAMS = $(TESTS) -# Additional test using the shell -PYTESTS = console_test.sh local_file_test.sh severity_test.sh +# Additional test using the shell. These are principally tests +# where the global logging environment is affected, and where the +# output needs to be compared with stored output (where "cut" and +# "diff" are useful utilities). + check-local: $(SHELL) $(abs_builddir)/console_test.sh $(SHELL) $(abs_builddir)/destination_test.sh + $(SHELL) $(abs_builddir)/init_logger_test.sh $(SHELL) $(abs_builddir)/local_file_test.sh $(SHELL) $(abs_builddir)/severity_test.sh diff --git a/src/lib/log/tests/console_test.sh.in b/src/lib/log/tests/console_test.sh.in index 7ef2684471..a16dc23187 100755 --- a/src/lib/log/tests/console_test.sh.in +++ b/src/lib/log/tests/console_test.sh.in @@ -13,8 +13,6 @@ # OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR # PERFORMANCE OF THIS SOFTWARE. -# \brief -# # The logger supports the idea of a "console" logger than logs to either stdout # or stderr. This test checks that both these options work. diff --git a/src/lib/log/tests/destination_test.sh.in b/src/lib/log/tests/destination_test.sh.in index 41a52ee9ad..1cfb9fb4f6 100755 --- a/src/lib/log/tests/destination_test.sh.in +++ b/src/lib/log/tests/destination_test.sh.in @@ -13,10 +13,7 @@ # OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR # PERFORMANCE OF THIS SOFTWARE. -# \brief Severity test -# -# Checks that the logger will limit the output of messages less severy than -# the severity/debug setting. +# Checks that the logger will route messages to the chosen destination. testname="Destination test" echo $testname diff --git a/src/lib/log/tests/init_logger_test.cc b/src/lib/log/tests/init_logger_test.cc new file mode 100644 index 0000000000..104c0780f3 --- /dev/null +++ b/src/lib/log/tests/init_logger_test.cc @@ -0,0 +1,42 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include +#include + +using namespace isc::log; + +/// \brief Test InitLogger +/// +/// A program used in testing the logger that initializes logging using +/// initLogger(), then outputs several messages at different severities and +/// debug levels. An external script sets the environment variables and checks +/// that they have the desired effect. + +int +main(int, char**) { + initLogger(); + Logger logger("log"); + + LOG_DEBUG(logger, 0, LOG_BAD_DESTINATION).arg("debug-0"); + LOG_DEBUG(logger, 50, LOG_BAD_DESTINATION).arg("debug-50"); + LOG_DEBUG(logger, 99, LOG_BAD_DESTINATION).arg("debug-99"); + LOG_INFO(logger, LOG_BAD_SEVERITY).arg("info"); + LOG_WARN(logger, LOG_BAD_STREAM).arg("warn"); + LOG_ERROR(logger, LOG_DUPLICATE_MESSAGE_ID).arg("error"); + LOG_FATAL(logger, LOG_NO_MESSAGE_ID).arg("fatal"); + + return (0); +} diff --git a/src/lib/log/tests/init_logger_test.sh.in b/src/lib/log/tests/init_logger_test.sh.in new file mode 100755 index 0000000000..795419bf66 --- /dev/null +++ b/src/lib/log/tests/init_logger_test.sh.in @@ -0,0 +1,110 @@ +#!/bin/sh +# Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +# +# Permission to use, copy, modify, and/or distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +# AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +# PERFORMANCE OF THIS SOFTWARE. + +# Checks that the initLogger() call uses for unit tests respects the setting of +# the environment variables. + +testname="initLogger test" +echo $testname + +failcount=0 +tempfile=@abs_builddir@/init_logger_test_tempfile_$$ +destfile=@abs_builddir@/init_logger_test_destfile_$$ + +passfail() { + if [ $1 -eq 0 ]; then + echo " pass" + else + echo " FAIL" + failcount=`expr $failcount + $1` + fi +} + +echo "1. Checking that B10_LOGGER_SEVERITY/B10_LOGGER_DBGLEVEL work" + +echo -n " - severity=DEBUG, dbglevel=99: " +cat > $tempfile << . +DEBUG [bind10.log] LOG_BAD_DESTINATION unrecognized log destination: debug-0 +DEBUG [bind10.log] LOG_BAD_DESTINATION unrecognized log destination: debug-50 +DEBUG [bind10.log] LOG_BAD_DESTINATION unrecognized log destination: debug-99 +INFO [bind10.log] LOG_BAD_SEVERITY unrecognized log severity: info +WARN [bind10.log] LOG_BAD_STREAM bad log console output stream: warn +ERROR [bind10.log] LOG_DUPLICATE_MESSAGE_ID duplicate message ID (error) in compiled code +FATAL [bind10.log] LOG_NO_MESSAGE_ID line fatal: message definition line found without a message ID +. +B10_LOGGER_DESTINATION=stdout B10_LOGGER_SEVERITY=DEBUG B10_LOGGER_DBGLEVEL=99 ./init_logger_test | \ + cut -d' ' -f3- | diff $tempfile - +passfail $? + +echo -n " - severity=DEBUG, dbglevel=50: " +cat > $tempfile << . 
+DEBUG [bind10.log] LOG_BAD_DESTINATION unrecognized log destination: debug-0 +DEBUG [bind10.log] LOG_BAD_DESTINATION unrecognized log destination: debug-50 +INFO [bind10.log] LOG_BAD_SEVERITY unrecognized log severity: info +WARN [bind10.log] LOG_BAD_STREAM bad log console output stream: warn +ERROR [bind10.log] LOG_DUPLICATE_MESSAGE_ID duplicate message ID (error) in compiled code +FATAL [bind10.log] LOG_NO_MESSAGE_ID line fatal: message definition line found without a message ID +. +B10_LOGGER_DESTINATION=stdout B10_LOGGER_SEVERITY=DEBUG B10_LOGGER_DBGLEVEL=50 ./init_logger_test | \ + cut -d' ' -f3- | diff $tempfile - +passfail $? + +echo -n " - severity=WARN: " +cat > $tempfile << . +WARN [bind10.log] LOG_BAD_STREAM bad log console output stream: warn +ERROR [bind10.log] LOG_DUPLICATE_MESSAGE_ID duplicate message ID (error) in compiled code +FATAL [bind10.log] LOG_NO_MESSAGE_ID line fatal: message definition line found without a message ID +. +B10_LOGGER_DESTINATION=stdout B10_LOGGER_SEVERITY=WARN ./init_logger_test | \ + cut -d' ' -f3- | diff $tempfile - +passfail $? + +echo "2. Checking that B10_LOGGER_DESTINATION works" + +echo -n " - stdout: " +cat > $tempfile << . +FATAL [bind10.log] LOG_NO_MESSAGE_ID line fatal: message definition line found without a message ID +. +rm -f $destfile +B10_LOGGER_SEVERITY=FATAL B10_LOGGER_DESTINATION=stdout ./init_logger_test 1> $destfile +cut -d' ' -f3- $destfile | diff $tempfile - +passfail $? + +echo -n " - stderr: " +rm -f $destfile +B10_LOGGER_SEVERITY=FATAL B10_LOGGER_DESTINATION=stderr ./init_logger_test 2> $destfile +cut -d' ' -f3- $destfile | diff $tempfile - +passfail $? + +echo -n " - file: " +rm -f $destfile +B10_LOGGER_SEVERITY=FATAL B10_LOGGER_DESTINATION=$destfile ./init_logger_test +cut -d' ' -f3- $destfile | diff $tempfile - +passfail $? + +# Note: can't automatically test syslog output. + +if [ $failcount -eq 0 ]; then + echo "PASS: $testname" +elif [ $failcount -eq 1 ]; then + echo "FAIL: $testname - 1 test failed" +else + echo "FAIL: $testname - $failcount tests failed" +fi + +# Tidy up. +rm -f $tempfile $destfile + +exit $failcount diff --git a/src/lib/log/tests/local_file_test.sh.in b/src/lib/log/tests/local_file_test.sh.in index d76f48f619..9b898e6e21 100755 --- a/src/lib/log/tests/local_file_test.sh.in +++ b/src/lib/log/tests/local_file_test.sh.in @@ -13,8 +13,6 @@ # OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR # PERFORMANCE OF THIS SOFTWARE. -# \brief Local message file test -# # Checks that a local message file can override the definitions in the message # dictionary. 
diff --git a/src/lib/log/tests/logger_level_impl_unittest.cc b/src/lib/log/tests/logger_level_impl_unittest.cc index 0ded7f9c05..dacd2023d5 100644 --- a/src/lib/log/tests/logger_level_impl_unittest.cc +++ b/src/lib/log/tests/logger_level_impl_unittest.cc @@ -20,6 +20,7 @@ #include #include +#include #include using namespace isc::log; @@ -27,8 +28,10 @@ using namespace std; class LoggerLevelImplTest : public ::testing::Test { protected: - LoggerLevelImplTest() - {} + LoggerLevelImplTest() { + // Ensure logging set to default for unit tests + resetUnitTestRootLogger(); + } ~LoggerLevelImplTest() {} diff --git a/src/lib/log/tests/logger_level_unittest.cc b/src/lib/log/tests/logger_level_unittest.cc index 8c98091d5f..641a6cccb7 100644 --- a/src/lib/log/tests/logger_level_unittest.cc +++ b/src/lib/log/tests/logger_level_unittest.cc @@ -20,7 +20,7 @@ #include #include #include -#include +#include using namespace isc; using namespace isc::log; @@ -29,7 +29,9 @@ using namespace std; class LoggerLevelTest : public ::testing::Test { protected: LoggerLevelTest() { - // Logger initialization is done in main() + // Logger initialization is done in main(). As logging tests may + // alter the default logging output, it is reset here. + resetUnitTestRootLogger(); } ~LoggerLevelTest() { LoggerManager::reset(); @@ -57,7 +59,7 @@ TEST_F(LoggerLevelTest, Creation) { EXPECT_EQ(42, level3.dbglevel); } -TEST(LoggerLevel, getSeverity) { +TEST_F(LoggerLevelTest, getSeverity) { EXPECT_EQ(DEBUG, getSeverity("DEBUG")); EXPECT_EQ(DEBUG, getSeverity("debug")); EXPECT_EQ(DEBUG, getSeverity("DeBuG")); diff --git a/src/lib/log/tests/logger_support_unittest.cc b/src/lib/log/tests/logger_support_unittest.cc index 6a93652cfc..b4189061f6 100644 --- a/src/lib/log/tests/logger_support_unittest.cc +++ b/src/lib/log/tests/logger_support_unittest.cc @@ -18,12 +18,23 @@ using namespace isc::log; +class LoggerSupportTest : public ::testing::Test { +protected: + LoggerSupportTest() { + // Logger initialization is done in main(). As logging tests may + // alter the default logging output, it is reset here. + resetUnitTestRootLogger(); + } + ~LoggerSupportTest() { + } +}; + // Check that the initialized flag can be manipulated. This is a bit chicken- // -and-egg: we want to reset to the flag to the original value at the end // of the test, so use the functions to do that. But we are trying to check // that these functions in fact work. -TEST(LoggerSupportTest, InitializedFlag) { +TEST_F(LoggerSupportTest, InitializedFlag) { bool current_flag = isLoggingInitialized(); // check we can flip the flag. @@ -51,7 +62,7 @@ TEST(LoggerSupportTest, InitializedFlag) { // Check that a logger will throw an exception if logging has not been // initialized. -TEST(LoggerSupportTest, LoggingInitializationCheck) { +TEST_F(LoggerSupportTest, LoggingInitializationCheck) { // Assert that logging has been initialized (it should be in main()). bool current_flag = isLoggingInitialized(); diff --git a/src/lib/log/tests/severity_test.sh.in b/src/lib/log/tests/severity_test.sh.in index 124f36af5d..78d5050734 100755 --- a/src/lib/log/tests/severity_test.sh.in +++ b/src/lib/log/tests/severity_test.sh.in @@ -13,9 +13,7 @@ # OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR # PERFORMANCE OF THIS SOFTWARE. -# \brief Severity test -# -# Checks that the logger will limit the output of messages less severy than +# Checks that the logger will limit the output of messages less severe than # the severity/debug setting. 
testname="Severity test" @@ -33,7 +31,7 @@ passfail() { fi } -echo -n "1. runInitTest default parameters:" +echo -n "1. Default parameters:" cat > $tempfile << . FATAL [example] LOG_WRITE_ERROR error writing to test1: 42 ERROR [example] LOG_READING_LOCAL_FILE reading local message file dummy/file diff --git a/src/lib/python/isc/Makefile.am b/src/lib/python/isc/Makefile.am index bfc5a912cc..a3e74c5ff7 100644 --- a/src/lib/python/isc/Makefile.am +++ b/src/lib/python/isc/Makefile.am @@ -1,4 +1,5 @@ -SUBDIRS = datasrc cc config log net notify util testutils +SUBDIRS = datasrc cc config dns log net notify util testutils acl bind10 +SUBDIRS += xfrin log_messages python_PYTHON = __init__.py diff --git a/src/lib/python/isc/__init__.py b/src/lib/python/isc/__init__.py index 8fcbf4256d..029f110b31 100644 --- a/src/lib/python/isc/__init__.py +++ b/src/lib/python/isc/__init__.py @@ -1,4 +1,7 @@ -import isc.datasrc +# On some systems, it appears the dynamic linker gets +# confused if the order is not right here +# There is probably a solution for this, but for now: +# order is important here! import isc.cc import isc.config -#import isc.dns +import isc.datasrc diff --git a/src/lib/python/isc/acl/Makefile.am b/src/lib/python/isc/acl/Makefile.am new file mode 100644 index 0000000000..b1afa155f0 --- /dev/null +++ b/src/lib/python/isc/acl/Makefile.am @@ -0,0 +1,45 @@ +SUBDIRS = . tests + +AM_CPPFLAGS = -I$(top_srcdir)/src/lib -I$(top_builddir)/src/lib +AM_CPPFLAGS += $(BOOST_INCLUDES) +AM_CXXFLAGS = $(B10_CXXFLAGS) + +python_PYTHON = __init__.py dns.py +pythondir = $(PYTHON_SITEPKG_DIR)/isc/acl + +pyexec_LTLIBRARIES = acl.la _dns.la +pyexecdir = $(PYTHON_SITEPKG_DIR)/isc/acl + +acl_la_SOURCES = acl.cc +acl_la_CPPFLAGS = $(AM_CPPFLAGS) $(PYTHON_INCLUDES) +acl_la_LDFLAGS = $(PYTHON_LDFLAGS) +acl_la_CXXFLAGS = $(AM_CXXFLAGS) $(PYTHON_CXXFLAGS) + +_dns_la_SOURCES = dns.h dns.cc dns_requestacl_python.h dns_requestacl_python.cc +_dns_la_SOURCES += dns_requestcontext_python.h dns_requestcontext_python.cc +_dns_la_SOURCES += dns_requestloader_python.h dns_requestloader_python.cc +_dns_la_CPPFLAGS = $(AM_CPPFLAGS) $(PYTHON_INCLUDES) +_dns_la_LDFLAGS = $(PYTHON_LDFLAGS) +# Note: PYTHON_CXXFLAGS may have some -Wno... workaround, which must be +# placed after -Wextra defined in AM_CXXFLAGS +_dns_la_CXXFLAGS = $(AM_CXXFLAGS) $(PYTHON_CXXFLAGS) + +# Python prefers .so, while some OSes (specifically MacOS) use a different +# suffix for dynamic objects. -module is necessary to work this around. +acl_la_LDFLAGS += -module +acl_la_LIBADD = $(top_builddir)/src/lib/acl/libacl.la +acl_la_LIBADD += $(PYTHON_LIB) + +_dns_la_LDFLAGS += -module +_dns_la_LIBADD = $(top_builddir)/src/lib/acl/libdnsacl.la +_dns_la_LIBADD += $(PYTHON_LIB) + +EXTRA_DIST = acl.py _dns.py +EXTRA_DIST += acl_inc.cc +EXTRA_DIST += dnsacl_inc.cc dns_requestacl_inc.cc dns_requestcontext_inc.cc +EXTRA_DIST += dns_requestloader_inc.cc + +CLEANDIRS = __pycache__ + +clean-local: + rm -rf $(CLEANDIRS) diff --git a/src/lib/python/isc/acl/__init__.py b/src/lib/python/isc/acl/__init__.py new file mode 100644 index 0000000000..d9b283892f --- /dev/null +++ b/src/lib/python/isc/acl/__init__.py @@ -0,0 +1,11 @@ +""" +Here are function and classes for manipulating access control lists. +""" + +# The DNS ACL loader would need the json module. Make sure it's imported +# beforehand. +import json + +# Other ACL modules highly depends on the main acl sub module, so it's +# explicitly imported here. 
+import isc.acl.acl diff --git a/src/bin/stats/tests/fake_select.py b/src/lib/python/isc/acl/_dns.py similarity index 50% rename from src/bin/stats/tests/fake_select.py rename to src/lib/python/isc/acl/_dns.py index ca0ca82619..a645a7bc18 100644 --- a/src/bin/stats/tests/fake_select.py +++ b/src/lib/python/isc/acl/_dns.py @@ -13,31 +13,17 @@ # NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION # WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -""" -A mock-up module of select +# This file is not installed; The .so version will be installed into the right +# place at installation time. +# This helper script is only to find it in the .libs directory when we run +# as a test or from the build directory. -*** NOTE *** -It is only for testing stats_httpd module and not reusable for -external module. -""" +import os +import sys -import fake_socket -import errno +for base in sys.path[:]: + bindingdir = os.path.join(base, 'isc/acl/.libs') + if os.path.exists(bindingdir): + sys.path.insert(0, bindingdir) -class error(Exception): - pass - -def select(rlst, wlst, xlst, timeout): - if type(timeout) != int and type(timeout) != float: - raise TypeError("Error: %s must be integer or float" - % timeout.__class__.__name__) - for s in rlst + wlst + xlst: - if type(s) != fake_socket.socket: - raise TypeError("Error: %s must be a dummy socket" - % s.__class__.__name__) - s._called = s._called + 1 - if s._called > 3: - raise error("Something is happened!") - elif s._called > 2: - raise error(errno.EINTR) - return (rlst, wlst, xlst) +from _dns import * diff --git a/src/lib/python/isc/acl/acl.cc b/src/lib/python/isc/acl/acl.cc new file mode 100644 index 0000000000..6517a1256a --- /dev/null +++ b/src/lib/python/isc/acl/acl.cc @@ -0,0 +1,80 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include + +#include + +using namespace isc::util::python; + +#include "acl_inc.cc" + +namespace { +// Commonly used Python exception objects. Right now the acl module consists +// of only one .cc file, so we hide them in an unnamed namespace. If and when +// we extend this module with multiple .cc files, we should move them to +// a named namespace, say isc::acl::python, and declare them in a separate +// header file. 
+PyObject* po_ACLError; +PyObject* po_LoaderError; +} + +namespace { +PyModuleDef acl = { + { PyObject_HEAD_INIT(NULL) NULL, 0, NULL}, + "isc.acl.acl", + acl_doc, + -1, + NULL, + NULL, + NULL, + NULL, + NULL +}; +} // end of unnamed namespace + +PyMODINIT_FUNC +PyInit_acl(void) { + PyObject* mod = PyModule_Create(&acl); + if (mod == NULL) { + return (NULL); + } + + try { + po_ACLError = PyErr_NewException("isc.acl.Error", NULL, NULL); + PyObjectContainer(po_ACLError).installToModule(mod, "Error"); + + po_LoaderError = PyErr_NewException("isc.acl.LoaderError", NULL, NULL); + PyObjectContainer(po_LoaderError).installToModule(mod, "LoaderError"); + + // Install module constants. Note that we can let Py_BuildValue + // "steal" the references to these object (by specifying false to + // installToModule), because, unlike the exception cases above, + // we don't have corresponding C++ variables (see the note in + // pycppwrapper_util for more details). + PyObjectContainer(Py_BuildValue("I", isc::acl::ACCEPT)). + installToModule(mod, "ACCEPT", false); + PyObjectContainer(Py_BuildValue("I", isc::acl::REJECT)). + installToModule(mod, "REJECT", false); + PyObjectContainer(Py_BuildValue("I", isc::acl::DROP)). + installToModule(mod, "DROP", false); + } catch (...) { + Py_DECREF(mod); + return (NULL); + } + + return (mod); +} diff --git a/src/lib/python/isc/acl/acl.py b/src/lib/python/isc/acl/acl.py new file mode 100644 index 0000000000..804d78bba9 --- /dev/null +++ b/src/lib/python/isc/acl/acl.py @@ -0,0 +1,29 @@ +# Copyright (C) 2011 Internet Systems Consortium. +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +# This file is not installed; The .so version will be installed into the right +# place at installation time. +# This helper script is only to find it in the .libs directory when we run +# as a test or from the build directory. + +import os +import sys + +for base in sys.path[:]: + bindingdir = os.path.join(base, 'isc/acl/.libs') + if os.path.exists(bindingdir): + sys.path.insert(0, bindingdir) + +from acl import * diff --git a/src/lib/python/isc/acl/acl_inc.cc b/src/lib/python/isc/acl/acl_inc.cc new file mode 100644 index 0000000000..a9f7c9da1b --- /dev/null +++ b/src/lib/python/isc/acl/acl_inc.cc @@ -0,0 +1,16 @@ +namespace { +const char* const acl_doc = "\ +Implementation module for ACL operations\n\n\ +This module provides Python bindings for the C++ classes in the\n\ +isc::acl namespace.\n\ +\n\ +Integer constants:\n\ +\n\ +ACCEPT, REJECT, DROP -- Default actions an ACL could perform.\n\ + These are the commonly used actions in specific ACLs.\n\ + It is possible to specify any other values, as the ACL class does\n\ + nothing about them, but these look reasonable, so they are provided\n\ + for convenience. 
It is not specified what exactly these mean and it's\n\ + up to whoever uses them.\n\ +"; +} // unnamed namespace diff --git a/src/lib/python/isc/acl/dns.cc b/src/lib/python/isc/acl/dns.cc new file mode 100644 index 0000000000..eb3b57b780 --- /dev/null +++ b/src/lib/python/isc/acl/dns.cc @@ -0,0 +1,135 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include +#include + +#include + +#include + +#include +#include + +#include "dns.h" +#include "dns_requestcontext_python.h" +#include "dns_requestacl_python.h" +#include "dns_requestloader_python.h" + +using namespace std; +using boost::shared_ptr; +using namespace isc::util::python; +using namespace isc::data; +using namespace isc::acl::dns; +using namespace isc::acl::dns::python; + +#include "dnsacl_inc.cc" + +namespace { +// This is a Python binding object corresponding to the singleton loader used +// in the C++ version of the library. +// We can define it as a pure object rather than through an accessor function, +// because in Python we can ensure it has been created and initialized +// in the module initializer by the time it's actually used. +s_RequestLoader* po_REQUEST_LOADER; + +PyMethodDef methods[] = { + { NULL, NULL, 0, NULL } +}; + +PyModuleDef dnsacl = { + { PyObject_HEAD_INIT(NULL) NULL, 0, NULL}, + "isc.acl._dns", + dnsacl_doc, + -1, + methods, + NULL, + NULL, + NULL, + NULL +}; +} // end of unnamed namespace + +namespace isc { +namespace acl { +namespace dns { +namespace python { +PyObject* +getACLException(const char* ex_name) { + PyObject* ex_obj = NULL; + + PyObject* acl_module = PyImport_AddModule("isc.acl.acl"); + if (acl_module != NULL) { + PyObject* acl_dict = PyModule_GetDict(acl_module); + if (acl_dict != NULL) { + ex_obj = PyDict_GetItemString(acl_dict, ex_name); + } + } + + if (ex_obj == NULL) { + ex_obj = PyExc_RuntimeError; + } + return (ex_obj); +} +} +} +} +} + +PyMODINIT_FUNC +PyInit__dns(void) { + PyObject* mod = PyModule_Create(&dnsacl); + if (mod == NULL) { + return (NULL); + } + + if (!initModulePart_RequestContext(mod)) { + Py_DECREF(mod); + return (NULL); + } + if (!initModulePart_RequestACL(mod)) { + Py_DECREF(mod); + return (NULL); + } + if (!initModulePart_RequestLoader(mod)) { + Py_DECREF(mod); + return (NULL); + } + + // Module constants + try { + if (po_REQUEST_LOADER == NULL) { + po_REQUEST_LOADER = static_cast( + requestloader_type.tp_alloc(&requestloader_type, 0)); + } + if (po_REQUEST_LOADER != NULL) { + // We gain and keep our own reference to the singleton object + // for the same reason as that for exception objects (see comments + // in pycppwrapper_util for more details). Note also that we don't + // bother to release the reference even if exception is thrown + // below (in fact, we cannot delete the singleton loader). 
+ po_REQUEST_LOADER->cppobj = &getRequestLoader(); + Py_INCREF(po_REQUEST_LOADER); + } + PyObjectContainer(po_REQUEST_LOADER).installToModule(mod, + "REQUEST_LOADER"); + } catch (...) { + Py_DECREF(mod); + return (NULL); + } + + return (mod); +} diff --git a/src/lib/python/isc/acl/dns.h b/src/lib/python/isc/acl/dns.h new file mode 100644 index 0000000000..76849c5a3a --- /dev/null +++ b/src/lib/python/isc/acl/dns.h @@ -0,0 +1,52 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_ACL_DNS_H +#define __PYTHON_ACL_DNS_H 1 + +#include + +namespace isc { +namespace acl { +namespace dns { +namespace python { + +// Return a Python exception object of the given name (ex_name) defined in +// the isc.acl.acl loadable module. +// +// Since the acl module is a different binary image and is loaded separately +// from the dns module, it would be very tricky to directly access to +// C/C++ symbols defined in that module. So we get access to these object +// using the Python interpretor through this wrapper function. +// +// The __init__.py file should ensure isc.acl.acl has been loaded by the time +// whenever this function is called, and there shouldn't be any operation +// within this function that can fail (such as dynamic memory allocation), +// so this function should always succeed. Yet there may be an overlooked +// failure mode, perhaps due to a bug in the binding implementation, or +// due to invalid usage. As a last resort for such cases, this function +// returns PyExc_RuntimeError (a C binding of Python's RuntimeError) should +// it encounters an unexpected failure. +extern PyObject* getACLException(const char* ex_name); + +} // namespace python +} // namespace dns +} // namespace acl +} // namespace isc + +#endif // __PYTHON_ACL_DNS_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/python/isc/acl/dns.py b/src/lib/python/isc/acl/dns.py new file mode 100644 index 0000000000..0733bc3ce5 --- /dev/null +++ b/src/lib/python/isc/acl/dns.py @@ -0,0 +1,73 @@ +# Copyright (C) 2011 Internet Systems Consortium. +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""\
+This module provides Python bindings for the C++ classes in the
+isc::acl::dns namespace. Specifically, it defines Python interfaces for
+handling access control lists (ACLs) with DNS related contexts.
+The actual binding is implemented in an effectively hidden module,
+isc.acl._dns; this frontend module exists mainly for implementation
+convenience, so that the C++ binding code doesn't have to deal with
+complicated operations that can be done more straightforwardly in
+native Python.
+
+For further details of the actual module, see the documentation of the
+_dns module.
+"""
+
+import pydnspp
+
+import isc.acl._dns
+from isc.acl._dns import *
+
+class RequestACL(isc.acl._dns.RequestACL):
+    """A straightforward wrapper subclass of isc.acl._dns.RequestACL.
+
+    See the base class documentation for more details.
+    """
+    pass
+
+class RequestLoader(isc.acl._dns.RequestLoader):
+    """A straightforward wrapper subclass of isc.acl._dns.RequestLoader.
+
+    See the base class documentation for more details.
+    """
+    pass
+
+class RequestContext(isc.acl._dns.RequestContext):
+    """A straightforward wrapper subclass of isc.acl._dns.RequestContext.
+
+    See the base class documentation for more details.
+    """
+
+    def __init__(self, remote_address, tsig=None):
+        """Wrapper for the RequestContext constructor.
+
+        An internal implementation detail that users don't have to
+        worry about: to avoid dealing with pydnspp bindings in the C++ code,
+        this wrapper converts the TSIG record into its wire format as byte
+        data, and has the binding re-construct the record from it.
+        """
+        tsig_wire = b''
+        if tsig is not None:
+            if not isinstance(tsig, pydnspp.TSIGRecord):
+                raise TypeError("tsig must be a TSIGRecord, not %s" %
+                                tsig.__class__.__name__)
+            tsig_wire = tsig.to_wire(tsig_wire)
+        isc.acl._dns.RequestContext.__init__(self, remote_address, tsig_wire)
+
+    def __str__(self):
+        """Wrap __str__() to convert the module name."""
+        s = isc.acl._dns.RequestContext.__str__(self)
+        return s.replace('<isc.acl._dns', '<isc.acl.dns')
diff --git a/src/lib/python/isc/acl/dns_requestacl_python.cc b/src/lib/python/isc/acl/dns_requestacl_python.cc
new file mode 100644
--- /dev/null
+++ b/src/lib/python/isc/acl/dns_requestacl_python.cc
+// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC")
+//
+// Permission to use, copy, modify, and/or distribute this software for any
+// purpose with or without fee is hereby granted, provided that the above
+// copyright notice and this permission notice appear in all copies.
+//
+// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH
+// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
+// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT,
+// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
+// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
+// PERFORMANCE OF THIS SOFTWARE.
+
+#include <Python.h>
+
+#include
+#include
+
+#include
+
+#include
+
+#include
+#include
+
+#include "dns.h"
+#include "dns_requestacl_python.h"
+#include "dns_requestcontext_python.h"
+
+using namespace std;
+using namespace isc::util::python;
+using namespace isc::acl;
+using namespace isc::acl::dns;
+using namespace isc::acl::dns::python;
+
+//
+// Definition of the classes
+//
+
+// For each class, we need a struct, helper functions (init, destroy,
+// and static wrappers around the methods we export), a list of methods,
+// and a type description
+
+//
+// RequestACL
+//
+
+// Trivial constructor.
+s_RequestACL::s_RequestACL() {} + +// Import pydoc text +#include "dns_requestacl_inc.cc" + +namespace { +int +RequestACL_init(PyObject*, PyObject*, PyObject*) { + PyErr_SetString(getACLException("Error"), + "RequestACL cannot be directly constructed"); + return (-1); +} + +void +RequestACL_destroy(PyObject* po_self) { + s_RequestACL* const self = static_cast(po_self); + self->cppobj.reset(); + Py_TYPE(self)->tp_free(self); +} + +PyObject* +RequestACL_execute(PyObject* po_self, PyObject* args) { + s_RequestACL* const self = static_cast(po_self); + + try { + const s_RequestContext* po_context; + if (PyArg_ParseTuple(args, "O!", &requestcontext_type, &po_context)) { + const BasicAction action = + self->cppobj->execute(*po_context->cppobj); + return (Py_BuildValue("I", action)); + } + } catch (const exception& ex) { + const string ex_what = "Failed to execute ACL: " + string(ex.what()); + PyErr_SetString(getACLException("Error"), ex_what.c_str()); + } catch (...) { + PyErr_SetString(PyExc_RuntimeError, + "Unexpected exception in executing ACL"); + } + + return (NULL); +} + +// This list contains the actual set of functions we have in +// python. Each entry has +// 1. Python method name +// 2. Our static function here +// 3. Argument type +// 4. Documentation +PyMethodDef RequestACL_methods[] = { + { "execute", RequestACL_execute, METH_VARARGS, RequestACL_execute_doc }, + { NULL, NULL, 0, NULL } +}; +} // end of unnamed namespace + +namespace isc { +namespace acl { +namespace dns { +namespace python { +// This defines the complete type for reflection in python and +// parsing of PyObject* to s_RequestACL +// Most of the functions are not actually implemented and NULL here. +PyTypeObject requestacl_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "isc.acl._dns.RequestACL", + sizeof(s_RequestACL), // tp_basicsize + 0, // tp_itemsize + RequestACL_destroy, // tp_dealloc + NULL, // tp_print + NULL, // tp_getattr + NULL, // tp_setattr + NULL, // tp_reserved + NULL, // tp_repr + NULL, // tp_as_number + NULL, // tp_as_sequence + NULL, // tp_as_mapping + NULL, // tp_hash + NULL, // tp_call + NULL, // tp_str + NULL, // tp_getattro + NULL, // tp_setattro + NULL, // tp_as_buffer + Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, // tp_flags + RequestACL_doc, + NULL, // tp_traverse + NULL, // tp_clear + NULL, // tp_richcompare + 0, // tp_weaklistoffset + NULL, // tp_iter + NULL, // tp_iternext + RequestACL_methods, // tp_methods + NULL, // tp_members + NULL, // tp_getset + NULL, // tp_base + NULL, // tp_dict + NULL, // tp_descr_get + NULL, // tp_descr_set + 0, // tp_dictoffset + RequestACL_init, // tp_init + NULL, // tp_alloc + PyType_GenericNew, // tp_new + NULL, // tp_free + NULL, // tp_is_gc + NULL, // tp_bases + NULL, // tp_mro + NULL, // tp_cache + NULL, // tp_subclasses + NULL, // tp_weaklist + NULL, // tp_del + 0 // tp_version_tag +}; + +bool +initModulePart_RequestACL(PyObject* mod) { + // We initialize the static description object with PyType_Ready(), + // then add it to the module. This is not just a check! 
(leaving + // this out results in segmentation faults) + if (PyType_Ready(&requestacl_type) < 0) { + return (false); + } + void* p = &requestacl_type; + if (PyModule_AddObject(mod, "RequestACL", static_cast(p)) < 0) { + return (false); + } + Py_INCREF(&requestacl_type); + + return (true); +} +} // namespace python +} // namespace dns +} // namespace acl +} // namespace isc diff --git a/src/lib/python/isc/acl/dns_requestacl_python.h b/src/lib/python/isc/acl/dns_requestacl_python.h new file mode 100644 index 0000000000..8f7ad8a097 --- /dev/null +++ b/src/lib/python/isc/acl/dns_requestacl_python.h @@ -0,0 +1,53 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_REQUESTACL_H +#define __PYTHON_REQUESTACL_H 1 + +#include + +#include + +#include + +namespace isc { +namespace acl { +namespace dns { +namespace python { + +// The s_* Class simply covers one instantiation of the object +class s_RequestACL : public PyObject { +public: + s_RequestACL(); + + // We don't have to use a shared pointer for its original purposes as + // the python object maintains reference counters itself. But the + // underlying C++ API only exposes a shared pointer for the ACL objects, + // so we store it in that form. + boost::shared_ptr cppobj; +}; + +extern PyTypeObject requestacl_type; + +bool initModulePart_RequestACL(PyObject* mod); + +} // namespace python +} // namespace dns +} // namespace acl +} // namespace isc +#endif // __PYTHON_REQUESTACL_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/python/isc/acl/dns_requestcontext_inc.cc b/src/lib/python/isc/acl/dns_requestcontext_inc.cc new file mode 100644 index 0000000000..f71bc599ee --- /dev/null +++ b/src/lib/python/isc/acl/dns_requestcontext_inc.cc @@ -0,0 +1,33 @@ +namespace { +const char* const RequestContext_doc = "\ +DNS request to be checked.\n\ +\n\ +This plays the role of ACL context for the RequestACL object.\n\ +\n\ +Based on the minimalist philosophy, the initial implementation only\n\ +maintains the remote (source) IP address of the request and\n\ +(optionally) the TSIG record included in the request. We may add more\n\ +parameters of the request as we see the need for them. 
Possible\n\ +additional parameters are the local (destination) IP address, the\n\ +remote and local port numbers, various fields of the DNS request (e.g.\n\ +a particular header flag value).\n\ +\n\ +RequestContext(remote_address, tsig)\n\ +\n\ + In this initial implementation, the constructor only takes a\n\ + remote IP address in the form of a socket address as used in the\n\ + Python socket module, and optionally a pydnspp.TSIGRecord object.\n\ +\n\ + Exceptions:\n\ + isc.acl.ACLError Normally shouldn't happen, but still possible\n\ + for unexpected errors such as memory allocation\n\ + failure or an invalid address text being passed.\n\ +\n\ + Parameters:\n\ + remote_address The remote IP address\n\ + tsig The TSIG record included in the request message, if any.\n\ + If the request doesn't include a TSIG, this will be None.\n\ + If this parameter is omitted None will be assumed.\n\ +\n\ +"; +} // unnamed namespace diff --git a/src/lib/python/isc/acl/dns_requestcontext_python.cc b/src/lib/python/isc/acl/dns_requestcontext_python.cc new file mode 100644 index 0000000000..7f33f59592 --- /dev/null +++ b/src/lib/python/isc/acl/dns_requestcontext_python.cc @@ -0,0 +1,382 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// Enable this if you use s# variants with PyArg_ParseTuple(), see +// http://docs.python.org/py3k/c-api/arg.html#strings-and-buffers +#define PY_SSIZE_T_CLEAN + +// Python.h needs to be placed at the head of the program file, see: +// http://docs.python.org/py3k/extending/extending.html#a-simple-example +#include + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#include +#include + +#include + +#include +#include + +#include +#include +#include +#include +#include +#include + +#include +#include + +#include "dns.h" +#include "dns_requestcontext_python.h" + +using namespace std; +using boost::scoped_ptr; +using boost::lexical_cast; +using namespace isc; +using namespace isc::dns; +using namespace isc::dns::rdata; +using namespace isc::util::python; +using namespace isc::acl::dns; +using namespace isc::acl::dns::python; + +namespace isc { +namespace acl { +namespace dns { +namespace python { + +struct s_RequestContext::Data { + // The constructor. + Data(const char* const remote_addr, const unsigned short remote_port, + const char* tsig_data, const Py_ssize_t tsig_len) + { + createRemoteAddr(remote_addr, remote_port); + createTSIGRecord(tsig_data, tsig_len); + } + + // A convenient type converter from sockaddr_storage to sockaddr + const struct sockaddr& getRemoteSockaddr() const { + const void* p = &remote_ss; + return (*static_cast(p)); + } + + // The remote (source) IP address of the request. Note that it needs + // a reference to remote_ss. 
That's why the latter is stored within + // this structure. + scoped_ptr remote_ipaddr; + + // The effective length of remote_ss. It's necessary for getnameinfo() + // called from sockaddrToText (__str__ backend). + socklen_t remote_salen; + + // The TSIG record included in the request, if any. If the request + // doesn't contain a TSIG, this will be NULL. + scoped_ptr tsig_record; + +private: + // A helper method for the constructor that is responsible for constructing + // the remote address. + void createRemoteAddr(const char* const remote_addr, + const unsigned short remote_port) + { + struct addrinfo hints, *res; + memset(&hints, 0, sizeof(hints)); + hints.ai_family = AF_UNSPEC; + hints.ai_socktype = SOCK_DGRAM; + hints.ai_protocol = IPPROTO_UDP; + hints.ai_flags = AI_NUMERICHOST | AI_NUMERICSERV; + const int error(getaddrinfo(remote_addr, + lexical_cast(remote_port).c_str(), + &hints, &res)); + if (error != 0) { + isc_throw(InvalidParameter, "Failed to convert [" << remote_addr + << "]:" << remote_port << ", " << gai_strerror(error)); + } + assert(sizeof(remote_ss) > res->ai_addrlen); + memcpy(&remote_ss, res->ai_addr, res->ai_addrlen); + remote_salen = res->ai_addrlen; + freeaddrinfo(res); + + remote_ipaddr.reset(new IPAddress(getRemoteSockaddr())); + } + + // A helper method for the constructor that is responsible for constructing + // the request TSIG. + void createTSIGRecord(const char* tsig_data, const Py_ssize_t tsig_len) { + if (tsig_len == 0) { + return; + } + + // Re-construct the TSIG record from the passed binary. This should + // normally succeed because we are generally expected to be called + // from the frontend .py, which converts a valid TSIGRecord in its + // wire format. If some evil or buggy python program directly calls + // us with bogus data, validation in libdns++ will trigger an + // exception, which will be caught and converted to a Python exception + // in RequestContext_init(). + isc::util::InputBuffer b(tsig_data, tsig_len); + const Name key_name(b); + const RRType tsig_type(b.readUint16()); + const RRClass tsig_class(b.readUint16()); + const RRTTL ttl(b.readUint32()); + const size_t rdlen(b.readUint16()); + const ConstRdataPtr rdata = createRdata(tsig_type, tsig_class, b, + rdlen); + tsig_record.reset(new TSIGRecord(key_name, tsig_class, ttl, + *rdata, 0)); + } + +private: + struct sockaddr_storage remote_ss; +}; + +} // namespace python +} // namespace dns +} // namespace acl +} // namespace isc + + +// +// Definition of the classes +// + +// For each class, we need a struct, a helper functions (init, destroy, +// and static wrappers around the methods we export), a list of methods, +// and a type description + +// +// RequestContext +// + +// Trivial constructor. +s_RequestContext::s_RequestContext() : cppobj(NULL), data_(NULL) { +} + +// Import pydoc text +#include "dns_requestcontext_inc.cc" + +namespace { +// This list contains the actual set of functions we have in +// python. Each entry has +// 1. Python method name +// 2. Our static function here +// 3. Argument type +// 4. Documentation +PyMethodDef RequestContext_methods[] = { + { NULL, NULL, 0, NULL } +}; + +int +RequestContext_init(PyObject* po_self, PyObject* args, PyObject*) { + s_RequestContext* const self = static_cast(po_self); + + try { + // In this initial implementation, the constructor is simple: It + // takes two parameters. The first parameter should be a Python + // socket address object. 
+ // For IPv4, it's ('address test', numeric_port); for IPv6, + // it's ('address text', num_port, num_flowid, num_zoneid). + // The second parameter is wire-format TSIG record in the form of + // Python byte data. If the TSIG isn't included in the request, + // its length will be 0. + // Below, we parse the argument in the most straightforward way. + // As the constructor becomes more complicated, we should probably + // make it more structural (for example, we should first retrieve + // the python objects, and parse them recursively) + + const char* remote_addr; + unsigned short remote_port; + unsigned int remote_flowinfo; // IPv6 only, unused here + unsigned int remote_zoneid; // IPv6 only, unused here + const char* tsig_data; + Py_ssize_t tsig_len; + + if (PyArg_ParseTuple(args, "(sH)y#", &remote_addr, &remote_port, + &tsig_data, &tsig_len) || + PyArg_ParseTuple(args, "(sHII)y#", &remote_addr, &remote_port, + &remote_flowinfo, &remote_zoneid, + &tsig_data, &tsig_len)) + { + // We need to clear the error in case the first call to ParseTuple + // fails. + PyErr_Clear(); + + auto_ptr dataptr( + new s_RequestContext::Data(remote_addr, remote_port, + tsig_data, tsig_len)); + self->cppobj = new RequestContext(*dataptr->remote_ipaddr, + dataptr->tsig_record.get()); + self->data_ = dataptr.release(); + return (0); + } + } catch (const exception& ex) { + const string ex_what = "Failed to construct RequestContext object: " + + string(ex.what()); + PyErr_SetString(getACLException("Error"), ex_what.c_str()); + return (-1); + } catch (...) { + PyErr_SetString(PyExc_RuntimeError, + "Unexpected exception in constructing RequestContext"); + return (-1); + } + + PyErr_SetString(PyExc_TypeError, + "Invalid arguments to RequestContext constructor"); + + return (-1); +} + +void +RequestContext_destroy(PyObject* po_self) { + s_RequestContext* const self = static_cast(po_self); + + delete self->cppobj; + delete self->data_; + Py_TYPE(self)->tp_free(self); +} + +// A helper function for __str__() +string +sockaddrToText(const struct sockaddr& sa, socklen_t sa_len) { + char hbuf[NI_MAXHOST], sbuf[NI_MAXSERV]; + if (getnameinfo(&sa, sa_len, hbuf, sizeof(hbuf), sbuf, sizeof(sbuf), + NI_NUMERICHOST | NI_NUMERICSERV)) { + // In this context this should never fail. + isc_throw(Unexpected, "Unexpected failure in getnameinfo"); + } + + return ("[" + string(hbuf) + "]:" + string(sbuf)); +} + +// for the __str__() method. This method is provided mainly for internal +// testing. +PyObject* +RequestContext_str(PyObject* po_self) { + const s_RequestContext* const self = + static_cast(po_self); + + try { + stringstream objss; + objss << "<" << requestcontext_type.tp_name << " object, " + << "remote_addr=" + << sockaddrToText(self->data_->getRemoteSockaddr(), + self->data_->remote_salen); + if (self->data_->tsig_record) { + objss << ", key=" << self->data_->tsig_record->getName(); + } + objss << ">"; + return (Py_BuildValue("s", objss.str().c_str())); + } catch (const exception& ex) { + const string ex_what = + "Failed to convert RequestContext object to text: " + + string(ex.what()); + PyErr_SetString(PyExc_RuntimeError, ex_what.c_str()); + } catch (...) 
{ + PyErr_SetString(PyExc_SystemError, "Unexpected failure in " + "converting RequestContext object to text"); + } + return (NULL); +} +} // end of unnamed namespace + +namespace isc { +namespace acl { +namespace dns { +namespace python { +// This defines the complete type for reflection in python and +// parsing of PyObject* to s_RequestContext +// Most of the functions are not actually implemented and NULL here. +PyTypeObject requestcontext_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "isc.acl._dns.RequestContext", + sizeof(s_RequestContext), // tp_basicsize + 0, // tp_itemsize + RequestContext_destroy, // tp_dealloc + NULL, // tp_print + NULL, // tp_getattr + NULL, // tp_setattr + NULL, // tp_reserved + NULL, // tp_repr + NULL, // tp_as_number + NULL, // tp_as_sequence + NULL, // tp_as_mapping + NULL, // tp_hash + NULL, // tp_call + RequestContext_str, // tp_str + NULL, // tp_getattro + NULL, // tp_setattro + NULL, // tp_as_buffer + Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, // tp_flags + RequestContext_doc, + NULL, // tp_traverse + NULL, // tp_clear + NULL, // tp_richcompare + 0, // tp_weaklistoffset + NULL, // tp_iter + NULL, // tp_iternext + RequestContext_methods, // tp_methods + NULL, // tp_members + NULL, // tp_getset + NULL, // tp_base + NULL, // tp_dict + NULL, // tp_descr_get + NULL, // tp_descr_set + 0, // tp_dictoffset + RequestContext_init, // tp_init + NULL, // tp_alloc + PyType_GenericNew, // tp_new + NULL, // tp_free + NULL, // tp_is_gc + NULL, // tp_bases + NULL, // tp_mro + NULL, // tp_cache + NULL, // tp_subclasses + NULL, // tp_weaklist + NULL, // tp_del + 0 // tp_version_tag +}; + +bool +initModulePart_RequestContext(PyObject* mod) { + // We initialize the static description object with PyType_Ready(), + // then add it to the module. This is not just a check! (leaving + // this out results in segmentation faults) + if (PyType_Ready(&requestcontext_type) < 0) { + return (false); + } + void* p = &requestcontext_type; + if (PyModule_AddObject(mod, "RequestContext", + static_cast(p)) < 0) { + return (false); + } + Py_INCREF(&requestcontext_type); + + return (true); +} +} // namespace python +} // namespace dns +} // namespace acl +} // namespace isc diff --git a/src/lib/python/isc/acl/dns_requestcontext_python.h b/src/lib/python/isc/acl/dns_requestcontext_python.h new file mode 100644 index 0000000000..766133b38d --- /dev/null +++ b/src/lib/python/isc/acl/dns_requestcontext_python.h @@ -0,0 +1,54 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
+
+#ifndef __PYTHON_REQUESTCONTEXT_H
+#define __PYTHON_REQUESTCONTEXT_H 1
+
+#include
+
+#include
+
+namespace isc {
+namespace acl {
+namespace dns {
+namespace python {
+
+// The s_* Class simply covers one instantiation of the object
+class s_RequestContext : public PyObject {
+public:
+    s_RequestContext();
+    RequestContext* cppobj;
+
+    // This object needs to maintain some source data to construct the
+    // underlying RequestContext object throughout its lifetime.
+    // These are "public" so that they can be accessed in the python wrapper
+    // implementation, but essentially they should be private, and the
+    // implementation details are hidden.
+    struct Data;
+    Data* data_;
+};
+
+extern PyTypeObject requestcontext_type;
+
+bool initModulePart_RequestContext(PyObject* mod);
+
+} // namespace python
+} // namespace dns
+} // namespace acl
+} // namespace isc
+#endif // __PYTHON_REQUESTCONTEXT_H
+
+// Local Variables:
+// mode: c++
+// End:
diff --git a/src/lib/python/isc/acl/dns_requestloader_inc.cc b/src/lib/python/isc/acl/dns_requestloader_inc.cc
new file mode 100644
index 0000000000..a911275382
--- /dev/null
+++ b/src/lib/python/isc/acl/dns_requestloader_inc.cc
@@ -0,0 +1,87 @@
+namespace {
+
+// Note: this is derived from the generic Loader class of the C++
+// implementation, but is slightly different from the original.
+// Be careful when you make further merges from the C++ document.
+const char* const RequestLoader_doc = "\
+Loader of DNS Request ACLs.\n\
+\n\
+The goal of this class is to convert a JSON description of an ACL to\n\
+an object of the ACL class (including the checks inside it).\n\
+\n\
+To allow any kind of checks to exist in the application, creators are\n\
+registered for the names of the checks (this feature is not yet\n\
+available for the python API).\n\
+\n\
+An ACL definition looks like this: [\n\
+ {\n\
+ \"action\": \"ACCEPT\",\n\
+ \"match-type\": <parameter>\n\
+ },\n\
+ {\n\
+ \"action\": \"REJECT\",\n\
+ \"match-type\": <parameter>,\n\
+ \"another-match-type\": [<parameter1>, <parameter2>]\n\
+ },\n\
+ {\n\
+ \"action\": \"DROP\"\n\
+ }\n\
+ ]\n\
+ \n\
+\n\
+This is a list of elements. Each element must have an \"action\"\n\
+entry/keyword. That one specifies which action is returned if this\n\
+element matches (the value of the key is passed to the action loader;\n\
+see the constructor), which is one of ACCEPT,\n\
+REJECT, or DROP, as defined in the isc.acl.acl module.\n\
+\n\
+The remaining entries of the element are matches. The left side is the\n\
+name of the match type (for example \"from\" to match for source IP address).\n\
+The <parameter> is whatever is needed to describe the\n\
+match and depends on the match type; the loader passes it verbatim to\n\
+the creator of that match type.\n\
+\n\
+There may be multiple match types in a single element. In such a case, all\n\
+of the matches must match for the element to take action (so, in the\n\
+second element, both \"match-type\" and \"another-match-type\" must be\n\
+satisfied). If there's no match in the element, the action is\n\
+taken/returned without conditions, every time (makes sense as the last\n\
+entry, as the ACL will never get past it).\n\
+\n\
+The second entry shows another thing - if there's a list as the value\n\
+for some match and the match itself is not expecting a list, it is\n\
+taken as an \"or\" - a match for at least one of the choices in the\n\
+list must match. 
So, for the second entry, both \"match-type\" and\n\ +\"another-match-type\" must be satisfied, but the another one is\n\ +satisfied by either parameter1 or parameter2.\n\ +\n\ +Currently, a RequestLoader object cannot be constructed directly;\n\ +an application must use the singleton loader defined in the\n\ +isc.acl.dns module, i.e., isc.acl.dns.REQUEST_LOADER.\n\ +A future version of this implementation may be extended to give\n\ +applications full flexibility of creating arbitrary loader, when\n\ +this restriction may be removed.\n\ +"; + +const char* const RequestLoader_load_doc = "\ +load(description) -> RequestACL\n\ +\n\ +Load a DNS (Request) ACL.\n\ +\n\ +This parses an ACL list, creates internal data for each rule\n\ +and returns a RequestACl object that contains all given rules.\n\ +\n\ +Exceptions:\n\ + LoaderError Load failed. The most likely cause of this is a syntax\n\ + error in the description. Other internal errors such as\n\ + memory allocation failure is also converted to this\n\ + exception.\n\ +\n\ +Parameters:\n\ + description String or Python representation of the JSON list of\n\ + ACL. The Python representation is ones accepted by the\n\ + standard json module.\n\ +\n\ +Return Value(s): The newly created RequestACL object\n\ +"; +} // unnamed namespace diff --git a/src/lib/python/isc/acl/dns_requestloader_python.cc b/src/lib/python/isc/acl/dns_requestloader_python.cc new file mode 100644 index 0000000000..ab421c5839 --- /dev/null +++ b/src/lib/python/isc/acl/dns_requestloader_python.cc @@ -0,0 +1,270 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// Enable this if you use s# variants with PyArg_ParseTuple(), see +// http://docs.python.org/py3k/c-api/arg.html#strings-and-buffers +//#define PY_SSIZE_T_CLEAN + +// Python.h needs to be placed at the head of the program file, see: +// http://docs.python.org/py3k/extending/extending.html#a-simple-example +#include + +#include +#include + +#include + +#include + +#include + +#include + +#include "dns.h" +#include "dns_requestacl_python.h" +#include "dns_requestloader_python.h" + +using namespace std; +using boost::shared_ptr; +using namespace isc::util::python; +using namespace isc::data; +using namespace isc::acl::dns; +using namespace isc::acl::dns::python; + +// +// Definition of the classes +// + +// For each class, we need a struct, a helper functions (init, destroy, +// and static wrappers around the methods we export), a list of methods, +// and a type description + +// +// RequestLoader +// + +// Trivial constructor. 
+s_RequestLoader::s_RequestLoader() : cppobj(NULL) { +} + +// Import pydoc text +#include "dns_requestloader_inc.cc" + +namespace { +// +// We declare the functions here, the definitions are below +// the type definition of the object, since both can use the other +// + +int +RequestLoader_init(PyObject*, PyObject*, PyObject*) { + PyErr_SetString(getACLException("Error"), + "RequestLoader cannot be directly constructed"); + return (-1); +} + +void +RequestLoader_destroy(PyObject* po_self) { + s_RequestLoader* const self = static_cast(po_self); + delete self->cppobj; + self->cppobj = NULL; + Py_TYPE(self)->tp_free(self); +} + +// This C structure corresponds to a Python callable object for json.dumps(). +// This is initialized at the class initialization time (in +// initModulePart_RequestLoader() below) and it's ensured to be non NULL and +// valid in the rest of the class implementation. +// Getting access to the json module this way and call one of its functions +// via PyObject_CallObject() may exceed the reasonably acceptable level for +// straightforward bindings. But the alternative would be to write a Python +// frontend for the entire module only for this conversion, which would also +// be too much. So, right now, we implement everything within the binding +// implementation. If future extensions require more such non trivial +// wrappers, we should consider the frontend approach more seriously. +PyObject* json_dumps_obj = NULL; + +PyObject* +RequestLoader_load(PyObject* po_self, PyObject* args) { + s_RequestLoader* const self = static_cast(po_self); + + try { + PyObjectContainer c1, c2; // placeholder for temporary py objects + const char* acl_config; + + // First, try string + int py_result = PyArg_ParseTuple(args, "s", &acl_config); + if (!py_result) { + PyErr_Clear(); // need to clear the error from ParseTuple + + // If that fails, confirm the argument is a single Python object, + // and pass the argument to json.dumps() without conversion. + // Note that we should pass 'args', not 'json_obj' to + // PyObject_CallObject(), since this function expects a form of + // tuple as its argument parameter, just like ParseTuple. + PyObject* json_obj; + if (PyArg_ParseTuple(args, "O", &json_obj)) { + c1.reset(PyObject_CallObject(json_dumps_obj, args)); + c2.reset(Py_BuildValue("(O)", c1.get())); + py_result = PyArg_ParseTuple(c2.get(), "s", &acl_config); + } + } + if (py_result) { + shared_ptr acl( + self->cppobj->load(Element::fromJSON(acl_config))); + s_RequestACL* py_acl = static_cast( + requestacl_type.tp_alloc(&requestacl_type, 0)); + if (py_acl != NULL) { + py_acl->cppobj = acl; + } + return (py_acl); + } + } catch (const PyCPPWrapperException&) { + // If the wrapper utility throws, it's most likely because an invalid + // type of argument is passed (and the call to json.dumps() failed + // above), rather than a rare case of system errors such as memory + // allocation failure. So we fall through to the end of this function + // and raise a TypeError. + ; + } catch (const exception& ex) { + PyErr_SetString(getACLException("LoaderError"), ex.what()); + return (NULL); + } catch (...) { + PyErr_SetString(PyExc_SystemError, "Unexpected C++ exception"); + return (NULL); + } + + PyErr_SetString(PyExc_TypeError, "RequestLoader.load() " + "expects str or python representation of JSON"); + return (NULL); +} + +// This list contains the actual set of functions we have in +// python. Each entry has +// 1. Python method name +// 2. Our static function here +// 3. Argument type +// 4. 
Documentation +PyMethodDef RequestLoader_methods[] = { + { "load", RequestLoader_load, METH_VARARGS, RequestLoader_load_doc }, + { NULL, NULL, 0, NULL } +}; +} // end of unnamed namespace + +namespace isc { +namespace acl { +namespace dns { +namespace python { +// This defines the complete type for reflection in python and +// parsing of PyObject* to s_RequestLoader +// Most of the functions are not actually implemented and NULL here. +PyTypeObject requestloader_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "isc.acl._dns.RequestLoader", + sizeof(s_RequestLoader), // tp_basicsize + 0, // tp_itemsize + RequestLoader_destroy, // tp_dealloc + NULL, // tp_print + NULL, // tp_getattr + NULL, // tp_setattr + NULL, // tp_reserved + NULL, // tp_repr + NULL, // tp_as_number + NULL, // tp_as_sequence + NULL, // tp_as_mapping + NULL, // tp_hash + NULL, // tp_call + NULL, // tp_str + NULL, // tp_getattro + NULL, // tp_setattro + NULL, // tp_as_buffer + Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, // tp_flags + RequestLoader_doc, + NULL, // tp_traverse + NULL, // tp_clear + NULL, // tp_richcompare + 0, // tp_weaklistoffset + NULL, // tp_iter + NULL, // tp_iternext + RequestLoader_methods, // tp_methods + NULL, // tp_members + NULL, // tp_getset + NULL, // tp_base + NULL, // tp_dict + NULL, // tp_descr_get + NULL, // tp_descr_set + 0, // tp_dictoffset + RequestLoader_init, // tp_init + NULL, // tp_alloc + PyType_GenericNew, // tp_new + NULL, // tp_free + NULL, // tp_is_gc + NULL, // tp_bases + NULL, // tp_mro + NULL, // tp_cache + NULL, // tp_subclasses + NULL, // tp_weaklist + NULL, // tp_del + 0 // tp_version_tag +}; + +bool +initModulePart_RequestLoader(PyObject* mod) { + // We initialize the static description object with PyType_Ready(), + // then add it to the module. This is not just a check! (leaving + // this out results in segmentation faults) + if (PyType_Ready(&requestloader_type) < 0) { + return (false); + } + void* p = &requestloader_type; + if (PyModule_AddObject(mod, "RequestLoader", + static_cast(p)) < 0) { + return (false); + } + + // Get and hold our own reference to json.dumps() for later use. + // Normally it should succeed as __init__.py of the isc.acl package + // explicitly imports the json module, and the code below should be + // error free (e.g. they don't require memory allocation) under this + // condition. + // This could still fail with deviant or evil Python code such as those + // that first import json and then delete the reference to it from + // sys.modules before it imports the acl.dns module. The RequestLoader + // class could still work as long as it doesn't use the JSON decoder, + // but we'd rather refuse to import the module than allowing the partially + // workable class to keep running. 
+ PyObject* json_module = PyImport_AddModule("json"); + if (json_module != NULL) { + PyObject* json_dict = PyModule_GetDict(json_module); + if (json_dict != NULL) { + json_dumps_obj = PyDict_GetItemString(json_dict, "dumps"); + } + } + if (json_dumps_obj != NULL) { + Py_INCREF(json_dumps_obj); + } else { + PyErr_SetString(PyExc_RuntimeError, + "isc.acl.dns.RequestLoader needs the json module, but " + "it's missing"); + return (false); + } + + Py_INCREF(&requestloader_type); + + return (true); +} +} // namespace python +} // namespace dns +} // namespace acl +} // namespace isc diff --git a/src/lib/python/isc/acl/dns_requestloader_python.h b/src/lib/python/isc/acl/dns_requestloader_python.h new file mode 100644 index 0000000000..9d0b63ecee --- /dev/null +++ b/src/lib/python/isc/acl/dns_requestloader_python.h @@ -0,0 +1,46 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_REQUESTLOADER_H +#define __PYTHON_REQUESTLOADER_H 1 + +#include + +#include + +namespace isc { +namespace acl { +namespace dns { +namespace python { + +// The s_* Class simply covers one instantiation of the object +class s_RequestLoader : public PyObject { +public: + s_RequestLoader(); + RequestLoader* cppobj; +}; + +extern PyTypeObject requestloader_type; + +bool initModulePart_RequestLoader(PyObject* mod); + +} // namespace python +} // namespace dns +} // namespace acl +} // namespace isc +#endif // __PYTHON_REQUESTLOADER_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/python/isc/acl/dnsacl_inc.cc b/src/lib/python/isc/acl/dnsacl_inc.cc new file mode 100644 index 0000000000..b2e733821d --- /dev/null +++ b/src/lib/python/isc/acl/dnsacl_inc.cc @@ -0,0 +1,17 @@ +namespace { +const char* const dnsacl_doc = "\ +Implementation module for DNS ACL operations\n\n\ +This module provides Python bindings for the C++ classes in the\n\ +isc::acl::dns namespace. Specifically, it defines Python interfaces of\n\ +handling access control lists (ACLs) with DNS related contexts.\n\ +These bindings are close match to the C++ API, but they are not complete\n\ +(some parts are not needed) and some are done in more python-like ways.\n\ +\n\ +Special objects:\n\ +\n\ +REQUEST_LOADER -- A singleton loader of ACLs. 
It is expected applications\n\ + will use this function instead of creating their own loaders, because\n\ + one is enough, this one will have registered default checks and it is\n\ + known one, so any plugins can registrer additional checks as well.\n\ +"; +} // unnamed namespace diff --git a/src/lib/python/isc/acl/tests/Makefile.am b/src/lib/python/isc/acl/tests/Makefile.am new file mode 100644 index 0000000000..e0a1895d70 --- /dev/null +++ b/src/lib/python/isc/acl/tests/Makefile.am @@ -0,0 +1,30 @@ +PYCOVERAGE_RUN = @PYCOVERAGE_RUN@ +PYTESTS = acl_test.py dns_test.py + +EXTRA_DIST = $(PYTESTS) + +# If necessary (rare cases), explicitly specify paths to dynamic libraries +# required by loadable python modules. +LIBRARY_PATH_PLACEHOLDER = +if SET_ENV_LIBRARY_PATH +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/acl/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) +endif + +# test using command-line arguments, so use check-local target instead of TESTS +check-local: +if ENABLE_PYTHON_COVERAGE + touch $(abs_top_srcdir)/.coverage + rm -f .coverage + ${LN_S} $(abs_top_srcdir)/.coverage .coverage +endif + for pytest in $(PYTESTS) ; do \ + echo Running test: $$pytest ; \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/isc/python/acl/.libs \ + $(LIBRARY_PATH_PLACEHOLDER) \ + $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ + done + +CLEANDIRS = __pycache__ + +clean-local: + rm -rf $(CLEANDIRS) diff --git a/src/lib/python/isc/acl/tests/acl_test.py b/src/lib/python/isc/acl/tests/acl_test.py new file mode 100644 index 0000000000..24a0c94f25 --- /dev/null +++ b/src/lib/python/isc/acl/tests/acl_test.py @@ -0,0 +1,29 @@ +# Copyright (C) 2011 Internet Systems Consortium. +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +import unittest +from isc.acl.acl import * + +class ACLTest(unittest.TestCase): + + def test_actions(self): + # These are simple tests just checking the pre defined actions have + # different values + self.assertTrue(ACCEPT != REJECT) + self.assertTrue(REJECT != DROP) + self.assertTrue(DROP != ACCEPT) + +if __name__ == '__main__': + unittest.main() diff --git a/src/lib/python/isc/acl/tests/dns_test.py b/src/lib/python/isc/acl/tests/dns_test.py new file mode 100644 index 0000000000..7ee3023454 --- /dev/null +++ b/src/lib/python/isc/acl/tests/dns_test.py @@ -0,0 +1,357 @@ +# Copyright (C) 2011 Internet Systems Consortium. 
+#
+# Permission to use, copy, modify, and distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
+# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import unittest
+import socket
+from pydnspp import *
+from isc.acl.acl import LoaderError, Error, ACCEPT, REJECT, DROP
+from isc.acl.dns import *
+
+def get_sockaddr(address, port):
+    '''This is a simple shortcut wrapper for getaddrinfo'''
+    ai = socket.getaddrinfo(address, port, 0, socket.SOCK_DGRAM,
+                            socket.IPPROTO_UDP, socket.AI_NUMERICHOST)[0]
+    return ai[4]
+
+def get_acl(prefix):
+    '''This is a simple shortcut for creating an ACL containing a single
+    rule that accepts addresses within the given IP prefix (and rejects
+    any others by default)
+    '''
+    return REQUEST_LOADER.load('[{"action": "ACCEPT", "from": "' + \
+                               prefix + '"}]')
+
+def get_acl_json(prefix):
+    '''Same as get_acl, but this function passes a Python representation of
+    JSON to the loader, not a string.'''
+    json = [{"action": "ACCEPT"}]
+    json[0]["from"] = prefix
+    return REQUEST_LOADER.load(json)
+
+# The following two are similar to the previous two, but use a TSIG key name
+# instead of an IP prefix.
+def get_tsig_acl(key):
+    return REQUEST_LOADER.load('[{"action": "ACCEPT", "key": "' + \
+                               key + '"}]')
+
+def get_tsig_acl_json(key):
+    json = [{"action": "ACCEPT"}]
+    json[0]["key"] = key
+    return REQUEST_LOADER.load(json)
+
+# Commonly used TSIG RDATA. For the purpose of ACL checks only the key name
+# matters; other parameters are simply borrowed from some other tests, which
+# can be anything for the purpose of the tests here.
+TSIG_RDATA = TSIG("hmac-md5.sig-alg.reg.int. 1302890362 " + \
+                  "300 16 2tra2tra2tra2tra2tra2g== " + \
+                  "11621 0 0")
+
+def get_context(address, key_name=None):
+    '''This is a simple shortcut wrapper for creating a RequestContext
+    object with a given IP address and optionally a TSIG key name.
+    The port number doesn't matter in the test (as of the initial
+    implementation), so it's fixed for simplicity.
+    If key_name is not None, it internally creates a (faked) TSIG record
+    and constructs a context with that key. Note that only the key name
+    matters for the purpose of ACL checks.
+    '''
+    tsig_record = None
+    if key_name is not None:
+        tsig_record = TSIGRecord(Name(key_name), TSIG_RDATA)
+    return RequestContext(get_sockaddr(address, 53000), tsig_record)
+
+# These are commonly used RequestContext objects
+CONTEXT4 = get_context('192.0.2.1')
+CONTEXT6 = get_context('2001:db8::1')
+
+class RequestContextTest(unittest.TestCase):
+
+    def test_construct(self):
+        # Construct the context from IPv4/IPv6 addresses, check the object
+        # by printing it.
+        self.assertEqual('<isc.acl.dns.RequestContext object, ' +
+                         'remote_addr=[192.0.2.1]:53001>',
+                         RequestContext(('192.0.2.1', 53001)).__str__())
+        self.assertEqual('<isc.acl.dns.RequestContext object, ' +
+                         'remote_addr=[2001:db8::1234]:53006>',
+                         RequestContext(('2001:db8::1234', 53006,
+                                         0, 0)).__str__())
+
+        # Construct the context from an IP address and a TSIG record.
+        tsig_record = TSIGRecord(Name("key.example.com"), TSIG_RDATA)
+        self.assertEqual('<isc.acl.dns.RequestContext object, ' +
+                         'remote_addr=[192.0.2.1]:53001, ' +
+                         'key=key.example.com.>',
+                         RequestContext(('192.0.2.1', 53001),
+                                        tsig_record).__str__())
+
+        # same with an IPv6 address, just in case.
+        self.assertEqual('<isc.acl.dns.RequestContext object, ' +
+                         'remote_addr=[2001:db8::1234]:53006, ' +
+                         'key=key.example.com.>',
+                         RequestContext(('2001:db8::1234', 53006,
+                                         0, 0), tsig_record).__str__())
+
+        # Unusual case: port number overflows (this constructor allows that,
+        # although it should be rare anyway; the socket address should
+        # normally come from the Python socket module).
+        self.assertEqual('<isc.acl.dns.RequestContext object, ' +
+                         'remote_addr=[192.0.2.1]:0>',
+                         RequestContext(('192.0.2.1', 65536)).__str__())
+
+        # same test using socket.getaddrinfo() to ensure it accepts the
+        # socket address representation used in the Python socket module.
+        self.assertEqual('<isc.acl.dns.RequestContext object, ' +
+                         'remote_addr=[192.0.2.1]:53001>',
+                         RequestContext(get_sockaddr('192.0.2.1',
+                                                     53001)).__str__())
+        self.assertEqual('<isc.acl.dns.RequestContext object, ' +
+                         'remote_addr=[2001:db8::1234]:53006>',
+                         RequestContext(get_sockaddr('2001:db8::1234',
+                                                     53006)).__str__())
+
+        #
+        # Invalid parameters (in our expected usage this should not happen
+        # because the sockaddr would come from the Python socket module, but
+        # validation should still be performed correctly)
+        #
+        # not a tuple
+        self.assertRaises(TypeError, RequestContext, 1)
+        # invalid number of parameters
+        self.assertRaises(TypeError, RequestContext, ('192.0.2.1', 53), 0, 1)
+        # type error for TSIG
+        self.assertRaises(TypeError, RequestContext, ('192.0.2.1', 53), tsig=1)
+        # tuple is not in the form of sockaddr
+        self.assertRaises(TypeError, RequestContext, (0, 53))
+        self.assertRaises(TypeError, RequestContext, ('192.0.2.1', 'http'))
+        self.assertRaises(TypeError, RequestContext, ('::', 0, 'flow', 0))
+        # invalid address
+        self.assertRaises(Error, RequestContext, ('example.com', 5300))
+        self.assertRaises(Error, RequestContext, ('192.0.2.1.1', 5300))
+        self.assertRaises(Error, RequestContext, ('2001:db8:::1', 5300))
+
+class RequestACLTest(unittest.TestCase):
+
+    def test_direct_construct(self):
+        self.assertRaises(Error, RequestACL)
+
+    def test_request_loader(self):
+        # these shouldn't raise an exception
+        REQUEST_LOADER.load('[{"action": "DROP"}]')
+        REQUEST_LOADER.load([{"action": "DROP"}])
+        REQUEST_LOADER.load('[{"action": "DROP", "from": "192.0.2.1"}]')
+        REQUEST_LOADER.load([{"action": "DROP", "from": "192.0.2.1"}])
+
+        # Invalid types (note that arguments like '1' or '[]' are of valid
+        # 'type', but are syntax errors at a higher level). So we need to use
+        # something that is neither JSON nor a string.
+ self.assertRaises(TypeError, REQUEST_LOADER.load, b'') + + # Incorrect number of arguments + self.assertRaises(TypeError, REQUEST_LOADER.load, + '[{"action": "DROP"}]', 0) + + def test_bad_acl_syntax(self): + # the following are derived from loader_test.cc + self.assertRaises(LoaderError, REQUEST_LOADER.load, '{}'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, {}); + self.assertRaises(LoaderError, REQUEST_LOADER.load, '42'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, 42); + self.assertRaises(LoaderError, REQUEST_LOADER.load, 'true'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, True); + self.assertRaises(LoaderError, REQUEST_LOADER.load, 'null'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, None); + self.assertRaises(LoaderError, REQUEST_LOADER.load, '"hello"'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, "hello"); + self.assertRaises(LoaderError, REQUEST_LOADER.load, '[42]'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, [42]); + self.assertRaises(LoaderError, REQUEST_LOADER.load, '["hello"]'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, ["hello"]); + self.assertRaises(LoaderError, REQUEST_LOADER.load, '[[]]'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, [[]]); + self.assertRaises(LoaderError, REQUEST_LOADER.load, '[true]'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, [True]); + self.assertRaises(LoaderError, REQUEST_LOADER.load, '[null]'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, [None]); + self.assertRaises(LoaderError, REQUEST_LOADER.load, '[{}]'); + self.assertRaises(LoaderError, REQUEST_LOADER.load, [{}]); + + # the following are derived from dns_test.cc + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "ACCEPT", "bad": "192.0.2.1"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "ACCEPT", "bad": "192.0.2.1"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "ACCEPT", "from": 4}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "ACCEPT", "from": 4}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "ACCEPT", "from": []}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "ACCEPT", "from": []}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "ACCEPT", "key": 1}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "ACCEPT", "key": 1}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "ACCEPT", "key": {}}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "ACCEPT", "key": {}}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "ACCEPT", "from": "bad"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "ACCEPT", "from": "bad"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "ACCEPT", "key": "bad..name"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "ACCEPT", "key": "bad..name"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "ACCEPT", "from": null}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "ACCEPT", "from": None}]) + + def test_bad_acl_ipsyntax(self): + # this test is derived from ip_check_unittest.cc + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "DROP", "from": "192.0.2.43/-1"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "DROP", "from": "192.0.2.43/-1"}]) + self.assertRaises(LoaderError, 
REQUEST_LOADER.load, + '[{"action": "DROP", "from": "192.0.2.43//1"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "DROP", "from": "192.0.2.43//1"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "DROP", "from": "192.0.2.43/1/"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "DROP", "from": "192.0.2.43/1/"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "DROP", "from": "/192.0.2.43/1"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "DROP", "from": "/192.0.2.43/1"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "DROP", "from": "2001:db8::/xxxx"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "DROP", "from": "2001:db8::/xxxx"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "DROP", "from": "2001:db8::/32/s"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "DROP", "from": "2001:db8::/32/s"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "DROP", "from": "1/"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "DROP", "from": "1/"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "DROP", "from": "/1"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "DROP", "from": "/1"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "DROP", "from": "192.0.2.0/33"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "DROP", "from": "192.0.2.0/33"}]) + self.assertRaises(LoaderError, REQUEST_LOADER.load, + '[{"action": "DROP", "from": "::1/129"}]') + self.assertRaises(LoaderError, REQUEST_LOADER.load, + [{"action": "DROP", "from": "::1/129"}]) + + def test_execute(self): + # tests derived from dns_test.cc. We don't directly expose checks + # in the python wrapper, so we test it via execute(). 
+ self.assertEqual(ACCEPT, get_acl('192.0.2.1').execute(CONTEXT4)) + self.assertEqual(ACCEPT, get_acl_json('192.0.2.1').execute(CONTEXT4)) + self.assertEqual(REJECT, get_acl('192.0.2.53').execute(CONTEXT4)) + self.assertEqual(REJECT, get_acl_json('192.0.2.53').execute(CONTEXT4)) + self.assertEqual(ACCEPT, get_acl('192.0.2.0/24').execute(CONTEXT4)) + self.assertEqual(ACCEPT, get_acl_json('192.0.2.0/24').execute(CONTEXT4)) + self.assertEqual(REJECT, get_acl('192.0.1.0/24').execute(CONTEXT4)) + self.assertEqual(REJECT, get_acl_json('192.0.1.0/24').execute(CONTEXT4)) + self.assertEqual(REJECT, get_acl('192.0.1.0/24').execute(CONTEXT4)) + self.assertEqual(REJECT, get_acl_json('192.0.1.0/24').execute(CONTEXT4)) + + self.assertEqual(ACCEPT, get_acl('2001:db8::1').execute(CONTEXT6)) + self.assertEqual(ACCEPT, get_acl_json('2001:db8::1').execute(CONTEXT6)) + self.assertEqual(REJECT, get_acl('2001:db8::53').execute(CONTEXT6)) + self.assertEqual(REJECT, get_acl_json('2001:db8::53').execute(CONTEXT6)) + self.assertEqual(ACCEPT, get_acl('2001:db8::/64').execute(CONTEXT6)) + self.assertEqual(ACCEPT, + get_acl_json('2001:db8::/64').execute(CONTEXT6)) + self.assertEqual(REJECT, get_acl('2001:db8:1::/64').execute(CONTEXT6)) + self.assertEqual(REJECT, + get_acl_json('2001:db8:1::/64').execute(CONTEXT6)) + self.assertEqual(REJECT, get_acl('32.1.13.184').execute(CONTEXT6)) + self.assertEqual(REJECT, get_acl_json('32.1.13.184').execute(CONTEXT6)) + + # TSIG checks, derived from dns_test.cc + self.assertEqual(ACCEPT, get_tsig_acl('key.example.com').\ + execute(get_context('192.0.2.1', + 'key.example.com'))) + self.assertEqual(REJECT, get_tsig_acl_json('key.example.com').\ + execute(get_context('192.0.2.1', + 'badkey.example.com'))) + self.assertEqual(ACCEPT, get_tsig_acl('key.example.com').\ + execute(get_context('2001:db8::1', + 'key.example.com'))) + self.assertEqual(REJECT, get_tsig_acl_json('key.example.com').\ + execute(get_context('2001:db8::1', + 'badkey.example.com'))) + self.assertEqual(REJECT, get_tsig_acl('key.example.com').\ + execute(CONTEXT4)) + self.assertEqual(REJECT, get_tsig_acl_json('key.example.com').\ + execute(CONTEXT4)) + self.assertEqual(REJECT, get_tsig_acl('key.example.com').\ + execute(CONTEXT6)) + self.assertEqual(REJECT, get_tsig_acl_json('key.example.com').\ + execute(CONTEXT6)) + + # A bit more complicated example, derived from resolver_config_unittest + acl = REQUEST_LOADER.load('[ {"action": "ACCEPT", ' + + ' "from": "192.0.2.1"},' + + ' {"action": "REJECT",' + + ' "from": "192.0.2.0/24"},' + + ' {"action": "DROP",' + + ' "from": "2001:db8::1"},' + + '] }') + self.assertEqual(ACCEPT, acl.execute(CONTEXT4)) + self.assertEqual(REJECT, acl.execute(get_context('192.0.2.2'))) + self.assertEqual(DROP, acl.execute(get_context('2001:db8::1'))) + self.assertEqual(REJECT, acl.execute(get_context('2001:db8::2'))) + + # same test using the JSON representation + acl = REQUEST_LOADER.load([{"action": "ACCEPT", "from": "192.0.2.1"}, + {"action": "REJECT", + "from": "192.0.2.0/24"}, + {"action": "DROP", "from": "2001:db8::1"}]) + self.assertEqual(ACCEPT, acl.execute(CONTEXT4)) + self.assertEqual(REJECT, acl.execute(get_context('192.0.2.2'))) + self.assertEqual(DROP, acl.execute(get_context('2001:db8::1'))) + self.assertEqual(REJECT, acl.execute(get_context('2001:db8::2'))) + + def test_bad_execute(self): + acl = get_acl('192.0.2.1') + # missing parameter + self.assertRaises(TypeError, acl.execute) + # too many parameters + self.assertRaises(TypeError, acl.execute, get_context('192.0.2.2'), 0) + 
# type mismatch + self.assertRaises(TypeError, acl.execute, 'bad parameter') + +class RequestLoaderTest(unittest.TestCase): + # Note: loading ACLs is tested in other test cases. + + def test_construct(self): + # at least for now, we don't allow direct construction. + self.assertRaises(Error, RequestLoader) + +if __name__ == '__main__': + unittest.main() diff --git a/src/lib/python/isc/bind10/Makefile.am b/src/lib/python/isc/bind10/Makefile.am new file mode 100644 index 0000000000..43a7605f7d --- /dev/null +++ b/src/lib/python/isc/bind10/Makefile.am @@ -0,0 +1,4 @@ +SUBDIRS = . tests + +python_PYTHON = __init__.py sockcreator.py +pythondir = $(pyexecdir)/isc/bind10 diff --git a/src/bin/stats/tests/isc/__init__.py b/src/lib/python/isc/bind10/__init__.py similarity index 100% rename from src/bin/stats/tests/isc/__init__.py rename to src/lib/python/isc/bind10/__init__.py diff --git a/src/lib/python/isc/bind10/sockcreator.py b/src/lib/python/isc/bind10/sockcreator.py new file mode 100644 index 0000000000..8e5b019536 --- /dev/null +++ b/src/lib/python/isc/bind10/sockcreator.py @@ -0,0 +1,226 @@ +# Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +import socket +import struct +import os +import subprocess +from isc.log_messages.bind10_messages import * +from libutil_io_python import recv_fd + +logger = isc.log.Logger("boss") + +""" +Module that comunicates with the privileged socket creator (b10-sockcreator). +""" + +class CreatorError(Exception): + """ + Exception for socket creator related errors. + + It has two members: fatal and errno and they are just holding the values + passed to the __init__ function. + """ + + def __init__(self, message, fatal, errno=None): + """ + Creates the exception. The message argument is the usual string. + The fatal one tells if the error is fatal (eg. the creator crashed) + and errno is the errno value returned from socket creator, if + applicable. + """ + Exception.__init__(self, message) + self.fatal = fatal + self.errno = errno + +class Parser: + """ + This class knows the sockcreator language. It creates commands, sends them + and receives the answers and parses them. + + It does not start it, the communication channel must be provided. + + In theory, anything here can throw a fatal CreatorError exception, but it + happens only in case something like the creator process crashes. Any other + occasions are mentioned explicitly. + """ + + def __init__(self, creator_socket): + """ + Creates the parser. The creator_socket is socket to the socket creator + process that will be used for communication. However, the object must + have a read_fd() method to read the file descriptor. This slightly + unusual trick with modifying an object is used to easy up testing. 
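+
+        A minimal usage sketch (illustrative only; 'sock' is assumed to
+        be a connected UNIX-domain stream socket, and IPAddr comes from
+        isc.net.addr):
+
+            parser = Parser(WrappedSocket(sock))
+            fd = parser.get_socket(IPAddr('127.0.0.1'), 53, 'UDP')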
+ + You can use WrappedSocket in production code to add the method to any + ordinary socket. + """ + self.__socket = creator_socket + logger.info(BIND10_SOCKCREATOR_INIT) + + def terminate(self): + """ + Asks the creator process to terminate and waits for it to close the + socket. Does not return anything. Raises a CreatorError if there is + still data on the socket, if there is an error closing the socket, + or if the socket had already been closed. + """ + if self.__socket is None: + raise CreatorError('Terminated already', True) + logger.info(BIND10_SOCKCREATOR_TERMINATE) + try: + self.__socket.sendall(b'T') + # Wait for an EOF - it will return empty data + eof = self.__socket.recv(1) + if len(eof) != 0: + raise CreatorError('Protocol error - data after terminated', + True) + self.__socket = None + except socket.error as se: + self.__socket = None + raise CreatorError(str(se), True) + + def get_socket(self, address, port, socktype): + """ + Asks the socket creator process to create a socket. Pass an address + (the isc.net.IPaddr object), port number and socket type (either + string "UDP", "TCP" or constant socket.SOCK_DGRAM or + socket.SOCK_STREAM. + + Blocks until it is provided by the socket creator process (which + should be fast, as it is on localhost) and returns the file descriptor + number. It raises a CreatorError exception if the creation fails. + """ + if self.__socket is None: + raise CreatorError('Socket requested on terminated creator', True) + # First, assemble the request from parts + logger.info(BIND10_SOCKET_GET, address, port, socktype) + data = b'S' + if socktype == 'UDP' or socktype == socket.SOCK_DGRAM: + data += b'U' + elif socktype == 'TCP' or socktype == socket.SOCK_STREAM: + data += b'T' + else: + raise ValueError('Unknown socket type: ' + str(socktype)) + if address.family == socket.AF_INET: + data += b'4' + elif address.family == socket.AF_INET6: + data += b'6' + else: + raise ValueError('Unknown address family in address') + data += struct.pack('!H', port) + data += address.addr + try: + # Send the request + self.__socket.sendall(data) + answer = self.__socket.recv(1) + if answer == b'S': + # Success! + result = self.__socket.read_fd() + logger.info(BIND10_SOCKET_CREATED, result) + return result + elif answer == b'E': + # There was an error, read the error as well + error = self.__socket.recv(1) + errno = struct.unpack('i', + self.__read_all(len(struct.pack('i', + 0)))) + if error == b'S': + cause = 'socket' + elif error == b'B': + cause = 'bind' + else: + self.__socket = None + logger.fatal(BIND10_SOCKCREATOR_BAD_CAUSE, error) + raise CreatorError('Unknown error cause' + str(answer), True) + logger.error(BIND10_SOCKET_ERROR, cause, errno[0], + os.strerror(errno[0])) + raise CreatorError('Error creating socket on ' + cause, False, + errno[0]) + else: + self.__socket = None + logger.fatal(BIND10_SOCKCREATOR_BAD_RESPONSE, answer) + raise CreatorError('Unknown response ' + str(answer), True) + except socket.error as se: + self.__socket = None + logger.fatal(BIND10_SOCKCREATOR_TRANSPORT_ERROR, str(se)) + raise CreatorError(str(se), True) + + def __read_all(self, length): + """ + Keeps reading until length data is read or EOF or error happens. + + EOF is considered error as well and throws a CreatorError. 
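+
+        (get_socket() above uses this to read the fixed-size errno value
+        that follows an 'E' answer from the creator.)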
+ """ + result = b'' + while len(result) < length: + data = self.__socket.recv(length - len(result)) + if len(data) == 0: + self.__socket = None + logger.fatal(BIND10_SOCKCREATOR_EOF) + raise CreatorError('Unexpected EOF', True) + result += data + return result + +class WrappedSocket: + """ + This class wraps a socket and adds a read_fd method, so it can be used + for the Parser class conveniently. It simply copies all its guts into + itself and implements the method. + """ + def __init__(self, socket): + # Copy whatever can be copied from the socket + for name in dir(socket): + if name not in ['__class__', '__weakref__']: + setattr(self, name, getattr(socket, name)) + # Keep the socket, so we can prevent it from being garbage-collected + # and closed before we are removed ourself + self.__orig_socket = socket + + def read_fd(self): + """ + Read the file descriptor from the socket. + """ + return recv_fd(self.fileno()) + +# FIXME: Any idea how to test this? Starting an external process doesn't sound +# OK +class Creator(Parser): + """ + This starts the socket creator and allows asking for the sockets. + """ + def __init__(self, path): + (local, remote) = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM) + # Popen does not like, for some reason, having the same socket for + # stdin as well as stdout, so we dup it before passing it there. + remote2 = socket.fromfd(remote.fileno(), socket.AF_UNIX, + socket.SOCK_STREAM) + env = os.environ + env['PATH'] = path + self.__process = subprocess.Popen(['b10-sockcreator'], env=env, + stdin=remote.fileno(), + stdout=remote2.fileno()) + remote.close() + remote2.close() + Parser.__init__(self, WrappedSocket(local)) + + def pid(self): + return self.__process.pid + + def kill(self): + logger.warn(BIND10_SOCKCREATOR_KILL) + if self.__process is not None: + self.__process.kill() + self.__process = None diff --git a/src/lib/python/isc/bind10/tests/Makefile.am b/src/lib/python/isc/bind10/tests/Makefile.am new file mode 100644 index 0000000000..df8ab30e21 --- /dev/null +++ b/src/lib/python/isc/bind10/tests/Makefile.am @@ -0,0 +1,29 @@ +PYCOVERAGE_RUN = @PYCOVERAGE_RUN@ +#PYTESTS = args_test.py bind10_test.py +# NOTE: this has a generated test found in the builddir +PYTESTS = sockcreator_test.py + +EXTRA_DIST = $(PYTESTS) + +# If necessary (rare cases), explicitly specify paths to dynamic libraries +# required by loadable python modules. 
+LIBRARY_PATH_PLACEHOLDER = +if SET_ENV_LIBRARY_PATH +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/util/io/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) +endif + +# test using command-line arguments, so use check-local target instead of TESTS +check-local: +if ENABLE_PYTHON_COVERAGE + touch $(abs_top_srcdir)/.coverage + rm -f .coverage + ${LN_S} $(abs_top_srcdir)/.coverage .coverage +endif + for pytest in $(PYTESTS) ; do \ + echo Running test: $$pytest ; \ + $(LIBRARY_PATH_PLACEHOLDER) \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_srcdir)/src/bin:$(abs_top_builddir)/src/bin/bind10:$(abs_top_builddir)/src/lib/util/io/.libs \ + BIND10_MSGQ_SOCKET_FILE=$(abs_top_builddir)/msgq_socket \ + $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ + done + diff --git a/src/lib/python/isc/bind10/tests/sockcreator_test.py b/src/lib/python/isc/bind10/tests/sockcreator_test.py new file mode 100644 index 0000000000..4453184ef5 --- /dev/null +++ b/src/lib/python/isc/bind10/tests/sockcreator_test.py @@ -0,0 +1,327 @@ +# Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +# This test file is generated .py.in -> .py just to be in the build dir, +# same as the rest of the tests. Saves a lot of stuff in makefile. + +""" +Tests for the bind10.sockcreator module. +""" + +import unittest +import struct +import socket +from isc.net.addr import IPAddr +import isc.log +from libutil_io_python import send_fd +from isc.bind10.sockcreator import Parser, CreatorError, WrappedSocket + +class FakeCreator: + """ + Class emulating the socket to the socket creator. It can be given expected + data to receive (and check) and responses to give to the Parser class + during testing. + """ + + class InvalidPlan(Exception): + """ + Raised when someone wants to recv when sending is planned or vice + versa. + """ + pass + + class InvalidData(Exception): + """ + Raises when the data passed to sendall are not the same as expected. + """ + pass + + def __init__(self, plan): + """ + Create the object. The plan variable contains list of expected actions, + in form: + + [('r', 'Data to return from recv'), ('s', 'Data expected on sendall'), + , ('d', 'File descriptor number to return from read_sock'), ('e', + None), ...] + + It modifies the array as it goes. 
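+
+        For instance, a plan of [('s', b'T'), ('r', b'')] (the one used
+        for termination in the tests below) means: expect sendall(b'T'),
+        then answer the next recv() with b'' (EOF).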
+ """ + self.__plan = plan + + def __get_plan(self, expected): + if len(self.__plan) == 0: + raise InvalidPlan('Nothing more planned') + (kind, data) = self.__plan[0] + if kind == 'e': + self.__plan.pop(0) + raise socket.error('False socket error') + if kind != expected: + raise InvalidPlan('Planned ' + kind + ', but ' + expected + + 'requested') + return data + + def recv(self, maxsize): + """ + Emulate recv. Returs maxsize bytes from the current recv plan. If + there are data left from previous recv call, it is used first. + + If no recv is planned, raises InvalidPlan. + """ + data = self.__get_plan('r') + result, rest = data[:maxsize], data[maxsize:] + if len(rest) > 0: + self.__plan[0] = ('r', rest) + else: + self.__plan.pop(0) + return result + + def read_fd(self): + """ + Emulate the reading of file descriptor. Returns one from a plan. + + It raises InvalidPlan if no socket is planned now. + """ + fd = self.__get_plan('f') + self.__plan.pop(0) + return fd + + def sendall(self, data): + """ + Checks that the data passed are correct according to plan. It raises + InvalidData if the data differs or InvalidPlan when sendall is not + expected. + """ + planned = self.__get_plan('s') + dlen = len(data) + prefix, rest = planned[:dlen], planned[dlen:] + if prefix != data: + raise InvalidData('Expected "' + str(prefix)+ '", got "' + + str(data) + '"') + if len(rest) > 0: + self.__plan[0] = ('s', rest) + else: + self.__plan.pop(0) + + def all_used(self): + """ + Returns if the whole plan was consumed. + """ + return len(self.__plan) == 0 + +class ParserTests(unittest.TestCase): + """ + Testcases for the Parser class. + + A lot of these test could be done by + `with self.assertRaises(CreatorError) as cm`. But some versions of python + take the scope wrong and don't work, so we use the primitive way of + try-except. + """ + def __terminate(self): + creator = FakeCreator([('s', b'T'), ('r', b'')]) + parser = Parser(creator) + self.assertEqual(None, parser.terminate()) + self.assertTrue(creator.all_used()) + return parser + + def test_terminate(self): + """ + Test if the command to terminate is correct and it waits for reading the + EOF. + """ + self.__terminate() + + def __terminate_raises(self, parser): + """ + Check that terminate() raises a fatal exception. + """ + try: + parser.terminate() + self.fail("Not raised") + except CreatorError as ce: + self.assertTrue(ce.fatal) + self.assertEqual(None, ce.errno) + + def test_terminate_error1(self): + """ + Test it reports an exception when there's error terminating the creator. + This one raises an error when receiving the EOF. + """ + creator = FakeCreator([('s', b'T'), ('e', None)]) + parser = Parser(creator) + self.__terminate_raises(parser) + + def test_terminate_error2(self): + """ + Test it reports an exception when there's error terminating the creator. + This one raises an error when sending data. + """ + creator = FakeCreator([('e', None)]) + parser = Parser(creator) + self.__terminate_raises(parser) + + def test_terminate_error3(self): + """ + Test it reports an exception when there's error terminating the creator. + This one sends data when it should have terminated. + """ + creator = FakeCreator([('s', b'T'), ('r', b'Extra data')]) + parser = Parser(creator) + self.__terminate_raises(parser) + + def test_terminate_twice(self): + """ + Test we can't terminate twice. 
+ """ + parser = self.__terminate() + self.__terminate_raises(parser) + + def test_crash(self): + """ + Tests that the parser correctly raises exception when it crashes + unexpectedly. + """ + creator = FakeCreator([('s', b'SU4\0\0\0\0\0\0'), ('r', b'')]) + parser = Parser(creator) + try: + parser.get_socket(IPAddr('0.0.0.0'), 0, 'UDP') + self.fail("Not raised") + except CreatorError as ce: + self.assertTrue(creator.all_used()) + # Is the exception correct? + self.assertTrue(ce.fatal) + self.assertEqual(None, ce.errno) + + def test_error(self): + """ + Tests that the parser correctly raises non-fatal exception when + the socket can not be created. + """ + # We split the int to see if it can cope with data coming in + # different packets + intpart = struct.pack('@i', 42) + creator = FakeCreator([('s', b'SU4\0\0\0\0\0\0'), ('r', b'ES' + + intpart[:1]), ('r', intpart[1:])]) + parser = Parser(creator) + try: + parser.get_socket(IPAddr('0.0.0.0'), 0, 'UDP') + self.fail("Not raised") + except CreatorError as ce: + self.assertTrue(creator.all_used()) + # Is the exception correct? + self.assertFalse(ce.fatal) + self.assertEqual(42, ce.errno) + + def __error(self, plan): + creator = FakeCreator(plan) + parser = Parser(creator) + try: + parser.get_socket(IPAddr('0.0.0.0'), 0, socket.SOCK_DGRAM) + self.fail("Not raised") + except CreatorError as ce: + self.assertTrue(creator.all_used()) + self.assertTrue(ce.fatal) + + def test_error_send(self): + self.__error([('e', None)]) + + def test_error_recv(self): + self.__error([('s', b'SU4\0\0\0\0\0\0'), ('e', None)]) + + def test_error_read_fd(self): + self.__error([('s', b'SU4\0\0\0\0\0\0'), ('r', b'S'), ('e', None)]) + + def __create(self, addr, socktype, encoded): + creator = FakeCreator([('s', b'S' + encoded), ('r', b'S'), ('f', 42)]) + parser = Parser(creator) + self.assertEqual(42, parser.get_socket(IPAddr(addr), 42, socktype)) + + def test_create1(self): + self.__create('192.0.2.0', 'UDP', b'U4\0\x2A\xC0\0\x02\0') + + def test_create2(self): + self.__create('2001:db8::', socket.SOCK_STREAM, + b'T6\0\x2A\x20\x01\x0d\xb8\0\0\0\0\0\0\0\0\0\0\0\0') + + def test_create_terminated(self): + """ + Test we can't request sockets after it was terminated. + """ + parser = self.__terminate() + try: + parser.get_socket(IPAddr('0.0.0.0'), 0, 'UDP') + self.fail("Not raised") + except CreatorError as ce: + self.assertTrue(ce.fatal) + self.assertEqual(None, ce.errno) + + def test_invalid_socktype(self): + """ + Test invalid socket type is rejected + """ + self.assertRaises(ValueError, Parser(FakeCreator([])).get_socket, + IPAddr('0.0.0.0'), 42, 'RAW') + + def test_invalid_family(self): + """ + Test it rejects invalid address family. + """ + # Note: this produces a bad logger output, since this address + # can not be converted to string, so the original message with + # placeholders is output. This should not happen in practice, so + # it is harmless. + addr = IPAddr('0.0.0.0') + addr.family = 42 + self.assertRaises(ValueError, Parser(FakeCreator([])).get_socket, + addr, 42, socket.SOCK_DGRAM) + +class WrapTests(unittest.TestCase): + """ + Tests for the wrap_socket function. + """ + def test_wrap(self): + # We construct two pairs of socket. The receiving side of one pair will + # be wrapped. 
Then we send one of the other pair through this pair and + # check the received one can be used as a socket + + # The transport socket + (t1, t2) = socket.socketpair() + # The payload socket + (p1, p2) = socket.socketpair() + + t2 = WrappedSocket(t2) + + # Transfer the descriptor + send_fd(t1.fileno(), p1.fileno()) + p1 = socket.fromfd(t2.read_fd(), socket.AF_UNIX, socket.SOCK_STREAM) + + # Now, pass some data trough the socket + p1.send(b'A') + data = p2.recv(1) + self.assertEqual(b'A', data) + + # Test the wrapping didn't hurt the socket's usual methods + t1.send(b'B') + data = t2.recv(1) + self.assertEqual(b'B', data) + t2.send(b'C') + data = t1.recv(1) + self.assertEqual(b'C', data) + +if __name__ == '__main__': + isc.log.init("bind10") # FIXME Should this be needed? + isc.log.resetUnitTestRootLogger() + unittest.main() diff --git a/src/lib/python/isc/cc/data.py b/src/lib/python/isc/cc/data.py index ce1bba0aeb..76ef94226e 100644 --- a/src/lib/python/isc/cc/data.py +++ b/src/lib/python/isc/cc/data.py @@ -22,8 +22,22 @@ import json -class DataNotFoundError(Exception): pass -class DataTypeError(Exception): pass +class DataNotFoundError(Exception): + """Raised if an identifier does not exist according to a spec file, + or if an item is addressed that is not in the current (or default) + config (such as a nonexistent list or map element)""" + pass + +class DataAlreadyPresentError(Exception): + """Raised if there is an attemt to add an element to a list or a + map that is already present in that list or map (i.e. if 'add' + is used when it should be 'set')""" + pass + +class DataTypeError(Exception): + """Raised if there is an attempt to set an element that is of a + different type than the type specified in the specification.""" + pass def remove_identical(a, b): """Removes the values from dict a that are the same as in dict b. diff --git a/src/lib/python/isc/cc/message.py b/src/lib/python/isc/cc/message.py index 3601c41f5e..3ebcc438c8 100644 --- a/src/lib/python/isc/cc/message.py +++ b/src/lib/python/isc/cc/message.py @@ -35,7 +35,7 @@ def from_wire(data): Raises an AttributeError if the given object has no decode() method (which should return a string). ''' - return json.loads(data.decode('utf8')) + return json.loads(data.decode('utf8'), strict=False) if __name__ == "__main__": import doctest diff --git a/src/lib/python/isc/cc/session.py b/src/lib/python/isc/cc/session.py index fb7dd06ff0..f6b62653be 100644 --- a/src/lib/python/isc/cc/session.py +++ b/src/lib/python/isc/cc/session.py @@ -93,6 +93,19 @@ class Session: self._socket.send(msg) def recvmsg(self, nonblock = True, seq = None): + """Reads a message. If nonblock is true, and there is no + message to read, it returns (None, None). + If seq is not None, it should be a value as returned by + group_sendmsg(), in which case only the response to + that message is returned, and others will be queued until + the next call to this method. + If seq is None, only messages that are *not* responses + will be returned, and responses will be queued. + The queue is checked for relevant messages before data + is read from the socket. 
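+
+        A minimal usage sketch (the group name "ConfigManager" and the
+        message 'msg' are placeholders):
+
+            seq = session.group_sendmsg(msg, "ConfigManager")
+            env, answer = session.recvmsg(False, seq)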
+ Raises a SessionError if there is a JSON decode problem in + the message that is read, or if the session has been closed + prior to the call of recvmsg()""" with self._lock: if len(self._queue) > 0: i = 0; @@ -109,16 +122,22 @@ class Session: if data and len(data) > 2: header_length = struct.unpack('>H', data[0:2])[0] data_length = len(data) - 2 - header_length - if data_length > 0: - env = isc.cc.message.from_wire(data[2:header_length+2]) - msg = isc.cc.message.from_wire(data[header_length + 2:]) - if (seq == None and "reply" not in env) or (seq != None and "reply" in env and seq == env["reply"]): - return env, msg + try: + if data_length > 0: + env = isc.cc.message.from_wire(data[2:header_length+2]) + msg = isc.cc.message.from_wire(data[header_length + 2:]) + if (seq == None and "reply" not in env) or (seq != None and "reply" in env and seq == env["reply"]): + return env, msg + else: + self._queue.append((env,msg)) + return self.recvmsg(nonblock, seq) else: - self._queue.append((env,msg)) - return self.recvmsg(nonblock, seq) - else: - return isc.cc.message.from_wire(data[2:header_length+2]), None + return isc.cc.message.from_wire(data[2:header_length+2]), None + except ValueError as ve: + # TODO: when we have logging here, add a debug + # message printing the data that we were unable + # to parse as JSON + raise SessionError(ve) return None, None def _receive_bytes(self, size): diff --git a/src/lib/python/isc/cc/tests/Makefile.am b/src/lib/python/isc/cc/tests/Makefile.am index 4e49501458..4c2acc05d4 100644 --- a/src/lib/python/isc/cc/tests/Makefile.am +++ b/src/lib/python/isc/cc/tests/Makefile.am @@ -10,7 +10,7 @@ EXTRA_DIST += test_session.py # required by loadable python modules. LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS @@ -23,7 +23,7 @@ endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python \ + PYTHONPATH=$(COMMON_PYTHON_PATH) \ BIND10_TEST_SOCKET_FILE=$(builddir)/test_socket.sock \ $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ done diff --git a/src/lib/python/isc/cc/tests/message_test.py b/src/lib/python/isc/cc/tests/message_test.py index 20242018e4..c417068120 100644 --- a/src/lib/python/isc/cc/tests/message_test.py +++ b/src/lib/python/isc/cc/tests/message_test.py @@ -31,6 +31,10 @@ class MessageTest(unittest.TestCase): self.msg2_str = "{\"aaa\": [1, 1.1, true, false, null]}"; self.msg2_wire = self.msg2_str.encode() + self.msg3 = { "aaa": [ 1, 1.1, True, False, "string\n" ] } + self.msg3_str = "{\"aaa\": [1, 1.1, true, false, \"string\n\" ]}"; + self.msg3_wire = self.msg3_str.encode() + def test_encode_json(self): self.assertEqual(self.msg1_wire, 
isc.cc.message.to_wire(self.msg1)) self.assertEqual(self.msg2_wire, isc.cc.message.to_wire(self.msg2)) @@ -40,6 +44,7 @@ class MessageTest(unittest.TestCase): def test_decode_json(self): self.assertEqual(self.msg1, isc.cc.message.from_wire(self.msg1_wire)) self.assertEqual(self.msg2, isc.cc.message.from_wire(self.msg2_wire)) + self.assertEqual(self.msg3, isc.cc.message.from_wire(self.msg3_wire)) self.assertRaises(AttributeError, isc.cc.message.from_wire, 1) self.assertRaises(ValueError, isc.cc.message.from_wire, b'\x001') diff --git a/src/lib/python/isc/cc/tests/session_test.py b/src/lib/python/isc/cc/tests/session_test.py index fe35a6cbfa..772ed0c961 100644 --- a/src/lib/python/isc/cc/tests/session_test.py +++ b/src/lib/python/isc/cc/tests/session_test.py @@ -274,6 +274,16 @@ class testSession(unittest.TestCase): self.assertEqual({"hello": "b"}, msg) self.assertFalse(sess.has_queued_msgs()) + def test_recv_bad_msg(self): + sess = MySession() + self.assertFalse(sess.has_queued_msgs()) + sess._socket.addrecv({'to': 'someone' }, {'hello': 'b'}) + sess._socket.addrecv({'to': 'someone', 'reply': 1}, {'hello': 'a'}) + # mangle the bytes a bit + sess._socket.recvqueue[5] = sess._socket.recvqueue[5] - 2 + sess._socket.recvqueue = sess._socket.recvqueue[:-2] + self.assertRaises(SessionError, sess.recvmsg, True, 1) + def test_next_sequence(self): sess = MySession() self.assertEqual(sess._sequence, 1) diff --git a/src/lib/python/isc/config/Makefile.am b/src/lib/python/isc/config/Makefile.am index 1efb6fc06c..ef696fb4c1 100644 --- a/src/lib/python/isc/config/Makefile.am +++ b/src/lib/python/isc/config/Makefile.am @@ -1,19 +1,31 @@ SUBDIRS = . tests python_PYTHON = __init__.py ccsession.py cfgmgr.py config_data.py module_spec.py -pyexec_DATA = cfgmgr_messages.py - pythondir = $(pyexecdir)/isc/config -# Define rule to build logging source files from message file -cfgmgr_messages.py: cfgmgr_messages.mes - $(top_builddir)/src/lib/log/compiler/message -p $(top_srcdir)/src/lib/python/isc/config/cfgmgr_messages.mes +BUILT_SOURCES = $(PYTHON_LOGMSGPKG_DIR)/work/cfgmgr_messages.py +BUILT_SOURCES += $(PYTHON_LOGMSGPKG_DIR)/work/config_messages.py +nodist_pylogmessage_PYTHON = $(PYTHON_LOGMSGPKG_DIR)/work/cfgmgr_messages.py +nodist_pylogmessage_PYTHON += $(PYTHON_LOGMSGPKG_DIR)/work/config_messages.py +pylogmessagedir = $(pyexecdir)/isc/log_messages/ -CLEANFILES = cfgmgr_messages.py cfgmgr_messages.pyc +CLEANFILES = $(PYTHON_LOGMSGPKG_DIR)/work/cfgmgr_messages.py +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/cfgmgr_messages.pyc +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/config_messages.py +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/config_messages.pyc CLEANDIRS = __pycache__ -EXTRA_DIST = cfgmgr_messages.mes +EXTRA_DIST = cfgmgr_messages.mes config_messages.mes + +# Define rule to build logging source files from message file +$(PYTHON_LOGMSGPKG_DIR)/work/cfgmgr_messages.py : cfgmgr_messages.mes + $(top_builddir)/src/lib/log/compiler/message \ + -d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/cfgmgr_messages.mes + +$(PYTHON_LOGMSGPKG_DIR)/work/config_messages.py : config_messages.mes + $(top_builddir)/src/lib/log/compiler/message \ + -d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/config_messages.mes clean-local: rm -rf $(CLEANDIRS) diff --git a/src/lib/python/isc/config/ccsession.py b/src/lib/python/isc/config/ccsession.py index bff4f58c84..d07df1e3e8 100644 --- a/src/lib/python/isc/config/ccsession.py +++ b/src/lib/python/isc/config/ccsession.py @@ -43,6 +43,9 @@ from isc.util.file import path_search import 
bind10_config from isc.log import log_config_update import json +from isc.log_messages.config_messages import * + +logger = isc.log.Logger("config") class ModuleCCSessionError(Exception): pass @@ -88,6 +91,7 @@ COMMAND_CONFIG_UPDATE = "config_update" COMMAND_MODULE_SPECIFICATION_UPDATE = "module_specification_update" COMMAND_GET_COMMANDS_SPEC = "get_commands_spec" +COMMAND_GET_STATISTICS_SPEC = "get_statistics_spec" COMMAND_GET_CONFIG = "get_config" COMMAND_SET_CONFIG = "set_config" COMMAND_GET_MODULE_SPEC = "get_module_spec" @@ -127,10 +131,7 @@ def default_logconfig_handler(new_config, config_data): isc.log.log_config_update(json.dumps(new_config), json.dumps(config_data.get_module_spec().get_full_spec())) else: - # no logging here yet, TODO: log these errors - print("Error in logging configuration, ignoring config update: ") - for err in errors: - print(err) + logger.error(CONFIG_LOG_CONFIG_ERRORS, errors) class ModuleCCSession(ConfigData): """This class maintains a connection to the command channel, as @@ -142,7 +143,7 @@ class ModuleCCSession(ConfigData): callbacks are called when 'check_command' is called on the ModuleCCSession""" - def __init__(self, spec_file_name, config_handler, command_handler, cc_session=None, handle_logging_config=False): + def __init__(self, spec_file_name, config_handler, command_handler, cc_session=None, handle_logging_config=True): """Initialize a ModuleCCSession. This does *NOT* send the specification and request the configuration yet. Use start() for that once the ModuleCCSession has been initialized. @@ -163,7 +164,7 @@ class ModuleCCSession(ConfigData): the logger manager to apply it. It will also inform the logger manager when the logging configuration gets updated. The module does not need to do anything except intializing - its loggers, and provide log messages + its loggers, and provide log messages. Defaults to true. 
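+
+        A typical construction looks like this (sketch; the spec file
+        name and the two handler functions are placeholders):
+
+            ccs = ModuleCCSession("mymodule.spec", my_config_handler,
+                                  my_command_handler)
+            ccs.start()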
""" module_spec = isc.config.module_spec_from_file(spec_file_name) ConfigData.__init__(self, module_spec) @@ -312,7 +313,7 @@ class ModuleCCSession(ConfigData): module_spec = isc.config.module_spec_from_file(spec_file_name) module_cfg = ConfigData(module_spec) module_name = module_spec.get_module_name() - self._session.group_subscribe(module_name); + self._session.group_subscribe(module_name) # Get the current config for that module now seq = self._session.group_sendmsg(create_command(COMMAND_GET_CONFIG, { "module_name": module_name }), "ConfigManager") @@ -327,7 +328,7 @@ class ModuleCCSession(ConfigData): rcode, value = parse_answer(answer) if rcode == 0: if value != None and module_spec.validate_config(False, value): - module_cfg.set_local_config(value); + module_cfg.set_local_config(value) if config_update_callback is not None: config_update_callback(value, module_cfg) @@ -377,7 +378,7 @@ class ModuleCCSession(ConfigData): if self.get_module_spec().validate_config(False, value, errors): - self.set_local_config(value); + self.set_local_config(value) if self._config_handler: self._config_handler(value) else: @@ -385,8 +386,7 @@ class ModuleCCSession(ConfigData): "Wrong data in configuration: " + " ".join(errors)) else: - # log error - print("[" + self._module_name + "] Error requesting configuration: " + value) + logger.error(CONFIG_GET_FAILED, value) else: raise ModuleCCSessionError("No answer from configuration manager") except isc.cc.SessionTimeout: @@ -415,8 +415,8 @@ class UIModuleCCSession(MultiConfigData): self.set_specification(isc.config.ModuleSpec(specs[module])) def update_specs_and_config(self): - self.request_specifications(); - self.request_current_config(); + self.request_specifications() + self.request_current_config() def request_current_config(self): """Requests the current configuration from the configuration @@ -426,47 +426,90 @@ class UIModuleCCSession(MultiConfigData): raise ModuleCCSessionError("Bad config version") self._set_current_config(config) - - def add_value(self, identifier, value_str = None): - """Add a value to a configuration list. Raises a DataTypeError - if the value does not conform to the list_item_spec field - of the module config data specification. If value_str is - not given, we add the default as specified by the .spec - file.""" - module_spec = self.find_spec_part(identifier) - if (type(module_spec) != dict or "list_item_spec" not in module_spec): - raise isc.cc.data.DataNotFoundError(str(identifier) + " is not a list") - + def _add_value_to_list(self, identifier, value, module_spec): cur_list, status = self.get_value(identifier) if not cur_list: cur_list = [] - # Hmm. Do we need to check for duplicates? - value = None - if value_str is not None: - value = isc.cc.data.parse_value_str(value_str) - else: + if value is None: if "item_default" in module_spec["list_item_spec"]: value = module_spec["list_item_spec"]["item_default"] if value is None: - raise isc.cc.data.DataNotFoundError("No value given and no default for " + str(identifier)) - + raise isc.cc.data.DataNotFoundError( + "No value given and no default for " + str(identifier)) + if value not in cur_list: cur_list.append(value) self.set_value(identifier, cur_list) + else: + raise isc.cc.data.DataAlreadyPresentError(value + + " already in " + + identifier) - def remove_value(self, identifier, value_str): - """Remove a value from a configuration list. The value string - must be a string representation of the full item. 
Raises - a DataTypeError if the value at the identifier is not a list, - or if the given value_str does not match the list_item_spec - """ + def _add_value_to_named_set(self, identifier, value, item_value): + if type(value) != str: + raise isc.cc.data.DataTypeError("Name for named_set " + + identifier + + " must be a string") + # fail on both None and empty string + if not value: + raise isc.cc.data.DataNotFoundError( + "Need a name to add a new item to named_set " + + str(identifier)) + else: + cur_map, status = self.get_value(identifier) + if not cur_map: + cur_map = {} + if value not in cur_map: + cur_map[value] = item_value + self.set_value(identifier, cur_map) + else: + raise isc.cc.data.DataAlreadyPresentError(value + + " already in " + + identifier) + + def add_value(self, identifier, value_str = None, set_value_str = None): + """Add a value to a configuration list. Raises a DataTypeError + if the value does not conform to the list_item_spec field + of the module config data specification. If value_str is + not given, we add the default as specified by the .spec + file. Raises a DataNotFoundError if the given identifier + is not specified in the specification as a map or list. + Raises a DataAlreadyPresentError if the specified element + already exists.""" module_spec = self.find_spec_part(identifier) - if (type(module_spec) != dict or "list_item_spec" not in module_spec): - raise isc.cc.data.DataNotFoundError(str(identifier) + " is not a list") + if module_spec is None: + raise isc.cc.data.DataNotFoundError("Unknown item " + str(identifier)) - if value_str is None: + # the specified element must be a list or a named_set + if 'list_item_spec' in module_spec: + value = None + # in lists, we might get the value with spaces, making it + # the third argument. 
In that case we interpret both as + # one big string meant as the value + if value_str is not None: + if set_value_str is not None: + value_str += set_value_str + value = isc.cc.data.parse_value_str(value_str) + self._add_value_to_list(identifier, value, module_spec) + elif 'named_set_item_spec' in module_spec: + item_name = None + item_value = None + if value_str is not None: + item_name = isc.cc.data.parse_value_str(value_str) + if set_value_str is not None: + item_value = isc.cc.data.parse_value_str(set_value_str) + else: + if 'item_default' in module_spec['named_set_item_spec']: + item_value = module_spec['named_set_item_spec']['item_default'] + self._add_value_to_named_set(identifier, item_name, + item_value) + else: + raise isc.cc.data.DataNotFoundError(str(identifier) + " is not a list or a named set") + + def _remove_value_from_list(self, identifier, value): + if value is None: # we are directly removing an list index id, list_indices = isc.cc.data.split_identifier_list_indices(identifier) if list_indices is None: @@ -474,17 +517,52 @@ class UIModuleCCSession(MultiConfigData): else: self.set_value(identifier, None) else: - value = isc.cc.data.parse_value_str(value_str) - isc.config.config_data.check_type(module_spec, [value]) cur_list, status = self.get_value(identifier) - #if not cur_list: - # cur_list = isc.cc.data.find_no_exc(self.config.data, identifier) if not cur_list: cur_list = [] - if value in cur_list: + elif value in cur_list: cur_list.remove(value) self.set_value(identifier, cur_list) + def _remove_value_from_named_set(self, identifier, value): + if value is None: + raise isc.cc.data.DataNotFoundError("Need a name to remove an item from named_set " + str(identifier)) + elif type(value) != str: + raise isc.cc.data.DataTypeError("Name for named_set " + identifier + " must be a string") + else: + cur_map, status = self.get_value(identifier) + if not cur_map: + cur_map = {} + if value in cur_map: + del cur_map[value] + else: + raise isc.cc.data.DataNotFoundError(value + " not found in named_set " + str(identifier)) + + def remove_value(self, identifier, value_str): + """Remove a value from a configuration list or named set. + The value string must be a string representation of the full + item. 
Raises a DataTypeError if the value at the identifier + is not a list, or if the given value_str does not match the + list_item_spec """ + module_spec = self.find_spec_part(identifier) + if module_spec is None: + raise isc.cc.data.DataNotFoundError("Unknown item " + str(identifier)) + + value = None + if value_str is not None: + value = isc.cc.data.parse_value_str(value_str) + + if 'list_item_spec' in module_spec: + if value is not None: + isc.config.config_data.check_type(module_spec['list_item_spec'], value) + self._remove_value_from_list(identifier, value) + elif 'named_set_item_spec' in module_spec: + self._remove_value_from_named_set(identifier, value) + else: + raise isc.cc.data.DataNotFoundError(str(identifier) + " is not a list or a named_set") + + + def commit(self): """Commit all local changes, send them through b10-cmdctl to the configuration manager""" @@ -498,7 +576,6 @@ class UIModuleCCSession(MultiConfigData): self.request_current_config() self.clear_local_changes() elif "error" in answer: - print("Error: " + answer["error"]) - print("Configuration not committed") + raise ModuleCCSessionError("Error: " + str(answer["error"]) + "\n" + "Configuration not committed") else: raise ModuleCCSessionError("Unknown format of answer in commit(): " + str(answer)) diff --git a/src/lib/python/isc/config/cfgmgr.py b/src/lib/python/isc/config/cfgmgr.py index 83db159794..9996a19852 100644 --- a/src/lib/python/isc/config/cfgmgr.py +++ b/src/lib/python/isc/config/cfgmgr.py @@ -32,7 +32,7 @@ from isc.config import ccsession, config_data, module_spec from isc.util.file import path_search import bind10_config import isc.log -from cfgmgr_messages import * +from isc.log_messages.cfgmgr_messages import * logger = isc.log.Logger("cfgmgr") @@ -267,6 +267,19 @@ class ConfigManager: commands[module_name] = self.module_specs[module_name].get_commands_spec() return commands + def get_statistics_spec(self, name = None): + """Returns a dict containing 'module_name': statistics_spec for + all modules. 
If name is specified, only that module will + be included""" + statistics = {} + if name: + if name in self.module_specs: + statistics[name] = self.module_specs[name].get_statistics_spec() + else: + for module_name in self.module_specs.keys(): + statistics[module_name] = self.module_specs[module_name].get_statistics_spec() + return statistics + def read_config(self): """Read the current configuration from the file specificied at init()""" try: @@ -380,6 +393,9 @@ class ConfigManager: answer, env = self.cc.group_recvmsg(False, seq) except isc.cc.SessionTimeout: answer = ccsession.create_answer(1, "Timeout waiting for answer from " + module_name) + except isc.cc.SessionError as se: + logger.error(CFGMGR_BAD_UPDATE_RESPONSE_FROM_MODULE, module_name, se) + answer = ccsession.create_answer(1, "Unable to parse response from " + module_name + ": " + str(se)) if answer: rcode, val = ccsession.parse_answer(answer) if rcode == 0: @@ -454,6 +470,8 @@ class ConfigManager: if cmd: if cmd == ccsession.COMMAND_GET_COMMANDS_SPEC: answer = ccsession.create_answer(0, self.get_commands_spec()) + elif cmd == ccsession.COMMAND_GET_STATISTICS_SPEC: + answer = ccsession.create_answer(0, self.get_statistics_spec()) elif cmd == ccsession.COMMAND_GET_MODULE_SPEC: answer = self._handle_get_module_spec(arg) elif cmd == ccsession.COMMAND_GET_CONFIG: diff --git a/src/lib/python/isc/config/cfgmgr_messages.mes b/src/lib/python/isc/config/cfgmgr_messages.mes index 9355e4d976..61a63ed2f7 100644 --- a/src/lib/python/isc/config/cfgmgr_messages.mes +++ b/src/lib/python/isc/config/cfgmgr_messages.mes @@ -20,6 +20,13 @@ An older version of the configuration database has been found, from which there was an automatic upgrade path to the current version. These changes are now applied, and no action from the administrator is necessary. +% CFGMGR_BAD_UPDATE_RESPONSE_FROM_MODULE Unable to parse response from module %1: %2 +The configuration manager sent a configuration update to a module, but +the module responded with an answer that could not be parsed. The answer +message appears to be invalid JSON data, or not decodable to a string. +This is likely to be a problem in the module in question. The update is +assumed to have failed, and will not be stored. + % CFGMGR_CC_SESSION_ERROR Error connecting to command channel: %1 The configuration manager daemon was unable to connect to the messaging system. The most likely cause is that msgq is not running. 
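The new "get_statistics_spec" command handled above mirrors "get_commands_spec"; a minimal sketch of how another module could query it over the command channel (the calling code shown here is not part of this patch):

    seq = session.group_sendmsg(
        isc.config.ccsession.create_command("get_statistics_spec"),
        "ConfigManager")
    answer, env = session.group_recvmsg(False, seq)
    rcode, stats_specs = isc.config.ccsession.parse_answer(answer)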
diff --git a/src/lib/python/isc/config/config_data.py b/src/lib/python/isc/config/config_data.py index 1efe4a9849..fabd37d54b 100644 --- a/src/lib/python/isc/config/config_data.py +++ b/src/lib/python/isc/config/config_data.py @@ -145,6 +145,8 @@ def _find_spec_part_single(cur_spec, id_part): return cur_spec['list_item_spec'] # not found raise isc.cc.data.DataNotFoundError(id + " not found") + elif type(cur_spec) == dict and 'named_set_item_spec' in cur_spec.keys(): + return cur_spec['named_set_item_spec'] elif type(cur_spec) == list: for cur_spec_item in cur_spec: if cur_spec_item['item_name'] == id: @@ -191,11 +193,14 @@ def spec_name_list(spec, prefix="", recurse=False): result.extend(spec_name_list(map_el['map_item_spec'], prefix + map_el['item_name'], recurse)) else: result.append(prefix + name) + elif 'named_set_item_spec' in spec: + # we added a '/' above, but in this one case we don't want it + result.append(prefix[:-1]) else: for name in spec: result.append(prefix + name + "/") if recurse: - result.extend(spec_name_list(spec[name],name, recurse)) + result.extend(spec_name_list(spec[name], name, recurse)) elif type(spec) == list: for list_el in spec: if 'item_name' in list_el: @@ -207,7 +212,7 @@ def spec_name_list(spec, prefix="", recurse=False): else: raise ConfigDataError("Bad specification") else: - raise ConfigDataError("Bad specication") + raise ConfigDataError("Bad specification") return result class ConfigData: @@ -255,7 +260,7 @@ class ConfigData: def get_local_config(self): """Returns the non-default config values in a dict""" - return self.data; + return self.data def get_item_list(self, identifier = None, recurse = False): """Returns a list of strings containing the full identifiers of @@ -412,7 +417,39 @@ class MultiConfigData: item_id, list_indices = isc.cc.data.split_identifier_list_indices(id_part) id_list = module + "/" + id_prefix + "/" + item_id id_prefix += "/" + id_part - if list_indices is not None: + part_spec = find_spec_part(self._specifications[module].get_config_spec(), id_prefix) + if part_spec['item_type'] == 'named_set': + # For named sets, the identifier is partly defined + # by which values are actually present, and not + # purely by the specification. + # So if there is a part of the identifier left, + # we need to look up the value, then see if that + # contains the next part of the identifier we got + if len(id_parts) == 0: + if 'item_default' in part_spec: + return part_spec['item_default'] + else: + return None + id_part = id_parts.pop(0) + + named_set_value, type = self.get_value(id_list) + if id_part in named_set_value: + if len(id_parts) > 0: + # we are looking for the *default* value. 
+ # so if not present in here, we need to + # lookup the one from the spec + rest_of_id = "/".join(id_parts) + result = isc.cc.data.find_no_exc(named_set_value[id_part], rest_of_id) + if result is None: + spec_part = self.find_spec_part(identifier) + if 'item_default' in spec_part: + return spec_part['item_default'] + return result + else: + return named_set_value[id_part] + else: + return None + elif list_indices is not None: # there's actually two kinds of default here for # lists; they can have a default value (like an # empty list), but their elements can also have @@ -449,7 +486,12 @@ class MultiConfigData: spec = find_spec_part(self._specifications[module].get_config_spec(), id) if 'item_default' in spec: - return spec['item_default'] + # one special case, named_set + if spec['item_type'] == 'named_set': + print("is " + id_part + " in named set?") + return spec['item_default'] + else: + return spec['item_default'] else: return None @@ -493,7 +535,7 @@ class MultiConfigData: spec_part_list = spec_part['list_item_spec'] list_value, status = self.get_value(identifier) if list_value is None: - raise isc.cc.data.DataNotFoundError(identifier) + raise isc.cc.data.DataNotFoundError(identifier + " not found") if type(list_value) != list: # the identifier specified a single element @@ -509,12 +551,38 @@ class MultiConfigData: for i in range(len(list_value)): self._append_value_item(result, spec_part_list, "%s[%d]" % (identifier, i), all) elif item_type == "map": + value, status = self.get_value(identifier) # just show the specific contents of a map, we are # almost never interested in just its name spec_part_map = spec_part['map_item_spec'] self._append_value_item(result, spec_part_map, identifier, all) + elif item_type == "named_set": + value, status = self.get_value(identifier) + + # show just the one entry, when either the map is empty, + # or when this is element is not requested specifically + if len(value.keys()) == 0: + entry = _create_value_map_entry(identifier, + item_type, + {}, status) + result.append(entry) + elif not first and not all: + entry = _create_value_map_entry(identifier, + item_type, + None, status) + result.append(entry) + else: + spec_part_named_set = spec_part['named_set_item_spec'] + for entry in value: + self._append_value_item(result, + spec_part_named_set, + identifier + "/" + entry, + all) else: value, status = self.get_value(identifier) + if status == self.NONE and not spec_part['item_optional']: + raise isc.cc.data.DataNotFoundError(identifier + " not found") + entry = _create_value_map_entry(identifier, item_type, value, status) @@ -569,7 +637,7 @@ class MultiConfigData: spec_part = spec_part['list_item_spec'] check_type(spec_part, value) else: - raise isc.cc.data.DataNotFoundError(identifier) + raise isc.cc.data.DataNotFoundError(identifier + " not found") # Since we do not support list diffs (yet?), we need to # copy the currently set list of items to _local_changes @@ -579,15 +647,50 @@ class MultiConfigData: cur_id_part = '/' for id_part in id_parts: id, list_indices = isc.cc.data.split_identifier_list_indices(id_part) + cur_value, status = self.get_value(cur_id_part + id) + # Check if the value was there in the first place + if status == MultiConfigData.NONE and cur_id_part != "/": + raise isc.cc.data.DataNotFoundError(id_part + + " not found in " + + cur_id_part) if list_indices is not None: - cur_list, status = self.get_value(cur_id_part + id) + # And check if we don't set something outside of any + # list + cur_list = cur_value + for list_index in 
list_indices: + if list_index >= len(cur_list): + raise isc.cc.data.DataNotFoundError("No item " + + str(list_index) + " in " + id_part) + else: + cur_list = cur_list[list_index] if status != MultiConfigData.LOCAL: isc.cc.data.set(self._local_changes, cur_id_part + id, - cur_list) + cur_value) cur_id_part = cur_id_part + id_part + "/" isc.cc.data.set(self._local_changes, identifier, value) - + + def _get_list_items(self, item_name): + """This method is used in get_config_item_list, to add list + indices and named_set names to the completion list. If + the given item_name is for a list or named_set, it'll + return a list of those (appended to item_name), otherwise + the list will only contain the item_name itself.""" + spec_part = self.find_spec_part(item_name) + if 'item_type' in spec_part and \ + spec_part['item_type'] == 'named_set': + subslash = "" + if spec_part['named_set_item_spec']['item_type'] == 'map' or\ + spec_part['named_set_item_spec']['item_type'] == 'named_set': + subslash = "/" + values, status = self.get_value(item_name) + if len(values) > 0: + return [ item_name + "/" + v + subslash for v in values.keys() ] + else: + return [ item_name ] + else: + return [ item_name ] + def get_config_item_list(self, identifier = None, recurse = False): """Returns a list of strings containing the item_names of the child items at the given identifier. If no identifier is @@ -598,7 +701,11 @@ class MultiConfigData: if identifier.startswith("/"): identifier = identifier[1:] spec = self.find_spec_part(identifier) - return spec_name_list(spec, identifier + "/", recurse) + spec_list = spec_name_list(spec, identifier + "/", recurse) + result_list = [] + for spec_name in spec_list: + result_list.extend(self._get_list_items(spec_name)) + return result_list else: if recurse: id_list = [] diff --git a/src/lib/python/isc/config/config_messages.mes b/src/lib/python/isc/config/config_messages.mes new file mode 100644 index 0000000000..c52efb4301 --- /dev/null +++ b/src/lib/python/isc/config/config_messages.mes @@ -0,0 +1,33 @@ +# Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +# +# Permission to use, copy, modify, and/or distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +# AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +# PERFORMANCE OF THIS SOFTWARE. + +# No namespace declaration - these constants go in the global namespace +# of the config_messages python module. + +# since these messages are for the python config library, care must +# be taken that names do not conflict with the messages from the c++ +# config library. A checker script should verify that, but we do not +# have that at this moment. So when adding a message, make sure that +# the name is not already used in src/lib/config/config_messages.mes + +% CONFIG_LOG_CONFIG_ERRORS error(s) in logging configuration: %1 +There was a logging configuration update, but the internal validator +for logging configuration found that it contained errors. The errors +are shown, and the update is ignored. 
+ +% CONFIG_GET_FAILED error getting configuration from cfgmgr: %1 +The configuration manager returned an error response when the module +requested its configuration. The full error message answer from the +configuration manager is appended to the log error. + diff --git a/src/lib/python/isc/config/module_spec.py b/src/lib/python/isc/config/module_spec.py index 61711494de..b79f928237 100644 --- a/src/lib/python/isc/config/module_spec.py +++ b/src/lib/python/isc/config/module_spec.py @@ -23,6 +23,7 @@ import json import sys +import time import isc.cc.data @@ -91,7 +92,7 @@ class ModuleSpec: return _validate_spec_list(data_def, full, data, errors) else: # no spec, always bad - if errors != None: + if errors is not None: errors.append("No config_data specification") return False @@ -117,6 +118,26 @@ class ModuleSpec: return False + def validate_statistics(self, full, stat, errors = None): + """Check whether the given piece of data conforms to this + data definition. If so, it returns True. If not, it will + return false. If errors is given, and is an array, a string + describing the error will be appended to it. The current + version stops as soon as there is one error so this list + will not be exhaustive. If 'full' is true, it also errors on + non-optional missing values. Set this to False if you want to + validate only a part of a statistics tree (like a list of + non-default values). Also it checks 'item_format' in case + of time""" + stat_spec = self.get_statistics_spec() + if stat_spec is not None: + return _validate_spec_list(stat_spec, full, stat, errors) + else: + # no spec, always bad + if errors is not None: + errors.append("No statistics specification") + return False + def get_module_name(self): """Returns a string containing the name of the module as specified by the specification given at __init__()""" @@ -152,6 +173,14 @@ class ModuleSpec: else: return None + def get_statistics_spec(self): + """Returns a dict representation of the statistics part of the + specification, or None if there is none.""" + if 'statistics' in self._module_spec: + return self._module_spec['statistics'] + else: + return None + def __str__(self): """Returns a string representation of the full specification""" return self._module_spec.__str__() @@ -160,8 +189,9 @@ def _check(module_spec): """Checks the full specification. This is a dict that contains the element "module_spec", which is in itself a dict that must contain at least a "module_name" (string) and optionally - a "config_data" and a "commands" element, both of which are lists - of dicts. Raises a ModuleSpecError if there is a problem.""" + a "config_data", a "commands" and a "statistics" element, all + of which are lists of dicts. 
Raises a ModuleSpecError if there + is a problem.""" if type(module_spec) != dict: raise ModuleSpecError("data specification not a dict") if "module_name" not in module_spec: @@ -173,6 +203,8 @@ def _check(module_spec): _check_config_spec(module_spec["config_data"]) if "commands" in module_spec: _check_command_spec(module_spec["commands"]) + if "statistics" in module_spec: + _check_statistics_spec(module_spec["statistics"]) def _check_config_spec(config_data): # config data is a list of items represented by dicts that contain @@ -229,7 +261,7 @@ def _check_item_spec(config_item): item_type = config_item["item_type"] if type(item_type) != str: raise ModuleSpecError("item_type in " + item_name + " is not a string: " + str(type(item_type))) - if item_type not in ["integer", "real", "boolean", "string", "list", "map", "any"]: + if item_type not in ["integer", "real", "boolean", "string", "list", "map", "named_set", "any"]: raise ModuleSpecError("unknown item_type in " + item_name + ": " + item_type) if "item_optional" in config_item: if type(config_item["item_optional"]) != bool: @@ -263,39 +295,96 @@ def _check_item_spec(config_item): if type(map_item) != dict: raise ModuleSpecError("map_item_spec element is not a dict") _check_item_spec(map_item) + if 'item_format' in config_item and 'item_default' in config_item: + item_format = config_item["item_format"] + item_default = config_item["item_default"] + if not _check_format(item_default, item_format): + raise ModuleSpecError( + "Wrong format for " + str(item_default) + " in " + str(item_name)) +def _check_statistics_spec(statistics): + # statistics is a list of items represented by dicts that contain + # things like "item_name", depending on the type they can have + # specific subitems + """Checks a list that contains the statistics part of the + specification. Raises a ModuleSpecError if there is a + problem.""" + if type(statistics) != list: + raise ModuleSpecError("statistics is of type " + str(type(statistics)) + + ", not a list of items") + for stat_item in statistics: + _check_item_spec(stat_item) + # Additionally checks if there are 'item_title' and + # 'item_description' + for item in [ 'item_title', 'item_description' ]: + if item not in stat_item: + raise ModuleSpecError("no " + item + " in statistics item") + +def _check_format(value, format_name): + """Check if specified value and format are correct. 
Return True if + is is correct.""" + # TODO: should be added other format types if necessary + time_formats = { 'date-time' : "%Y-%m-%dT%H:%M:%SZ", + 'date' : "%Y-%m-%d", + 'time' : "%H:%M:%S" } + for fmt in time_formats: + if format_name == fmt: + try: + # reverse check + return value == time.strftime( + time_formats[fmt], + time.strptime(value, time_formats[fmt])) + except (ValueError, TypeError): + break + return False def _validate_type(spec, value, errors): """Returns true if the value is of the correct type given the specification""" data_type = spec['item_type'] if data_type == "integer" and type(value) != int: - if errors != None: + if errors is not None: errors.append(str(value) + " should be an integer") return False elif data_type == "real" and type(value) != float: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a real") return False elif data_type == "boolean" and type(value) != bool: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a boolean") return False elif data_type == "string" and type(value) != str: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a string") return False elif data_type == "list" and type(value) != list: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a list") return False elif data_type == "map" and type(value) != dict: + if errors is not None: + errors.append(str(value) + " should be a map") + return False + elif data_type == "named_set" and type(value) != dict: if errors != None: errors.append(str(value) + " should be a map") return False else: return True +def _validate_format(spec, value, errors): + """Returns true if the value is of the correct format given the + specification. 
And also return true if no 'item_format'""" + if "item_format" in spec: + item_format = spec['item_format'] + if not _check_format(value, item_format): + if errors is not None: + errors.append("format type of " + str(value) + + " should be " + item_format) + return False + return True + def _validate_item(spec, full, data, errors): if not _validate_type(spec, data, errors): return False @@ -304,12 +393,24 @@ def _validate_item(spec, full, data, errors): for data_el in data: if not _validate_type(list_spec, data_el, errors): return False + if not _validate_format(list_spec, data_el, errors): + return False if list_spec['item_type'] == "map": if not _validate_item(list_spec, full, data_el, errors): return False elif type(data) == dict: - if not _validate_spec_list(spec['map_item_spec'], full, data, errors): - return False + if 'map_item_spec' in spec: + if not _validate_spec_list(spec['map_item_spec'], full, data, errors): + return False + else: + named_set_spec = spec['named_set_item_spec'] + for data_el in data.values(): + if not _validate_type(named_set_spec, data_el, errors): + return False + if not _validate_item(named_set_spec, full, data_el, errors): + return False + elif not _validate_format(spec, data, errors): + return False return True def _validate_spec(spec, full, data, errors): @@ -321,7 +422,7 @@ def _validate_spec(spec, full, data, errors): elif item_name in data: return _validate_item(spec, full, data[item_name], errors) elif full and not item_optional: - if errors != None: + if errors is not None: errors.append("non-optional item " + item_name + " missing") return False else: @@ -346,7 +447,7 @@ def _validate_spec_list(module_spec, full, data, errors): if spec_item["item_name"] == item_name: found = True if not found and item_name != "version": - if errors != None: + if errors is not None: errors.append("unknown item " + item_name) validated = False return validated diff --git a/src/lib/python/isc/config/tests/Makefile.am b/src/lib/python/isc/config/tests/Makefile.am index 47ccc41af3..6670ee7254 100644 --- a/src/lib/python/isc/config/tests/Makefile.am +++ b/src/lib/python/isc/config/tests/Makefile.am @@ -8,7 +8,7 @@ EXTRA_DIST += unittest_fakesession.py # required by loadable python modules. 
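
As an aside on the new 'item_format' support above: the check boils down to a strftime/strptime round trip against a small table of time formats. A minimal self-contained sketch of the same idea (the function and table names here are illustrative, not part of the module's API):

    import time

    TIME_FORMATS = {'date-time': "%Y-%m-%dT%H:%M:%SZ",
                    'date':      "%Y-%m-%d",
                    'time':      "%H:%M:%S"}

    def check_format(value, format_name):
        """Return True if value matches the named time format."""
        fmt = TIME_FORMATS.get(format_name)
        if fmt is None:
            return False
        try:
            # Round trip: parse, re-render and compare; leftover text or an
            # out-of-range field makes strptime() raise and the check fail.
            return value == time.strftime(fmt, time.strptime(value, fmt))
        except (ValueError, TypeError):
            return False

    check_format('2011-05-27T19:42:57Z', 'date-time')   # True
    check_format('19:42:57Z', 'time')                    # False: trailing "Z"
    check_format('2011-13-99', 'date')                   # False: invalid date
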
LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS @@ -21,7 +21,7 @@ endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/lib/python/isc/config \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/lib/python/isc/config \ B10_TEST_PLUGIN_DIR=$(abs_top_srcdir)/src/bin/cfgmgr/plugins \ CONFIG_TESTDATA_PATH=$(abs_top_srcdir)/src/lib/config/tests/testdata \ CONFIG_WR_TESTDATA_PATH=$(abs_top_builddir)/src/lib/config/tests/testdata \ diff --git a/src/lib/python/isc/config/tests/ccsession_test.py b/src/lib/python/isc/config/tests/ccsession_test.py index 830cbd762d..351c8e666d 100644 --- a/src/lib/python/isc/config/tests/ccsession_test.py +++ b/src/lib/python/isc/config/tests/ccsession_test.py @@ -23,6 +23,7 @@ from isc.config.ccsession import * from isc.config.config_data import BIND10_CONFIG_DATA_VERSION from unittest_fakesession import FakeModuleCCSession, WouldBlockForever import bind10_config +import isc.log class TestHelperFunctions(unittest.TestCase): def test_parse_answer(self): @@ -107,8 +108,11 @@ class TestModuleCCSession(unittest.TestCase): def spec_file(self, file): return self.data_path + os.sep + file - def create_session(self, spec_file_name, config_handler = None, command_handler = None, cc_session = None): - return ModuleCCSession(self.spec_file(spec_file_name), config_handler, command_handler, cc_session) + def create_session(self, spec_file_name, config_handler = None, + command_handler = None, cc_session = None): + return ModuleCCSession(self.spec_file(spec_file_name), + config_handler, command_handler, + cc_session, False) def test_init(self): fake_session = FakeModuleCCSession() @@ -691,6 +695,12 @@ class TestUIModuleCCSession(unittest.TestCase): fake_conn.set_get_answer('/config_data', { 'version': BIND10_CONFIG_DATA_VERSION }) return UIModuleCCSession(fake_conn) + def create_uccs_named_set(self, fake_conn): + module_spec = isc.config.module_spec_from_file(self.spec_file("spec32.spec")) + fake_conn.set_get_answer('/module_spec', { module_spec.get_module_name(): module_spec.get_full_spec()}) + fake_conn.set_get_answer('/config_data', { 'version': BIND10_CONFIG_DATA_VERSION }) + return UIModuleCCSession(fake_conn) + def test_init(self): fake_conn = fakeUIConn() fake_conn.set_get_answer('/module_spec', {}) @@ -711,12 +721,14 @@ class TestUIModuleCCSession(unittest.TestCase): def test_add_remove_value(self): fake_conn = fakeUIConn() uccs = self.create_uccs2(fake_conn) + self.assertRaises(isc.cc.data.DataNotFoundError, uccs.add_value, 1, "a") self.assertRaises(isc.cc.data.DataNotFoundError, uccs.add_value, "no_such_item", "a") 
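
For orientation while reading the named_set tests that follow: spec32.spec itself is not included in this diff, but from the expected default ({'a': 1, 'b': 2}) and the new 'named_set_item_spec' handling in module_spec.py, its named_set entry plausibly looks something like this sketch (a hypothetical reconstruction, not the real test data file):

    # Hypothetical reconstruction of the named_set entry exercised by these tests.
    named_set_item = {
        "item_name": "named_set_item",
        "item_type": "named_set",
        "item_optional": False,
        "item_default": {"a": 1, "b": 2},
        # Spec applied to every value in the set; the keys are free-form strings.
        "named_set_item_spec": {
            "item_name": "named_set_element",
            "item_type": "integer",
            "item_optional": False,
            "item_default": 3
        }
    }
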
self.assertRaises(isc.cc.data.DataNotFoundError, uccs.add_value, "Spec2/item1", "a") self.assertRaises(isc.cc.data.DataNotFoundError, uccs.remove_value, 1, "a") self.assertRaises(isc.cc.data.DataNotFoundError, uccs.remove_value, "no_such_item", "a") self.assertRaises(isc.cc.data.DataNotFoundError, uccs.remove_value, "Spec2/item1", "a") + self.assertEqual({}, uccs._local_changes) uccs.add_value("Spec2/item5", "foo") self.assertEqual({'Spec2': {'item5': ['a', 'b', 'foo']}}, uccs._local_changes) @@ -726,10 +738,37 @@ class TestUIModuleCCSession(unittest.TestCase): uccs.remove_value("Spec2/item5", "foo") uccs.add_value("Spec2/item5", "foo") self.assertEqual({'Spec2': {'item5': ['foo']}}, uccs._local_changes) - uccs.add_value("Spec2/item5", "foo") + self.assertRaises(isc.cc.data.DataAlreadyPresentError, + uccs.add_value, "Spec2/item5", "foo") self.assertEqual({'Spec2': {'item5': ['foo']}}, uccs._local_changes) + self.assertRaises(isc.cc.data.DataNotFoundError, + uccs.remove_value, "Spec2/item5[123]", None) uccs.remove_value("Spec2/item5[0]", None) self.assertEqual({'Spec2': {'item5': []}}, uccs._local_changes) + uccs.add_value("Spec2/item5", None); + self.assertEqual({'Spec2': {'item5': ['']}}, uccs._local_changes) + + def test_add_remove_value_named_set(self): + fake_conn = fakeUIConn() + uccs = self.create_uccs_named_set(fake_conn) + value, status = uccs.get_value("/Spec32/named_set_item") + self.assertEqual({'a': 1, 'b': 2}, value) + uccs.add_value("/Spec32/named_set_item", "foo") + value, status = uccs.get_value("/Spec32/named_set_item") + self.assertEqual({'a': 1, 'b': 2, 'foo': 3}, value) + + uccs.remove_value("/Spec32/named_set_item", "a") + uccs.remove_value("/Spec32/named_set_item", "foo") + value, status = uccs.get_value("/Spec32/named_set_item") + self.assertEqual({'b': 2}, value) + + self.assertRaises(isc.cc.data.DataNotFoundError, + uccs.set_value, + "/Spec32/named_set_item/no_such_item", + 4) + self.assertRaises(isc.cc.data.DataNotFoundError, + uccs.remove_value, "/Spec32/named_set_item", + "no_such_item") def test_commit(self): fake_conn = fakeUIConn() @@ -739,5 +778,6 @@ class TestUIModuleCCSession(unittest.TestCase): uccs.commit() if __name__ == '__main__': + isc.log.init("bind10") unittest.main() diff --git a/src/lib/python/isc/config/tests/cfgmgr_test.py b/src/lib/python/isc/config/tests/cfgmgr_test.py index 0a9e2d3e44..eacc425dd5 100644 --- a/src/lib/python/isc/config/tests/cfgmgr_test.py +++ b/src/lib/python/isc/config/tests/cfgmgr_test.py @@ -219,6 +219,25 @@ class TestConfigManager(unittest.TestCase): commands_spec = self.cm.get_commands_spec('Spec2') self.assertEqual(commands_spec['Spec2'], module_spec.get_commands_spec()) + def test_get_statistics_spec(self): + statistics_spec = self.cm.get_statistics_spec() + self.assertEqual(statistics_spec, {}) + module_spec = isc.config.module_spec.module_spec_from_file(self.data_path + os.sep + "spec1.spec") + self.assert_(module_spec.get_module_name() not in self.cm.module_specs) + self.cm.set_module_spec(module_spec) + self.assert_(module_spec.get_module_name() in self.cm.module_specs) + statistics_spec = self.cm.get_statistics_spec() + self.assertEqual(statistics_spec, { 'Spec1': None }) + self.cm.remove_module_spec('Spec1') + module_spec = isc.config.module_spec.module_spec_from_file(self.data_path + os.sep + "spec2.spec") + self.assert_(module_spec.get_module_name() not in self.cm.module_specs) + self.cm.set_module_spec(module_spec) + self.assert_(module_spec.get_module_name() in self.cm.module_specs) + statistics_spec = 
self.cm.get_statistics_spec() + self.assertEqual(statistics_spec['Spec2'], module_spec.get_statistics_spec()) + statistics_spec = self.cm.get_statistics_spec('Spec2') + self.assertEqual(statistics_spec['Spec2'], module_spec.get_statistics_spec()) + def test_read_config(self): self.assertEqual(self.cm.config.data, {'version': config_data.BIND10_CONFIG_DATA_VERSION}) self.cm.read_config() @@ -241,6 +260,7 @@ class TestConfigManager(unittest.TestCase): self._handle_msg_helper("", { 'result': [ 1, 'Unknown message format: ']}) self._handle_msg_helper({ "command": [ "badcommand" ] }, { 'result': [ 1, "Unknown command: badcommand"]}) self._handle_msg_helper({ "command": [ "get_commands_spec" ] }, { 'result': [ 0, {} ]}) + self._handle_msg_helper({ "command": [ "get_statistics_spec" ] }, { 'result': [ 0, {} ]}) self._handle_msg_helper({ "command": [ "get_module_spec" ] }, { 'result': [ 0, {} ]}) self._handle_msg_helper({ "command": [ "get_module_spec", { "module_name": "Spec2" } ] }, { 'result': [ 0, {} ]}) #self._handle_msg_helper({ "command": [ "get_module_spec", { "module_name": "nosuchmodule" } ] }, @@ -329,6 +349,7 @@ class TestConfigManager(unittest.TestCase): { "module_name" : "Spec2" } ] }, { 'result': [ 0, self.spec.get_full_spec() ] }) self._handle_msg_helper({ "command": [ "get_commands_spec" ] }, { 'result': [ 0, { self.spec.get_module_name(): self.spec.get_commands_spec() } ]}) + self._handle_msg_helper({ "command": [ "get_statistics_spec" ] }, { 'result': [ 0, { self.spec.get_module_name(): self.spec.get_statistics_spec() } ]}) # re-add this once we have new way to propagate spec changes (1 instead of the current 2 messages) #self.assertEqual(len(self.fake_session.message_queue), 2) # the name here is actually wrong (and hardcoded), but needed in the current version @@ -450,6 +471,7 @@ class TestConfigManager(unittest.TestCase): def test_run(self): self.fake_session.group_sendmsg({ "command": [ "get_commands_spec" ] }, "ConfigManager") + self.fake_session.group_sendmsg({ "command": [ "get_statistics_spec" ] }, "ConfigManager") self.fake_session.group_sendmsg({ "command": [ "shutdown" ] }, "ConfigManager") self.cm.run() pass diff --git a/src/lib/python/isc/config/tests/config_data_test.py b/src/lib/python/isc/config/tests/config_data_test.py index fc1bffaef1..0dd441ddcb 100644 --- a/src/lib/python/isc/config/tests/config_data_test.py +++ b/src/lib/python/isc/config/tests/config_data_test.py @@ -236,6 +236,7 @@ class TestConfigData(unittest.TestCase): value, default = self.cd.get_value("item6/value2") self.assertEqual(None, value) self.assertEqual(False, default) + self.assertRaises(isc.cc.data.DataNotFoundError, self.cd.get_value, "item6/no_such_item") def test_get_default_value(self): self.assertEqual(1, self.cd.get_default_value("item1")) @@ -360,7 +361,7 @@ class TestMultiConfigData(unittest.TestCase): def test_get_current_config(self): cf = { 'module1': { 'item1': 2, 'item2': True } } - self.mcd._set_current_config(cf); + self.mcd._set_current_config(cf) self.assertEqual(cf, self.mcd.get_current_config()) def test_get_local_changes(self): @@ -421,6 +422,17 @@ class TestMultiConfigData(unittest.TestCase): value = self.mcd.get_default_value("Spec2/no_such_item/asdf") self.assertEqual(None, value) + module_spec = isc.config.module_spec_from_file(self.data_path + os.sep + "spec32.spec") + self.mcd.set_specification(module_spec) + value = self.mcd.get_default_value("Spec32/named_set_item") + self.assertEqual({ 'a': 1, 'b': 2}, value) + value = 
self.mcd.get_default_value("Spec32/named_set_item/a") + self.assertEqual(1, value) + value = self.mcd.get_default_value("Spec32/named_set_item/b") + self.assertEqual(2, value) + value = self.mcd.get_default_value("Spec32/named_set_item/no_such_item") + self.assertEqual(None, value) + def test_get_value(self): module_spec = isc.config.module_spec_from_file(self.data_path + os.sep + "spec2.spec") self.mcd.set_specification(module_spec) @@ -544,6 +556,29 @@ class TestMultiConfigData(unittest.TestCase): maps = self.mcd.get_value_maps("/Spec22/value9") self.assertEqual(expected, maps) + def test_get_value_maps_named_set(self): + module_spec = isc.config.module_spec_from_file(self.data_path + os.sep + "spec32.spec") + self.mcd.set_specification(module_spec) + maps = self.mcd.get_value_maps() + self.assertEqual([{'default': False, 'type': 'module', + 'name': 'Spec32', 'value': None, + 'modified': False}], maps) + maps = self.mcd.get_value_maps("/Spec32/named_set_item") + self.assertEqual([{'default': True, 'type': 'integer', + 'name': 'Spec32/named_set_item/a', + 'value': 1, 'modified': False}, + {'default': True, 'type': 'integer', + 'name': 'Spec32/named_set_item/b', + 'value': 2, 'modified': False}], maps) + maps = self.mcd.get_value_maps("/Spec32/named_set_item/a") + self.assertEqual([{'default': True, 'type': 'integer', + 'name': 'Spec32/named_set_item/a', + 'value': 1, 'modified': False}], maps) + maps = self.mcd.get_value_maps("/Spec32/named_set_item/b") + self.assertEqual([{'default': True, 'type': 'integer', + 'name': 'Spec32/named_set_item/b', + 'value': 2, 'modified': False}], maps) + def test_set_value(self): module_spec = isc.config.module_spec_from_file(self.data_path + os.sep + "spec2.spec") self.mcd.set_specification(module_spec) @@ -582,6 +617,24 @@ class TestMultiConfigData(unittest.TestCase): config_items = self.mcd.get_config_item_list("Spec2", True) self.assertEqual(['Spec2/item1', 'Spec2/item2', 'Spec2/item3', 'Spec2/item4', 'Spec2/item5', 'Spec2/item6/value1', 'Spec2/item6/value2'], config_items) + def test_get_config_item_list_named_set(self): + config_items = self.mcd.get_config_item_list() + self.assertEqual([], config_items) + module_spec = isc.config.module_spec_from_file(self.data_path + os.sep + "spec32.spec") + self.mcd.set_specification(module_spec) + config_items = self.mcd.get_config_item_list() + self.assertEqual(['Spec32'], config_items) + config_items = self.mcd.get_config_item_list(None, False) + self.assertEqual(['Spec32'], config_items) + config_items = self.mcd.get_config_item_list(None, True) + self.assertEqual(['Spec32/named_set_item'], config_items) + self.mcd.set_value('Spec32/named_set_item', { "aaaa": 4, "aabb": 5, "bbbb": 6}) + config_items = self.mcd.get_config_item_list("/Spec32/named_set_item", True) + self.assertEqual(['Spec32/named_set_item/aaaa', + 'Spec32/named_set_item/aabb', + 'Spec32/named_set_item/bbbb', + ], config_items) + if __name__ == '__main__': unittest.main() diff --git a/src/lib/python/isc/config/tests/module_spec_test.py b/src/lib/python/isc/config/tests/module_spec_test.py index a4dcdecd21..fc53d23221 100644 --- a/src/lib/python/isc/config/tests/module_spec_test.py +++ b/src/lib/python/isc/config/tests/module_spec_test.py @@ -81,6 +81,11 @@ class TestModuleSpec(unittest.TestCase): self.assertRaises(ModuleSpecError, self.read_spec_file, "spec20.spec") self.assertRaises(ModuleSpecError, self.read_spec_file, "spec21.spec") self.assertRaises(ModuleSpecError, self.read_spec_file, "spec26.spec") + self.assertRaises(ModuleSpecError, 
self.read_spec_file, "spec34.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec35.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec36.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec37.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec38.spec") def validate_data(self, specfile_name, datafile_name): dd = self.read_spec_file(specfile_name); @@ -98,6 +103,9 @@ class TestModuleSpec(unittest.TestCase): self.assertEqual(True, self.validate_data("spec22.spec", "data22_6.data")) self.assertEqual(True, self.validate_data("spec22.spec", "data22_7.data")) self.assertEqual(False, self.validate_data("spec22.spec", "data22_8.data")) + self.assertEqual(True, self.validate_data("spec32.spec", "data32_1.data")) + self.assertEqual(False, self.validate_data("spec32.spec", "data32_2.data")) + self.assertEqual(False, self.validate_data("spec32.spec", "data32_3.data")) def validate_command_params(self, specfile_name, datafile_name, cmd_name): dd = self.read_spec_file(specfile_name); @@ -120,6 +128,17 @@ class TestModuleSpec(unittest.TestCase): self.assertEqual(False, self.validate_command_params("spec27.spec", "data22_8.data", 'cmd1')) self.assertEqual(False, self.validate_command_params("spec27.spec", "data22_8.data", 'cmd2')) + def test_statistics_validation(self): + def _validate_stat(specfile_name, datafile_name): + dd = self.read_spec_file(specfile_name); + data_file = open(self.spec_file(datafile_name)) + data_str = data_file.read() + data = isc.cc.data.parse_value_str(data_str) + return dd.validate_statistics(True, data, []) + self.assertFalse(self.read_spec_file("spec1.spec").validate_statistics(True, None, None)); + self.assertTrue(_validate_stat("spec33.spec", "data33_1.data")) + self.assertFalse(_validate_stat("spec33.spec", "data33_2.data")) + def test_init(self): self.assertRaises(ModuleSpecError, ModuleSpec, 1) module_spec = isc.config.module_spec_from_file(self.spec_file("spec1.spec"), False) @@ -266,6 +285,80 @@ class TestModuleSpec(unittest.TestCase): } ) + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_datetime", + 'item_type': "string", + 'item_optional': False, + 'item_default': 1, + 'item_format': "date-time" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_date", + 'item_type': "string", + 'item_optional': False, + 'item_default': 1, + 'item_format': "date" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_time", + 'item_type': "string", + 'item_optional': False, + 'item_default': 1, + 'item_format': "time" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_datetime", + 'item_type': "string", + 'item_optional': False, + 'item_default': "2011-05-27T19:42:57Z", + 'item_format': "dummy-format" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_date", + 'item_type': "string", + 'item_optional': False, + 'item_default': "2011-05-27", + 'item_format': "dummy-format" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_time", + 'item_type': "string", + 'item_optional': False, + 'item_default': "19:42:57Z", + 'item_format': "dummy-format" + } + ) + + def test_check_format(self): + self.assertTrue(isc.config.module_spec._check_format('2011-05-27T19:42:57Z', 'date-time')) + 
self.assertTrue(isc.config.module_spec._check_format('2011-05-27', 'date')) + self.assertTrue(isc.config.module_spec._check_format('19:42:57', 'time')) + self.assertFalse(isc.config.module_spec._check_format('2011-05-27T19:42:57Z', 'dummy')) + self.assertFalse(isc.config.module_spec._check_format('2011-05-27', 'dummy')) + self.assertFalse(isc.config.module_spec._check_format('19:42:57', 'dummy')) + self.assertFalse(isc.config.module_spec._check_format('2011-13-99T99:99:99Z', 'date-time')) + self.assertFalse(isc.config.module_spec._check_format('2011-13-99', 'date')) + self.assertFalse(isc.config.module_spec._check_format('99:99:99', 'time')) + self.assertFalse(isc.config.module_spec._check_format('', 'date-time')) + self.assertFalse(isc.config.module_spec._check_format(None, 'date-time')) + self.assertFalse(isc.config.module_spec._check_format(None, None)) + # wrong date-time-type format not ending with "Z" + self.assertFalse(isc.config.module_spec._check_format('2011-05-27T19:42:57', 'date-time')) + # wrong date-type format ending with "T" + self.assertFalse(isc.config.module_spec._check_format('2011-05-27T', 'date')) + # wrong time-type format ending with "Z" + self.assertFalse(isc.config.module_spec._check_format('19:42:57Z', 'time')) + def test_validate_type(self): errors = [] self.assertEqual(True, isc.config.module_spec._validate_type({ 'item_type': 'integer' }, 1, errors)) @@ -303,6 +396,25 @@ class TestModuleSpec(unittest.TestCase): self.assertEqual(False, isc.config.module_spec._validate_type({ 'item_type': 'map' }, 1, errors)) self.assertEqual(['1 should be a map'], errors) + def test_validate_format(self): + errors = [] + self.assertEqual(True, isc.config.module_spec._validate_format({ 'item_format': 'date-time' }, "2011-05-27T19:42:57Z", errors)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date-time' }, "a", None)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date-time' }, "a", errors)) + self.assertEqual(['format type of a should be date-time'], errors) + + errors = [] + self.assertEqual(True, isc.config.module_spec._validate_format({ 'item_format': 'date' }, "2011-05-27", errors)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date' }, "a", None)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date' }, "a", errors)) + self.assertEqual(['format type of a should be date'], errors) + + errors = [] + self.assertEqual(True, isc.config.module_spec._validate_format({ 'item_format': 'time' }, "19:42:57", errors)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'time' }, "a", None)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'time' }, "a", errors)) + self.assertEqual(['format type of a should be time'], errors) + def test_validate_spec(self): spec = { 'item_name': "an_item", 'item_type': "string", diff --git a/src/lib/python/isc/datasrc/Makefile.am b/src/lib/python/isc/datasrc/Makefile.am index 46fb661ccb..07fb417b2a 100644 --- a/src/lib/python/isc/datasrc/Makefile.am +++ b/src/lib/python/isc/datasrc/Makefile.am @@ -1,10 +1,44 @@ SUBDIRS = . 
tests +# old data, should be removed in the near future once conversion is done +pythondir = $(pyexecdir)/isc/datasrc python_PYTHON = __init__.py master.py sqlite3_ds.py -pythondir = $(pyexecdir)/isc/datasrc + +# new data + +AM_CPPFLAGS = -I$(top_srcdir)/src/lib -I$(top_builddir)/src/lib +AM_CPPFLAGS += $(SQLITE_CFLAGS) + +python_LTLIBRARIES = datasrc.la +datasrc_la_SOURCES = datasrc.cc datasrc.h +datasrc_la_SOURCES += client_python.cc client_python.h +datasrc_la_SOURCES += iterator_python.cc iterator_python.h +datasrc_la_SOURCES += finder_python.cc finder_python.h +datasrc_la_SOURCES += updater_python.cc updater_python.h +# This is a temporary workaround for #1206, where the InMemoryClient has been +# moved to an ldopened library. We could add that library to LDADD, but that +# is nonportable. When #1207 is done this becomes moot anyway, and the +# specific workaround is not needed anymore, so we can then remove this +# line again. +datasrc_la_SOURCES += ${top_srcdir}/src/lib/datasrc/sqlite3_accessor.cc + +datasrc_la_CPPFLAGS = $(AM_CPPFLAGS) $(PYTHON_INCLUDES) +datasrc_la_CXXFLAGS = $(AM_CXXFLAGS) $(PYTHON_CXXFLAGS) +datasrc_la_LDFLAGS = $(PYTHON_LDFLAGS) +datasrc_la_LDFLAGS += -module +datasrc_la_LIBADD = $(top_builddir)/src/lib/datasrc/libdatasrc.la +datasrc_la_LIBADD += $(top_builddir)/src/lib/dns/python/libpydnspp.la +datasrc_la_LIBADD += $(PYTHON_LIB) +#datasrc_la_LIBADD += $(SQLITE_LIBS) + +EXTRA_DIST = client_inc.cc +EXTRA_DIST += finder_inc.cc +EXTRA_DIST += iterator_inc.cc +EXTRA_DIST += updater_inc.cc CLEANDIRS = __pycache__ clean-local: rm -rf $(CLEANDIRS) + diff --git a/src/lib/python/isc/datasrc/__init__.py b/src/lib/python/isc/datasrc/__init__.py index 0e1e481080..0b4ed989cf 100644 --- a/src/lib/python/isc/datasrc/__init__.py +++ b/src/lib/python/isc/datasrc/__init__.py @@ -1,2 +1,21 @@ -from isc.datasrc.master import * +import sys +import os + +# this setup is a temporary workaround to deal with the problem of +# having both 'normal' python modules and a wrapper module +# Once all programs use the new interface, we should remove the +# old, and the setup can be made similar to that of the log wrappers. +intree = False +for base in sys.path[:]: + datasrc_libdir = os.path.join(base, 'isc/datasrc/.libs') + if os.path.exists(datasrc_libdir): + sys.path.insert(0, datasrc_libdir) + intree = True + +if intree: + from datasrc import * +else: + from isc.datasrc.datasrc import * from isc.datasrc.sqlite3_ds import * +from isc.datasrc.master import * + diff --git a/src/lib/python/isc/datasrc/client_inc.cc b/src/lib/python/isc/datasrc/client_inc.cc new file mode 100644 index 0000000000..1eba4885b4 --- /dev/null +++ b/src/lib/python/isc/datasrc/client_inc.cc @@ -0,0 +1,157 @@ +namespace { + +const char* const DataSourceClient_doc = "\ +The base class of data source clients.\n\ +\n\ +This is the python wrapper for the abstract base class that defines\n\ +the common interface for various types of data source clients. A data\n\ +source client is a top level access point to a data source, allowing \n\ +various operations on the data source such as lookups, traversing or \n\ +updates. 
The client class itself has limited focus and delegates \n\ +the responsibility for these specific operations to other (c++) classes;\n\ +in general methods of this class act as factories of these other classes.\n\ +\n\ +- InMemoryClient: A client of a conceptual data source that stores all\n\ + necessary data in memory for faster lookups\n\ +- DatabaseClient: A client that uses a real database backend (such as\n\ + an SQL database). It would internally hold a connection to the\n\ + underlying database system.\n\ +\n\ +It is intentional that while the term these derived classes don't\n\ +contain \"DataSource\" unlike their base class. It's also noteworthy\n\ +that the naming of the base class is somewhat redundant because the\n\ +namespace datasrc would indicate that it's related to a data source.\n\ +The redundant naming comes from the observation that namespaces are\n\ +often omitted with using directives, in which case \"Client\" would be\n\ +too generic. On the other hand, concrete derived classes are generally\n\ +not expected to be referenced directly from other modules and\n\ +applications, so we'll give them more concise names such as\n\ +InMemoryClient. A single DataSourceClient object is expected to handle\n\ +only a single RR class even if the underlying data source contains\n\ +records for multiple RR classes. Likewise, (when we support views) a\n\ +DataSourceClient object is expected to handle only a single view.\n\ +\n\ +If the application uses multiple threads, each thread will need to\n\ +create and use a separate DataSourceClient. This is because some\n\ +database backend doesn't allow multiple threads to share the same\n\ +connection to the database.\n\ +\n\ +For a client using an in memory backend, this may result in having a\n\ +multiple copies of the same data in memory, increasing the memory\n\ +footprint substantially. Depending on how to support multiple CPU\n\ +cores for concurrent lookups on the same single data source (which is\n\ +not fully fixed yet, and for which multiple threads may be used), this\n\ +design may have to be revisited. This class (and therefore its derived\n\ +classes) are not copyable. This is because the derived classes would\n\ +generally contain attributes that are not easy to copy (such as a\n\ +large size of in memory data or a network connection to a database\n\ +server). In order to avoid a surprising disruption with a naive copy\n\ +it's prohibited explicitly. For the expected usage of the client\n\ +classes the restriction should be acceptable.\n\ +\n\ +Todo: This class is still not complete. It will need more factory\n\ +methods, e.g. 
for (re)loading a zone.\n\ +"; + +const char* const DataSourceClient_findZone_doc = "\ +find_zone(name) -> (code, ZoneFinder)\n\ +\n\ +Returns a ZoneFinder for a zone that best matches the given name.\n\ +\n\ +code: The result code of the operation (integer).\n\ +- DataSourceClient.SUCCESS: A zone that gives an exact match is found\n\ +- DataSourceClient.PARTIALMATCH: A zone whose origin is a super domain of name\n\ + is found (but there is no exact match)\n\ +- DataSourceClient.NOTFOUND: For all other cases.\n\ +ZoneFinder: ZoneFinder object for the found zone if one is found;\n\ +otherwise None.\n\ +\n\ +Any internal error will be raised as an isc.datasrc.Error exception\n\ +\n\ +Parameters:\n\ + name A domain name for which the search is performed.\n\ +\n\ +Return Value(s): A tuple containing a result value and a ZoneFinder object or\n\ +None\n\ +"; + +const char* const DataSourceClient_getIterator_doc = "\ +get_iterator(name) -> ZoneIterator\n\ +\n\ +Returns an iterator to the given zone.\n\ +\n\ +This allows for traversing the whole zone. The returned object can\n\ +provide the RRsets one by one.\n\ +\n\ +This throws isc.datasrc.Error when the zone does not exist in the\n\ +datasource, or when an internal error occurs.\n\ +\n\ +The default implementation throws isc.datasrc.NotImplemented. This allows for\n\ +easy and fast deployment of minimal custom data sources, where the\n\ +user/implementator doesn't have to care about anything else but the\n\ +actual queries. Also, in some cases, it isn't possible to traverse the\n\ +zone from logic point of view (eg. dynamically generated zone data).\n\ +\n\ +It is not fixed if a concrete implementation of this method can throw\n\ +anything else.\n\ +\n\ +Parameters:\n\ + isc.dns.Name The name of zone apex to be traversed. It doesn't do\n\ + nearest match as find_zone.\n\ +\n\ +Return Value(s): Pointer to the iterator.\n\ +"; + +const char* const DataSourceClient_getUpdater_doc = "\ +get_updater(name, replace) -> ZoneUpdater\n\ +\n\ +Return an updater to make updates to a specific zone.\n\ +\n\ +The RR class of the zone is the one that the client is expected to\n\ +handle (see the detailed description of this class).\n\ +\n\ +If the specified zone is not found via the client, a NULL pointer will\n\ +be returned; in other words a completely new zone cannot be created\n\ +using an updater. It must be created beforehand (even if it's an empty\n\ +placeholder) in a way specific to the underlying data source.\n\ +\n\ +Conceptually, the updater will trigger a separate transaction for\n\ +subsequent updates to the zone within the context of the updater (the\n\ +actual implementation of the \"transaction\" may vary for the specific\n\ +underlying data source). Until commit() is performed on the updater,\n\ +the intermediate updates won't affect the results of other methods\n\ +(and the result of the object's methods created by other factory\n\ +methods). Likewise, if the updater is destructed without performing\n\ +commit(), the intermediate updates will be effectively canceled and\n\ +will never affect other methods.\n\ +\n\ +If the underlying data source allows concurrent updates, this method\n\ +can be called multiple times while the previously returned updater(s)\n\ +are still active. In this case each updater triggers a different\n\ +\"transaction\". 
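
A hedged usage sketch of the find_zone() entry point documented above. The database path is hypothetical; as shown later in client_python.cc, the constructor is currently hard-wired to the sqlite3 backend and takes a single database file name.

    import isc.datasrc
    import isc.dns

    client = isc.datasrc.DataSourceClient("/tmp/zones.sqlite3")  # hypothetical path

    result, finder = client.find_zone(isc.dns.Name("www.example.org"))
    if result == isc.datasrc.DataSourceClient.SUCCESS:
        print("exact match for the zone")
    elif result == isc.datasrc.DataSourceClient.PARTIALMATCH:
        print("closest enclosing zone found")
    else:
        # DataSourceClient.NOTFOUND: no zone in this data source covers the name
        print("no covering zone found")
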
Normally it would be for different zones for such a\n\ +case as handling multiple incoming AXFR streams concurrently, but this\n\ +interface does not even prohibit an attempt of getting more than one\n\ +updater for the same zone, as long as the underlying data source\n\ +allows such an operation (and any conflict resolution is left to the\n\ +specific implementation).\n\ +\n\ +If replace is true, any existing RRs of the zone will be deleted on\n\ +successful completion of updates (after commit() on the updater); if\n\ +it's false, the existing RRs will be intact unless explicitly deleted\n\ +by delete_rrset() on the updater.\n\ +\n\ +A data source can be \"read only\" or can prohibit partial updates. In\n\ +such cases this method will result in an isc.datasrc.NotImplemented exception\n\ +unconditionally or when replace is false).\n\ +\n\ +Exceptions:\n\ + isc.datasrc. NotImplemented The underlying data source does not support\n\ + updates.\n\ + isc.datasrc.Error Internal error in the underlying data source.\n\ +\n\ +Parameters:\n\ + name The zone name to be updated\n\ + replace Whether to delete existing RRs before making updates\n\ +\n\ +"; +} // unnamed namespace diff --git a/src/lib/python/isc/datasrc/client_python.cc b/src/lib/python/isc/datasrc/client_python.cc new file mode 100644 index 0000000000..984eabf594 --- /dev/null +++ b/src/lib/python/isc/datasrc/client_python.cc @@ -0,0 +1,264 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// Enable this if you use s# variants with PyArg_ParseTuple(), see +// http://docs.python.org/py3k/c-api/arg.html#strings-and-buffers +//#define PY_SSIZE_T_CLEAN + +// Python.h needs to be placed at the head of the program file, see: +// http://docs.python.org/py3k/extending/extending.html#a-simple-example +#include + +#include + +#include +#include +#include +#include +#include + +#include +#include +#include + +#include "datasrc.h" +#include "client_python.h" +#include "finder_python.h" +#include "iterator_python.h" +#include "updater_python.h" +#include "client_inc.cc" + +using namespace std; +using namespace isc::util::python; +using namespace isc::dns::python; +using namespace isc::datasrc; +using namespace isc::datasrc::python; + +namespace { +// The s_* Class simply covers one instantiation of the object +class s_DataSourceClient : public PyObject { +public: + s_DataSourceClient() : cppobj(NULL) {}; + DataSourceClient* cppobj; +}; + +// Shortcut type which would be convenient for adding class variables safely. 
+typedef CPPPyObjectContainer + DataSourceClientContainer; + +PyObject* +DataSourceClient_findZone(PyObject* po_self, PyObject* args) { + s_DataSourceClient* const self = static_cast(po_self); + PyObject *name; + if (PyArg_ParseTuple(args, "O!", &name_type, &name)) { + try { + DataSourceClient::FindResult find_result( + self->cppobj->findZone(PyName_ToName(name))); + + result::Result r = find_result.code; + ZoneFinderPtr zfp = find_result.zone_finder; + // Use N instead of O so refcount isn't increased twice + return (Py_BuildValue("IN", r, createZoneFinderObject(zfp))); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } catch (...) { + PyErr_SetString(getDataSourceException("Error"), + "Unexpected exception"); + return (NULL); + } + } else { + return (NULL); + } +} + +PyObject* +DataSourceClient_getIterator(PyObject* po_self, PyObject* args) { + s_DataSourceClient* const self = static_cast(po_self); + PyObject *name_obj; + if (PyArg_ParseTuple(args, "O!", &name_type, &name_obj)) { + try { + return (createZoneIteratorObject( + self->cppobj->getIterator(PyName_ToName(name_obj)))); + } catch (const isc::NotImplemented& ne) { + PyErr_SetString(getDataSourceException("NotImplemented"), + ne.what()); + return (NULL); + } catch (const DataSourceError& dse) { + PyErr_SetString(getDataSourceException("Error"), dse.what()); + return (NULL); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } catch (...) { + PyErr_SetString(getDataSourceException("Error"), + "Unexpected exception"); + return (NULL); + } + } else { + return (NULL); + } +} + +PyObject* +DataSourceClient_getUpdater(PyObject* po_self, PyObject* args) { + s_DataSourceClient* const self = static_cast(po_self); + PyObject *name_obj; + PyObject *replace_obj; + if (PyArg_ParseTuple(args, "O!O", &name_type, &name_obj, &replace_obj) && + PyBool_Check(replace_obj)) { + bool replace = (replace_obj != Py_False); + try { + return (createZoneUpdaterObject( + self->cppobj->getUpdater(PyName_ToName(name_obj), + replace))); + } catch (const isc::NotImplemented& ne) { + PyErr_SetString(getDataSourceException("NotImplemented"), + ne.what()); + return (NULL); + } catch (const DataSourceError& dse) { + PyErr_SetString(getDataSourceException("Error"), dse.what()); + return (NULL); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } catch (...) { + PyErr_SetString(getDataSourceException("Error"), + "Unexpected exception"); + return (NULL); + } + } else { + return (NULL); + } +} + +// This list contains the actual set of functions we have in +// python. Each entry has +// 1. Python method name +// 2. Our static function here +// 3. Argument type +// 4. Documentation +PyMethodDef DataSourceClient_methods[] = { + { "find_zone", reinterpret_cast(DataSourceClient_findZone), + METH_VARARGS, DataSourceClient_findZone_doc }, + { "get_iterator", + reinterpret_cast(DataSourceClient_getIterator), METH_VARARGS, + DataSourceClient_getIterator_doc }, + { "get_updater", reinterpret_cast(DataSourceClient_getUpdater), + METH_VARARGS, DataSourceClient_getUpdater_doc }, + { NULL, NULL, 0, NULL } +}; + +int +DataSourceClient_init(s_DataSourceClient* self, PyObject* args) { + // TODO: we should use the factory function which hasn't been written + // yet. For now we hardcode the sqlite3 initialization, and pass it one + // string for the database file. 
(similar to how the 'old direct' + // sqlite3_ds code works) + try { + char* db_file_name; + if (PyArg_ParseTuple(args, "s", &db_file_name)) { + boost::shared_ptr sqlite3_accessor( + new SQLite3Accessor(db_file_name, isc::dns::RRClass::IN())); + self->cppobj = new DatabaseClient(isc::dns::RRClass::IN(), + sqlite3_accessor); + return (0); + } else { + return (-1); + } + + } catch (const exception& ex) { + const string ex_what = "Failed to construct DataSourceClient object: " + + string(ex.what()); + PyErr_SetString(getDataSourceException("Error"), ex_what.c_str()); + return (-1); + } catch (...) { + PyErr_SetString(PyExc_RuntimeError, + "Unexpected exception in constructing DataSourceClient"); + return (-1); + } + PyErr_SetString(PyExc_TypeError, + "Invalid arguments to DataSourceClient constructor"); + + return (-1); +} + +void +DataSourceClient_destroy(s_DataSourceClient* const self) { + delete self->cppobj; + self->cppobj = NULL; + Py_TYPE(self)->tp_free(self); +} + +} // end anonymous namespace + +namespace isc { +namespace datasrc { +namespace python { +// This defines the complete type for reflection in python and +// parsing of PyObject* to s_DataSourceClient +// Most of the functions are not actually implemented and NULL here. +PyTypeObject datasourceclient_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "datasrc.DataSourceClient", + sizeof(s_DataSourceClient), // tp_basicsize + 0, // tp_itemsize + reinterpret_cast(DataSourceClient_destroy),// tp_dealloc + NULL, // tp_print + NULL, // tp_getattr + NULL, // tp_setattr + NULL, // tp_reserved + NULL, // tp_repr + NULL, // tp_as_number + NULL, // tp_as_sequence + NULL, // tp_as_mapping + NULL, // tp_hash + NULL, // tp_call + NULL, // tp_str + NULL, // tp_getattro + NULL, // tp_setattro + NULL, // tp_as_buffer + Py_TPFLAGS_DEFAULT, // tp_flags + DataSourceClient_doc, + NULL, // tp_traverse + NULL, // tp_clear + NULL, // tp_richcompare + 0, // tp_weaklistoffset + NULL, // tp_iter + NULL, // tp_iternext + DataSourceClient_methods, // tp_methods + NULL, // tp_members + NULL, // tp_getset + NULL, // tp_base + NULL, // tp_dict + NULL, // tp_descr_get + NULL, // tp_descr_set + 0, // tp_dictoffset + reinterpret_cast(DataSourceClient_init),// tp_init + NULL, // tp_alloc + PyType_GenericNew, // tp_new + NULL, // tp_free + NULL, // tp_is_gc + NULL, // tp_bases + NULL, // tp_mro + NULL, // tp_cache + NULL, // tp_subclasses + NULL, // tp_weaklist + NULL, // tp_del + 0 // tp_version_tag +}; + +} // namespace python +} // namespace datasrc +} // namespace isc diff --git a/src/lib/python/isc/datasrc/client_python.h b/src/lib/python/isc/datasrc/client_python.h new file mode 100644 index 0000000000..b20fb6b4c7 --- /dev/null +++ b/src/lib/python/isc/datasrc/client_python.h @@ -0,0 +1,35 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_DATASRC_CLIENT_H +#define __PYTHON_DATASRC_CLIENT_H 1 + +#include + +namespace isc { +namespace datasrc { +class DataSourceClient; + +namespace python { + +extern PyTypeObject datasourceclient_type; + +} // namespace python +} // namespace datasrc +} // namespace isc +#endif // __PYTHON_DATASRC_CLIENT_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/python/isc/datasrc/datasrc.cc b/src/lib/python/isc/datasrc/datasrc.cc new file mode 100644 index 0000000000..4b0324a4d3 --- /dev/null +++ b/src/lib/python/isc/datasrc/datasrc.cc @@ -0,0 +1,225 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#define PY_SSIZE_T_CLEAN +#include +#include + +#include + +#include +#include +#include + +#include "datasrc.h" +#include "client_python.h" +#include "finder_python.h" +#include "iterator_python.h" +#include "updater_python.h" + +#include +#include + +using namespace isc::datasrc; +using namespace isc::datasrc::python; +using namespace isc::util::python; +using namespace isc::dns::python; + +namespace isc { +namespace datasrc { +namespace python { +PyObject* +getDataSourceException(const char* ex_name) { + PyObject* ex_obj = NULL; + + PyObject* datasrc_module = PyImport_AddModule("isc.datasrc"); + if (datasrc_module != NULL) { + PyObject* datasrc_dict = PyModule_GetDict(datasrc_module); + if (datasrc_dict != NULL) { + ex_obj = PyDict_GetItemString(datasrc_dict, ex_name); + } + } + + if (ex_obj == NULL) { + ex_obj = PyExc_RuntimeError; + } + return (ex_obj); +} + +} // end namespace python +} // end namespace datasrc +} // end namespace isc + +namespace { + +bool +initModulePart_DataSourceClient(PyObject* mod) { + // We initialize the static description object with PyType_Ready(), + // then add it to the module. This is not just a check! 
(leaving + // this out results in segmentation faults) + if (PyType_Ready(&datasourceclient_type) < 0) { + return (false); + } + void* dscp = &datasourceclient_type; + if (PyModule_AddObject(mod, "DataSourceClient", static_cast(dscp)) < 0) { + return (false); + } + Py_INCREF(&datasourceclient_type); + + addClassVariable(datasourceclient_type, "SUCCESS", + Py_BuildValue("I", result::SUCCESS)); + addClassVariable(datasourceclient_type, "EXIST", + Py_BuildValue("I", result::EXIST)); + addClassVariable(datasourceclient_type, "NOTFOUND", + Py_BuildValue("I", result::NOTFOUND)); + addClassVariable(datasourceclient_type, "PARTIALMATCH", + Py_BuildValue("I", result::PARTIALMATCH)); + + return (true); +} + +bool +initModulePart_ZoneFinder(PyObject* mod) { + // We initialize the static description object with PyType_Ready(), + // then add it to the module. This is not just a check! (leaving + // this out results in segmentation faults) + if (PyType_Ready(&zonefinder_type) < 0) { + return (false); + } + void* zip = &zonefinder_type; + if (PyModule_AddObject(mod, "ZoneFinder", static_cast(zip)) < 0) { + return (false); + } + Py_INCREF(&zonefinder_type); + + addClassVariable(zonefinder_type, "SUCCESS", + Py_BuildValue("I", ZoneFinder::SUCCESS)); + addClassVariable(zonefinder_type, "DELEGATION", + Py_BuildValue("I", ZoneFinder::DELEGATION)); + addClassVariable(zonefinder_type, "NXDOMAIN", + Py_BuildValue("I", ZoneFinder::NXDOMAIN)); + addClassVariable(zonefinder_type, "NXRRSET", + Py_BuildValue("I", ZoneFinder::NXRRSET)); + addClassVariable(zonefinder_type, "CNAME", + Py_BuildValue("I", ZoneFinder::CNAME)); + addClassVariable(zonefinder_type, "DNAME", + Py_BuildValue("I", ZoneFinder::DNAME)); + + addClassVariable(zonefinder_type, "FIND_DEFAULT", + Py_BuildValue("I", ZoneFinder::FIND_DEFAULT)); + addClassVariable(zonefinder_type, "FIND_GLUE_OK", + Py_BuildValue("I", ZoneFinder::FIND_GLUE_OK)); + addClassVariable(zonefinder_type, "FIND_DNSSEC", + Py_BuildValue("I", ZoneFinder::FIND_DNSSEC)); + + + return (true); +} + +bool +initModulePart_ZoneIterator(PyObject* mod) { + // We initialize the static description object with PyType_Ready(), + // then add it to the module. This is not just a check! (leaving + // this out results in segmentation faults) + if (PyType_Ready(&zoneiterator_type) < 0) { + return (false); + } + void* zip = &zoneiterator_type; + if (PyModule_AddObject(mod, "ZoneIterator", static_cast(zip)) < 0) { + return (false); + } + Py_INCREF(&zoneiterator_type); + + return (true); +} + +bool +initModulePart_ZoneUpdater(PyObject* mod) { + // We initialize the static description object with PyType_Ready(), + // then add it to the module. This is not just a check! 
(leaving + // this out results in segmentation faults) + if (PyType_Ready(&zoneupdater_type) < 0) { + return (false); + } + void* zip = &zoneupdater_type; + if (PyModule_AddObject(mod, "ZoneUpdater", static_cast(zip)) < 0) { + return (false); + } + Py_INCREF(&zoneupdater_type); + + return (true); +} + + +PyObject* po_DataSourceError; +PyObject* po_NotImplemented; + +PyModuleDef iscDataSrc = { + { PyObject_HEAD_INIT(NULL) NULL, 0, NULL}, + "datasrc", + "Python bindings for the classes in the isc::datasrc namespace.\n\n" + "These bindings are close match to the C++ API, but they are not complete " + "(some parts are not needed) and some are done in more python-like ways.", + -1, + NULL, + NULL, + NULL, + NULL, + NULL +}; + +} // end anonymous namespace + +PyMODINIT_FUNC +PyInit_datasrc(void) { + PyObject* mod = PyModule_Create(&iscDataSrc); + if (mod == NULL) { + return (NULL); + } + + if (!initModulePart_DataSourceClient(mod)) { + Py_DECREF(mod); + return (NULL); + } + + if (!initModulePart_ZoneFinder(mod)) { + Py_DECREF(mod); + return (NULL); + } + + if (!initModulePart_ZoneIterator(mod)) { + Py_DECREF(mod); + return (NULL); + } + + if (!initModulePart_ZoneUpdater(mod)) { + Py_DECREF(mod); + return (NULL); + } + + try { + po_DataSourceError = PyErr_NewException("isc.datasrc.Error", NULL, + NULL); + PyObjectContainer(po_DataSourceError).installToModule(mod, "Error"); + po_NotImplemented = PyErr_NewException("isc.datasrc.NotImplemented", + NULL, NULL); + PyObjectContainer(po_NotImplemented).installToModule(mod, + "NotImplemented"); + } catch (...) { + Py_DECREF(mod); + return (NULL); + } + + return (mod); +} diff --git a/src/lib/python/isc/datasrc/datasrc.h b/src/lib/python/isc/datasrc/datasrc.h new file mode 100644 index 0000000000..d82881b9ce --- /dev/null +++ b/src/lib/python/isc/datasrc/datasrc.h @@ -0,0 +1,50 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_DATASRC_H +#define __PYTHON_DATASRC_H 1 + +#include + +namespace isc { +namespace datasrc { +namespace python { + +// Return a Python exception object of the given name (ex_name) defined in +// the isc.datasrc.datasrc loadable module. +// +// Since the datasrc module is a different binary image and is loaded separately +// from the dns module, it would be very tricky to directly access to +// C/C++ symbols defined in that module. So we get access to these object +// using the Python interpretor through this wrapper function. +// +// The __init__.py file should ensure isc.datasrc has been loaded by the time +// whenever this function is called, and there shouldn't be any operation +// within this function that can fail (such as dynamic memory allocation), +// so this function should always succeed. 
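
Because the module installs isc.datasrc.Error and isc.datasrc.NotImplemented as real Python exception objects, callers can handle backend failures with ordinary try/except. A hedged sketch (the database file and zone name are hypothetical):

    import isc.datasrc
    import isc.dns

    client = isc.datasrc.DataSourceClient("/tmp/zones.sqlite3")  # hypothetical path
    try:
        iterator = client.get_iterator(isc.dns.Name("no-such-zone.example"))
    except isc.datasrc.NotImplemented:
        print("this data source does not support iteration")
    except isc.datasrc.Error as exc:
        print("zone missing or data source error:", exc)
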
Yet there may be an overlooked +// failure mode, perhaps due to a bug in the binding implementation, or +// due to invalid usage. As a last resort for such cases, this function +// returns PyExc_RuntimeError (a C binding of Python's RuntimeError) should +// it encounters an unexpected failure. +extern PyObject* getDataSourceException(const char* ex_name); + +} // namespace python +} // namespace datasrc +} // namespace isc + +#endif // __PYTHON_ACL_DNS_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/python/isc/datasrc/finder_inc.cc b/src/lib/python/isc/datasrc/finder_inc.cc new file mode 100644 index 0000000000..2b47d021d2 --- /dev/null +++ b/src/lib/python/isc/datasrc/finder_inc.cc @@ -0,0 +1,96 @@ +namespace { +const char* const ZoneFinder_doc = "\ +The base class to search a zone for RRsets.\n\ +\n\ +The ZoneFinder class is a wrapper for the c++ base class for representing an\n\ +object that performs DNS lookups in a specific zone accessible via a\n\ +data source. In general, different types of data sources (in-memory,\n\ +database-based, etc) define their own derived c++ classes of ZoneFinder,\n\ +implementing ways to retrieve the required data through the common\n\ +interfaces declared in the base class. Each concrete ZoneFinder object\n\ +is therefore (conceptually) associated with a specific zone of one\n\ +specific data source instance.\n\ +\n\ +The origin name and the RR class of the associated zone are available\n\ +via the get_origin() and get_class() methods, respectively.\n\ +\n\ +The most important method of this class is find(), which performs the\n\ +lookup for a given domain and type. See the description of the method\n\ +for details.\n\ +\n\ +It's not clear whether we should request that a zone finder form a\n\ +\"transaction\", that is, whether to ensure the finder is not\n\ +susceptible to changes made by someone else than the creator of the\n\ +finder. If we don't request that, for example, two different lookup\n\ +results for the same name and type can be different if other threads\n\ +or programs make updates to the zone between the lookups. We should\n\ +revisit this point as we gain more experiences.\n\ +\n\ +"; + +const char* const ZoneFinder_getOrigin_doc = "\ +get_origin() -> isc.dns.Name\n\ +\n\ +Return the origin name of the zone.\n\ +\n\ +"; + +const char* const ZoneFinder_getClass_doc = "\ +get_class() -> isc.dns.RRClass\n\ +\n\ +Return the RR class of the zone.\n\ +\n\ +"; + +const char* const ZoneFinder_find_doc = "\ +find(name, type, target=NULL, options=FIND_DEFAULT) -> (code, FindResult)\n\ +\n\ +Search the zone for a given pair of domain name and RR type.\n\ +\n\ +- If the search name belongs under a zone cut, it returns the code of\n\ + DELEGATION and the NS RRset at the zone cut.\n\ +- If there is no matching name, it returns the code of NXDOMAIN, and,\n\ + if DNSSEC is requested, the NSEC RRset that proves the non-\n\ + existence.\n\ +- If there is a matching name but no RRset of the search type, it\n\ + returns the code of NXRRSET, and, if DNSSEC is required, the NSEC\n\ + RRset for that name.\n\ +- If there is a CNAME RR of the searched name but there is no RR of\n\ + the searched type of the name (so this type is different from\n\ + CNAME), it returns the code of CNAME and that CNAME RR. 
Note that if\n\ + the searched RR type is CNAME, it is considered a successful match,\n\ + and the code of SUCCESS will be returned.\n\ +- If the search name matches a delegation point of DNAME, it returns\n\ + the code of DNAME and that DNAME RR.\n\ +- If the target is a list, all RRsets under the domain are inserted\n\ + there and SUCCESS (or NXDOMAIN, in case of empty domain) is returned\n\ + instead of normall processing. This is intended to handle ANY query.\n\ + : this behavior is controversial as we discussed in\n\ + https://lists.isc.org/pipermail/bind10-dev/2011-January/001918.html\n\ + We should revisit the interface before we heavily rely on it. The\n\ + options parameter specifies customized behavior of the search. Their\n\ + semantics is as follows:\n\ + (This feature is disable at this time)\n\ +- GLUE_OK Allow search under a zone cut. By default the search will\n\ + stop once it encounters a zone cut. If this option is specified it\n\ + remembers information about the highest zone cut and continues the\n\ + search until it finds an exact match for the given name or it\n\ + detects there is no exact match. If an exact match is found, RRsets\n\ + for that name are searched just like the normal case; otherwise, if\n\ + the search has encountered a zone cut, DELEGATION with the\n\ + information of the highest zone cut will be returned.\n\ +\n\ +This method raises an isc.datasrc.Error exception if there is an internal\n\ +error in the datasource.\n\ +\n\ +Parameters:\n\ + name The domain name to be searched for.\n\ + type The RR type to be searched for.\n\ + target If target is not NULL, insert all RRs under the domain\n\ + into it.\n\ + options The search options.\n\ +\n\ +Return Value(s): A tuple of a result code an a FindResult object enclosing\n\ +the search result (see above).\n\ +"; +} // unnamed namespace diff --git a/src/lib/python/isc/datasrc/finder_python.cc b/src/lib/python/isc/datasrc/finder_python.cc new file mode 100644 index 0000000000..598d3001af --- /dev/null +++ b/src/lib/python/isc/datasrc/finder_python.cc @@ -0,0 +1,248 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
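
To round off the finder documentation above, a hedged usage sketch. The zone and names are hypothetical, and note that in this wrapper find() takes all four arguments and returns a (result, rrset-or-None) tuple rather than a FindResult object.

    import isc.datasrc
    import isc.dns

    client = isc.datasrc.DataSourceClient("/tmp/zones.sqlite3")  # hypothetical path
    result, finder = client.find_zone(isc.dns.Name("example.org"))  # assumes the zone exists

    code, rrset = finder.find(isc.dns.Name("www.example.org"),
                              isc.dns.RRType("A"),
                              None,                                 # no ANY-style target list
                              isc.datasrc.ZoneFinder.FIND_DEFAULT)
    if code == isc.datasrc.ZoneFinder.SUCCESS:
        print(rrset.to_text())
    elif code == isc.datasrc.ZoneFinder.NXDOMAIN:
        print("name does not exist in the zone")
    elif code == isc.datasrc.ZoneFinder.NXRRSET:
        print("name exists, but has no A RRset")
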
+ +// Enable this if you use s# variants with PyArg_ParseTuple(), see +// http://docs.python.org/py3k/c-api/arg.html#strings-and-buffers +//#define PY_SSIZE_T_CLEAN + +// Python.h needs to be placed at the head of the program file, see: +// http://docs.python.org/py3k/extending/extending.html#a-simple-example +#include + +#include + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "datasrc.h" +#include "finder_python.h" +#include "finder_inc.cc" + +using namespace std; +using namespace isc::util::python; +using namespace isc::dns::python; +using namespace isc::datasrc; +using namespace isc::datasrc::python; + +namespace isc_datasrc_internal { +// This is the shared code for the find() call in the finder and the updater +// Is is intentionally not available through any header, nor at our standard +// namespace, as it is not supposed to be called anywhere but from finder and +// updater +PyObject* ZoneFinder_helper(ZoneFinder* finder, PyObject* args) { + if (finder == NULL) { + PyErr_SetString(getDataSourceException("Error"), + "Internal error in find() wrapper; finder object NULL"); + return (NULL); + } + PyObject *name; + PyObject *rrtype; + PyObject *target; + int options_int; + if (PyArg_ParseTuple(args, "O!O!OI", &name_type, &name, + &rrtype_type, &rrtype, + &target, &options_int)) { + try { + ZoneFinder::FindOptions options = + static_cast(options_int); + ZoneFinder::FindResult find_result( + finder->find(PyName_ToName(name), + PyRRType_ToRRType(rrtype), + NULL, + options + )); + ZoneFinder::Result r = find_result.code; + isc::dns::ConstRRsetPtr rrsp = find_result.rrset; + if (rrsp) { + // Use N instead of O so the refcount isn't increased twice + return (Py_BuildValue("IN", r, createRRsetObject(*rrsp))); + } else { + return (Py_BuildValue("IO", r, Py_None)); + } + } catch (const DataSourceError& dse) { + PyErr_SetString(getDataSourceException("Error"), dse.what()); + return (NULL); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } catch (...) { + PyErr_SetString(getDataSourceException("Error"), + "Unexpected exception"); + return (NULL); + } + } else { + return (NULL); + } + return Py_BuildValue("I", 1); +} + +} // end namespace internal + +namespace { +// The s_* Class simply covers one instantiation of the object +class s_ZoneFinder : public PyObject { +public: + s_ZoneFinder() : cppobj(ZoneFinderPtr()) {}; + ZoneFinderPtr cppobj; +}; + +// Shortcut type which would be convenient for adding class variables safely. +typedef CPPPyObjectContainer ZoneFinderContainer; + +// General creation and destruction +int +ZoneFinder_init(s_ZoneFinder* self, PyObject* args) { + // can't be called directly + PyErr_SetString(PyExc_TypeError, + "ZoneFinder cannot be constructed directly"); + + return (-1); +} + +void +ZoneFinder_destroy(s_ZoneFinder* const self) { + // cppobj is a shared ptr, but to make sure things are not destroyed in + // the wrong order, we reset it here. 
+ self->cppobj.reset(); + Py_TYPE(self)->tp_free(self); +} + +PyObject* +ZoneFinder_getClass(PyObject* po_self, PyObject*) { + s_ZoneFinder* self = static_cast(po_self); + try { + return (createRRClassObject(self->cppobj->getClass())); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } +} + +PyObject* +ZoneFinder_getOrigin(PyObject* po_self, PyObject*) { + s_ZoneFinder* self = static_cast(po_self); + try { + return (createNameObject(self->cppobj->getOrigin())); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } catch (...) { + PyErr_SetString(getDataSourceException("Error"), + "Unexpected exception"); + return (NULL); + } +} + +PyObject* +ZoneFinder_find(PyObject* po_self, PyObject* args) { + s_ZoneFinder* const self = static_cast(po_self); + return (isc_datasrc_internal::ZoneFinder_helper(self->cppobj.get(), args)); +} + +// This list contains the actual set of functions we have in +// python. Each entry has +// 1. Python method name +// 2. Our static function here +// 3. Argument type +// 4. Documentation +PyMethodDef ZoneFinder_methods[] = { + { "get_origin", reinterpret_cast(ZoneFinder_getOrigin), + METH_NOARGS, ZoneFinder_getOrigin_doc }, + { "get_class", reinterpret_cast(ZoneFinder_getClass), + METH_NOARGS, ZoneFinder_getClass_doc }, + { "find", reinterpret_cast(ZoneFinder_find), METH_VARARGS, + ZoneFinder_find_doc }, + { NULL, NULL, 0, NULL } +}; + +} // end of unnamed namespace + +namespace isc { +namespace datasrc { +namespace python { + +PyTypeObject zonefinder_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "datasrc.ZoneFinder", + sizeof(s_ZoneFinder), // tp_basicsize + 0, // tp_itemsize + reinterpret_cast(ZoneFinder_destroy),// tp_dealloc + NULL, // tp_print + NULL, // tp_getattr + NULL, // tp_setattr + NULL, // tp_reserved + NULL, // tp_repr + NULL, // tp_as_number + NULL, // tp_as_sequence + NULL, // tp_as_mapping + NULL, // tp_hash + NULL, // tp_call + NULL, // tp_str + NULL, // tp_getattro + NULL, // tp_setattro + NULL, // tp_as_buffer + Py_TPFLAGS_DEFAULT, // tp_flags + ZoneFinder_doc, + NULL, // tp_traverse + NULL, // tp_clear + NULL, // tp_richcompare + 0, // tp_weaklistoffset + NULL, // tp_iter + NULL, // tp_iternext + ZoneFinder_methods, // tp_methods + NULL, // tp_members + NULL, // tp_getset + NULL, // tp_base + NULL, // tp_dict + NULL, // tp_descr_get + NULL, // tp_descr_set + 0, // tp_dictoffset + reinterpret_cast(ZoneFinder_init),// tp_init + NULL, // tp_alloc + PyType_GenericNew, // tp_new + NULL, // tp_free + NULL, // tp_is_gc + NULL, // tp_bases + NULL, // tp_mro + NULL, // tp_cache + NULL, // tp_subclasses + NULL, // tp_weaklist + NULL, // tp_del + 0 // tp_version_tag +}; + +PyObject* +createZoneFinderObject(isc::datasrc::ZoneFinderPtr source) { + s_ZoneFinder* py_zi = static_cast( + zonefinder_type.tp_alloc(&zonefinder_type, 0)); + if (py_zi != NULL) { + py_zi->cppobj = source; + } + return (py_zi); +} + +} // namespace python +} // namespace datasrc +} // namespace isc + diff --git a/src/lib/python/isc/datasrc/finder_python.h b/src/lib/python/isc/datasrc/finder_python.h new file mode 100644 index 0000000000..5f2404e342 --- /dev/null +++ b/src/lib/python/isc/datasrc/finder_python.h @@ -0,0 +1,36 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. 
("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#ifndef __PYTHON_DATASRC_FINDER_H +#define __PYTHON_DATASRC_FINDER_H 1 + +#include + +namespace isc { +namespace datasrc { + +namespace python { + +extern PyTypeObject zonefinder_type; + +PyObject* createZoneFinderObject(isc::datasrc::ZoneFinderPtr source); + +} // namespace python +} // namespace datasrc +} // namespace isc +#endif // __PYTHON_DATASRC_FINDER_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/python/isc/datasrc/iterator_inc.cc b/src/lib/python/isc/datasrc/iterator_inc.cc new file mode 100644 index 0000000000..b1d9d2550e --- /dev/null +++ b/src/lib/python/isc/datasrc/iterator_inc.cc @@ -0,0 +1,34 @@ +namespace { + +const char* const ZoneIterator_doc = "\ +Read-only iterator to a zone.\n\ +\n\ +You can get an instance of the ZoneIterator from\n\ +DataSourceClient.get_iterator() method. The actual concrete\n\ +c++ implementation will be different depending on the actual data source\n\ +used. This is the abstract interface.\n\ +\n\ +There's no way to start iterating from the beginning again or return.\n\ +\n\ +The ZoneIterator is a python iterator, and can be iterated over directly.\n\ +"; + +const char* const ZoneIterator_getNextRRset_doc = "\ +get_next_rrset() -> isc.dns.RRset\n\ +\n\ +Get next RRset from the zone.\n\ +\n\ +This returns the next RRset in the zone.\n\ +\n\ +Any special order is not guaranteed.\n\ +\n\ +While this can potentially throw anything (including standard\n\ +allocation errors), it should be rare.\n\ +\n\ +Pointer to the next RRset or None pointer when the iteration gets to\n\ +the end of the zone.\n\ +\n\ +Raises an isc.datasrc.Error exception if it is called again after returning\n\ +None\n\ +"; +} // unnamed namespace diff --git a/src/lib/python/isc/datasrc/iterator_python.cc b/src/lib/python/isc/datasrc/iterator_python.cc new file mode 100644 index 0000000000..b482ea69e4 --- /dev/null +++ b/src/lib/python/isc/datasrc/iterator_python.cc @@ -0,0 +1,202 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
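
A rough usage sketch (assumed database and zone names): the ZoneIterator described above can be drained with explicit get_next_rrset() calls until it returns None.

    import isc.datasrc
    import isc.dns

    client = isc.datasrc.DataSourceClient("example.com.sqlite3")
    iterator = client.get_iterator(isc.dns.Name("example.com"))

    # get_next_rrset() returns None once the whole zone has been read.
    rrset = iterator.get_next_rrset()
    while rrset is not None:
        print(rrset.to_text())
        rrset = iterator.get_next_rrset()
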
+ +// Enable this if you use s# variants with PyArg_ParseTuple(), see +// http://docs.python.org/py3k/c-api/arg.html#strings-and-buffers +//#define PY_SSIZE_T_CLEAN + +// Python.h needs to be placed at the head of the program file, see: +// http://docs.python.org/py3k/extending/extending.html#a-simple-example +#include + +#include + +#include +#include +#include +#include + +#include +#include + +#include "datasrc.h" +#include "iterator_python.h" + +#include "iterator_inc.cc" + +using namespace std; +using namespace isc::util::python; +using namespace isc::dns::python; +using namespace isc::datasrc; +using namespace isc::datasrc::python; + +namespace { +// The s_* Class simply covers one instantiation of the object +class s_ZoneIterator : public PyObject { +public: + s_ZoneIterator() : cppobj(ZoneIteratorPtr()) {}; + ZoneIteratorPtr cppobj; +}; + +// Shortcut type which would be convenient for adding class variables safely. +typedef CPPPyObjectContainer + ZoneIteratorContainer; + +// General creation and destruction +int +ZoneIterator_init(s_ZoneIterator* self, PyObject* args) { + // can't be called directly + PyErr_SetString(PyExc_TypeError, + "ZoneIterator cannot be constructed directly"); + + return (-1); +} + +void +ZoneIterator_destroy(s_ZoneIterator* const self) { + // cppobj is a shared ptr, but to make sure things are not destroyed in + // the wrong order, we reset it here. + self->cppobj.reset(); + Py_TYPE(self)->tp_free(self); +} + +// +// We declare the functions here, the definitions are below +// the type definition of the object, since both can use the other +// +PyObject* +ZoneIterator_getNextRRset(PyObject* po_self, PyObject*) { + s_ZoneIterator* self = static_cast(po_self); + if (!self->cppobj) { + PyErr_SetString(getDataSourceException("Error"), + "get_next_rrset() called past end of iterator"); + return (NULL); + } + try { + isc::dns::ConstRRsetPtr rrset = self->cppobj->getNextRRset(); + if (!rrset) { + Py_RETURN_NONE; + } + return (createRRsetObject(*rrset)); + } catch (const isc::Exception& isce) { + // isc::Unexpected is thrown when we call getNextRRset() when we are + // already done iterating ('iterating past end') + // We could also simply return None again + PyErr_SetString(getDataSourceException("Error"), isce.what()); + return (NULL); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } catch (...) 
{ + PyErr_SetString(getDataSourceException("Error"), + "Unexpected exception"); + return (NULL); + } +} + +PyObject* +ZoneIterator_iter(PyObject *self) { + Py_INCREF(self); + return (self); +} + +PyObject* +ZoneIterator_next(PyObject* self) { + PyObject *result = ZoneIterator_getNextRRset(self, NULL); + // iter_next must return NULL without error instead of Py_None + if (result == Py_None) { + Py_DECREF(result); + return (NULL); + } else { + return (result); + } +} + +PyMethodDef ZoneIterator_methods[] = { + { "get_next_rrset", + reinterpret_cast(ZoneIterator_getNextRRset), METH_NOARGS, + ZoneIterator_getNextRRset_doc }, + { NULL, NULL, 0, NULL } +}; + + +} // end of unnamed namespace + +namespace isc { +namespace datasrc { +namespace python { +PyTypeObject zoneiterator_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "datasrc.ZoneIterator", + sizeof(s_ZoneIterator), // tp_basicsize + 0, // tp_itemsize + reinterpret_cast(ZoneIterator_destroy),// tp_dealloc + NULL, // tp_print + NULL, // tp_getattr + NULL, // tp_setattr + NULL, // tp_reserved + NULL, // tp_repr + NULL, // tp_as_number + NULL, // tp_as_sequence + NULL, // tp_as_mapping + NULL, // tp_hash + NULL, // tp_call + NULL, // tp_str + NULL, // tp_getattro + NULL, // tp_setattro + NULL, // tp_as_buffer + Py_TPFLAGS_DEFAULT, // tp_flags + ZoneIterator_doc, + NULL, // tp_traverse + NULL, // tp_clear + NULL, // tp_richcompare + 0, // tp_weaklistoffset + ZoneIterator_iter, // tp_iter + ZoneIterator_next, // tp_iternext + ZoneIterator_methods, // tp_methods + NULL, // tp_members + NULL, // tp_getset + NULL, // tp_base + NULL, // tp_dict + NULL, // tp_descr_get + NULL, // tp_descr_set + 0, // tp_dictoffset + reinterpret_cast(ZoneIterator_init),// tp_init + NULL, // tp_alloc + PyType_GenericNew, // tp_new + NULL, // tp_free + NULL, // tp_is_gc + NULL, // tp_bases + NULL, // tp_mro + NULL, // tp_cache + NULL, // tp_subclasses + NULL, // tp_weaklist + NULL, // tp_del + 0 // tp_version_tag +}; + +PyObject* +createZoneIteratorObject(isc::datasrc::ZoneIteratorPtr source) { + s_ZoneIterator* py_zi = static_cast( + zoneiterator_type.tp_alloc(&zoneiterator_type, 0)); + if (py_zi != NULL) { + py_zi->cppobj = source; + } + return (py_zi); +} + +} // namespace python +} // namespace datasrc +} // namespace isc + diff --git a/src/lib/python/isc/datasrc/iterator_python.h b/src/lib/python/isc/datasrc/iterator_python.h new file mode 100644 index 0000000000..b457740159 --- /dev/null +++ b/src/lib/python/isc/datasrc/iterator_python.h @@ -0,0 +1,38 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
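
Because the type object above also fills in tp_iter and tp_iternext, the same wrapper works as an ordinary Python iterator, and, as the docstring notes, calling get_next_rrset() again after exhaustion raises isc.datasrc.Error. A sketch under the same assumed names:

    import isc.datasrc
    import isc.dns

    client = isc.datasrc.DataSourceClient("example.com.sqlite3")
    iterator = client.get_iterator(isc.dns.Name("example.com"))

    for rrset in iterator:          # exhaustion becomes StopIteration
        print(rrset.to_text())

    try:
        iterator.get_next_rrset()   # already past the end of the zone
    except isc.datasrc.Error:
        pass
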
+ +#ifndef __PYTHON_DATASRC_ITERATOR_H +#define __PYTHON_DATASRC_ITERATOR_H 1 + +#include + +namespace isc { +namespace datasrc { +class DataSourceClient; + +namespace python { + +extern PyTypeObject zoneiterator_type; + +PyObject* createZoneIteratorObject(isc::datasrc::ZoneIteratorPtr source); + + +} // namespace python +} // namespace datasrc +} // namespace isc +#endif // __PYTHON_DATASRC_ITERATOR_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/python/isc/datasrc/sqlite3_ds.py b/src/lib/python/isc/datasrc/sqlite3_ds.py index a77645a11f..fd63741ef2 100644 --- a/src/lib/python/isc/datasrc/sqlite3_ds.py +++ b/src/lib/python/isc/datasrc/sqlite3_ds.py @@ -33,44 +33,63 @@ def create(cur): Arguments: cur - sqlite3 cursor. """ - cur.execute("CREATE TABLE schema_version (version INTEGER NOT NULL)") - cur.execute("INSERT INTO schema_version VALUES (1)") - cur.execute("""CREATE TABLE zones (id INTEGER PRIMARY KEY, - name STRING NOT NULL COLLATE NOCASE, - rdclass STRING NOT NULL COLLATE NOCASE DEFAULT 'IN', - dnssec BOOLEAN NOT NULL DEFAULT 0)""") - cur.execute("CREATE INDEX zones_byname ON zones (name)") - cur.execute("""CREATE TABLE records (id INTEGER PRIMARY KEY, - zone_id INTEGER NOT NULL, - name STRING NOT NULL COLLATE NOCASE, - rname STRING NOT NULL COLLATE NOCASE, - ttl INTEGER NOT NULL, - rdtype STRING NOT NULL COLLATE NOCASE, - sigtype STRING COLLATE NOCASE, - rdata STRING NOT NULL)""") - cur.execute("CREATE INDEX records_byname ON records (name)") - cur.execute("CREATE INDEX records_byrname ON records (rname)") - cur.execute("""CREATE TABLE nsec3 (id INTEGER PRIMARY KEY, - zone_id INTEGER NOT NULL, - hash STRING NOT NULL COLLATE NOCASE, - owner STRING NOT NULL COLLATE NOCASE, - ttl INTEGER NOT NULL, - rdtype STRING NOT NULL COLLATE NOCASE, - rdata STRING NOT NULL)""") - cur.execute("CREATE INDEX nsec3_byhash ON nsec3 (hash)") + # We are creating the database because it apparently had not been at + # the time we tried to read from it. However, another process may have + # had the same idea, resulting in a potential race condition. + # Therefore, we obtain an exclusive lock before we create anything + # When we have it, we check *again* whether the database has been + # initialized. If not, we do so. -def open(dbfile): + # If the database is perpetually locked, it'll time out automatically + # and we just let it fail. 
+ cur.execute("BEGIN EXCLUSIVE TRANSACTION") + try: + cur.execute("SELECT version FROM schema_version") + row = cur.fetchone() + except sqlite3.OperationalError: + cur.execute("CREATE TABLE schema_version (version INTEGER NOT NULL)") + cur.execute("INSERT INTO schema_version VALUES (1)") + cur.execute("""CREATE TABLE zones (id INTEGER PRIMARY KEY, + name STRING NOT NULL COLLATE NOCASE, + rdclass STRING NOT NULL COLLATE NOCASE DEFAULT 'IN', + dnssec BOOLEAN NOT NULL DEFAULT 0)""") + cur.execute("CREATE INDEX zones_byname ON zones (name)") + cur.execute("""CREATE TABLE records (id INTEGER PRIMARY KEY, + zone_id INTEGER NOT NULL, + name STRING NOT NULL COLLATE NOCASE, + rname STRING NOT NULL COLLATE NOCASE, + ttl INTEGER NOT NULL, + rdtype STRING NOT NULL COLLATE NOCASE, + sigtype STRING COLLATE NOCASE, + rdata STRING NOT NULL)""") + cur.execute("CREATE INDEX records_byname ON records (name)") + cur.execute("CREATE INDEX records_byrname ON records (rname)") + cur.execute("""CREATE TABLE nsec3 (id INTEGER PRIMARY KEY, + zone_id INTEGER NOT NULL, + hash STRING NOT NULL COLLATE NOCASE, + owner STRING NOT NULL COLLATE NOCASE, + ttl INTEGER NOT NULL, + rdtype STRING NOT NULL COLLATE NOCASE, + rdata STRING NOT NULL)""") + cur.execute("CREATE INDEX nsec3_byhash ON nsec3 (hash)") + row = [1] + cur.execute("COMMIT TRANSACTION") + return row + +def open(dbfile, connect_timeout=5.0): """ Open a database, if the database is not yet set up, call create to do so. It may raise Sqlite3DSError if failed to open sqlite3 database file or find bad database schema version in the database. Arguments: dbfile - the filename for the sqlite3 database. + connect_timeout - timeout for opening the database or acquiring locks + defaults to sqlite3 module's default of 5.0 seconds Return sqlite3 connection, sqlite3 cursor. """ try: - conn = sqlite3.connect(dbfile) + conn = sqlite3.connect(dbfile, timeout=connect_timeout) cur = conn.cursor() except Exception as e: fail = "Failed to open " + dbfile + ": " + e.args[0] @@ -80,10 +99,13 @@ def open(dbfile): try: cur.execute("SELECT version FROM schema_version") row = cur.fetchone() - except: - create(cur) - conn.commit() - row = [1] + except sqlite3.OperationalError: + # temporarily disable automatic transactions so + # we can do our own + iso_lvl = conn.isolation_level + conn.isolation_level = None + row = create(cur) + conn.isolation_level = iso_lvl if row == None or row[0] != 1: raise Sqlite3DSError("Bad database schema version") diff --git a/src/lib/python/isc/datasrc/tests/Makefile.am b/src/lib/python/isc/datasrc/tests/Makefile.am index 6f6d15731d..be30dfa0e2 100644 --- a/src/lib/python/isc/datasrc/tests/Makefile.am +++ b/src/lib/python/isc/datasrc/tests/Makefile.am @@ -1,16 +1,18 @@ PYCOVERAGE_RUN = @PYCOVERAGE_RUN@ -PYTESTS = master_test.py sqlite3_ds_test.py +# old tests, TODO remove or change to use new API? +#PYTESTS = master_test.py sqlite3_ds_test.py +PYTESTS = datasrc_test.py EXTRA_DIST = $(PYTESTS) EXTRA_DIST += testdata/brokendb.sqlite3 EXTRA_DIST += testdata/example.com.sqlite3 -CLEANFILES = $(abs_builddir)/example.com.out.sqlite3 +CLEANFILES = $(abs_builddir)/rwtest.sqlite3.copied # If necessary (rare cases), explicitly specify paths to dynamic libraries # required by loadable python modules. 
LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS @@ -23,7 +25,7 @@ endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/lib/python/isc/log \ + PYTHONPATH=:$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/lib/python/isc/log:$(abs_top_builddir)/src/lib/python/isc/datasrc/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs \ TESTDATA_PATH=$(abs_srcdir)/testdata \ TESTDATA_WRITE_PATH=$(abs_builddir) \ $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ diff --git a/src/lib/python/isc/datasrc/tests/datasrc_test.py b/src/lib/python/isc/datasrc/tests/datasrc_test.py new file mode 100644 index 0000000000..15ceb805e7 --- /dev/null +++ b/src/lib/python/isc/datasrc/tests/datasrc_test.py @@ -0,0 +1,389 @@ +# Copyright (C) 2011 Internet Systems Consortium. +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
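
Before the new datasrc tests that follow, the sqlite3_ds.open() rework above (schema creation under an exclusive lock plus the new connect_timeout argument) can be exercised roughly as sketched here; the file name is an assumed example. With a short timeout, open() gives up quickly with sqlite3.OperationalError when another connection holds the lock, instead of blocking for the default five seconds.

    import sqlite3
    from isc.datasrc import sqlite3_ds

    # The first open creates the schema, taking an exclusive lock if needed.
    conn, cur = sqlite3_ds.open("new_db.sqlite3")
    conn.close()

    # A later open with a short timeout fails fast if the file is locked.
    try:
        conn, cur = sqlite3_ds.open("new_db.sqlite3", 0.1)
        conn.close()
    except sqlite3.OperationalError:
        pass
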
+ +import isc.log +import isc.datasrc +import isc.dns +import unittest +import os +import shutil + +TESTDATA_PATH = os.environ['TESTDATA_PATH'] + os.sep +TESTDATA_WRITE_PATH = os.environ['TESTDATA_WRITE_PATH'] + os.sep + +READ_ZONE_DB_FILE = TESTDATA_PATH + "example.com.sqlite3" +BROKEN_DB_FILE = TESTDATA_PATH + "brokendb.sqlite3" +WRITE_ZONE_DB_FILE = TESTDATA_WRITE_PATH + "rwtest.sqlite3.copied" +NEW_DB_FILE = TESTDATA_WRITE_PATH + "new_db.sqlite3" + +def add_rrset(rrset_list, name, rrclass, rrtype, ttl, rdatas): + rrset_to_add = isc.dns.RRset(name, rrclass, rrtype, ttl) + if rdatas is not None: + for rdata in rdatas: + rrset_to_add.add_rdata(isc.dns.Rdata(rrtype, rrclass, rdata)) + rrset_list.append(rrset_to_add) + +# helper function, we have no direct rrset comparison atm +def rrsets_equal(a, b): + # no accessor for sigs either (so this only checks name, class, type, ttl, + # and rdata) + # also, because of the fake data in rrsigs, if the type is rrsig, the + # rdata is not checked + return a.get_name() == b.get_name() and\ + a.get_class() == b.get_class() and\ + a.get_type() == b.get_type() and \ + a.get_ttl() == b.get_ttl() and\ + (a.get_type() == isc.dns.RRType.RRSIG() or + sorted(a.get_rdata()) == sorted(b.get_rdata())) + +# returns true if rrset is in expected_rrsets +# will remove the rrset from expected_rrsets if found +def check_for_rrset(expected_rrsets, rrset): + for cur_rrset in expected_rrsets[:]: + if rrsets_equal(cur_rrset, rrset): + expected_rrsets.remove(cur_rrset) + return True + return False + +class DataSrcClient(unittest.TestCase): + + def test_construct(self): + # can't construct directly + self.assertRaises(TypeError, isc.datasrc.ZoneIterator) + + + def test_iterate(self): + dsc = isc.datasrc.DataSourceClient(READ_ZONE_DB_FILE) + + # for RRSIGS, the TTL's are currently modified. This test should + # start failing when we fix that. + rrs = dsc.get_iterator(isc.dns.Name("sql1.example.com.")) + + # we do not know the order in which they are returned by the iterator + # but we do want to check them, so we put all records into one list + # sort it (doesn't matter which way it is sorted, as long as it is + # sorted) + + # RRset is (atm) an unorderable type, and within an rrset, the + # rdatas and rrsigs may also be in random order. In theory the + # rrsets themselves can be returned in any order. + # + # So we create a second list with all rrsets we expect, and for each + # rrset we get from the iterator, see if it is in that list, and + # remove it. 
+ # + # When the iterator is empty, we check no rrsets are left in the + # list of expected ones + expected_rrset_list = [] + + name = isc.dns.Name("sql1.example.com") + rrclass = isc.dns.RRClass.IN() + add_rrset(expected_rrset_list, name, rrclass, + isc.dns.RRType.DNSKEY(), isc.dns.RRTTL(3600), + [ + "256 3 5 AwEAAdYdRhBAEY67R/8G1N5AjGF6asIiNh/pNGeQ8xDQP13J"+ + "N2lo+sNqWcmpYNhuVqRbLB+mamsU1XcCICSBvAlSmfz/ZUdafX23knAr"+ + "TlALxMmspcfdpqun3Yr3YYnztuj06rV7RqmveYckWvAUXVYMSMQZfJ30"+ + "5fs0dE/xLztL/CzZ", + "257 3 5 AwEAAbaKDSa9XEFTsjSYpUTHRotTS9Tz3krfDucugW5UokGQ"+ + "KC26QlyHXlPTZkC+aRFUs/dicJX2kopndLcnlNAPWiKnKtrsFSCnIJDB"+ + "ZIyvcKq+9RXmV3HK3bUdHnQZ88IZWBRmWKfZ6wnzHo53kdYKAemTErkz"+ + "taX3lRRPLYWpxRcDPEjysXT3Lh0vfL5D+CIO1yKw/q7C+v6+/kYAxc2l"+ + "fbNE3HpklSuF+dyX4nXxWgzbcFuLz5Bwfq6ZJ9RYe/kNkA0uMWNa1KkG"+ + "eRh8gg22kgD/KT5hPTnpezUWLvoY5Qc7IB3T0y4n2JIwiF2ZrZYVrWgD"+ + "jRWAzGsxJiJyjd6w2k0=" + ]) + add_rrset(expected_rrset_list, name, rrclass, + isc.dns.RRType.NS(), isc.dns.RRTTL(3600), + [ + "dns01.example.com.", + "dns02.example.com.", + "dns03.example.com." + ]) + add_rrset(expected_rrset_list, name, rrclass, + isc.dns.RRType.NSEC(), isc.dns.RRTTL(7200), + [ + "www.sql1.example.com. NS SOA RRSIG NSEC DNSKEY" + ]) + # For RRSIGS, we can't add the fake data through the API, so we + # simply pass no rdata at all (which is skipped by the check later) + add_rrset(expected_rrset_list, name, rrclass, + isc.dns.RRType.RRSIG(), isc.dns.RRTTL(3600), None) + add_rrset(expected_rrset_list, name, rrclass, + isc.dns.RRType.SOA(), isc.dns.RRTTL(3600), + [ + "master.example.com. admin.example.com. 678 3600 1800 2419200 7200" + ]) + name = isc.dns.Name("www.sql1.example.com.") + add_rrset(expected_rrset_list, name, rrclass, + isc.dns.RRType.A(), isc.dns.RRTTL(3600), + [ + "192.0.2.100" + ]) + name = isc.dns.Name("www.sql1.example.com.") + add_rrset(expected_rrset_list, name, rrclass, + isc.dns.RRType.NSEC(), isc.dns.RRTTL(7200), + [ + "sql1.example.com. A RRSIG NSEC" + ]) + add_rrset(expected_rrset_list, name, rrclass, + isc.dns.RRType.RRSIG(), isc.dns.RRTTL(3600), None) + + # rrs is an iterator, but also has direct get_next_rrset(), use + # the latter one here + rrset_to_check = rrs.get_next_rrset() + while (rrset_to_check != None): + self.assertTrue(check_for_rrset(expected_rrset_list, + rrset_to_check), + "Unexpected rrset returned by iterator:\n" + + rrset_to_check.to_text()) + rrset_to_check = rrs.get_next_rrset() + + # Now check there are none left + self.assertEqual(0, len(expected_rrset_list), + "RRset(s) not returned by iterator: " + + str([rrset.to_text() for rrset in expected_rrset_list ] + )) + + # TODO should we catch this (iterating past end) and just return None + # instead of failing? + self.assertRaises(isc.datasrc.Error, rrs.get_next_rrset) + + rrets = dsc.get_iterator(isc.dns.Name("example.com")) + # there are more than 80 RRs in this zone... let's just count them + # (already did a full check of the smaller zone above) + self.assertEqual(55, len(list(rrets))) + # TODO should we catch this (iterating past end) and just return None + # instead of failing? 
+ self.assertRaises(isc.datasrc.Error, rrs.get_next_rrset) + + self.assertRaises(TypeError, dsc.get_iterator, "asdf") + + def test_construct(self): + # can't construct directly + self.assertRaises(TypeError, isc.datasrc.ZoneFinder) + + def test_find(self): + dsc = isc.datasrc.DataSourceClient(READ_ZONE_DB_FILE) + + result, finder = dsc.find_zone(isc.dns.Name("example.com")) + self.assertEqual(finder.SUCCESS, result) + self.assertEqual(isc.dns.RRClass.IN(), finder.get_class()) + self.assertEqual("example.com.", finder.get_origin().to_text()) + + result, rrset = finder.find(isc.dns.Name("www.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.SUCCESS, result) + self.assertEqual("www.example.com. 3600 IN A 192.0.2.1\n", + rrset.to_text()) + + result, rrset = finder.find(isc.dns.Name("www.sql1.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.DELEGATION, result) + self.assertEqual("sql1.example.com. 3600 IN NS dns01.example.com.\n" + + "sql1.example.com. 3600 IN NS dns02.example.com.\n" + + "sql1.example.com. 3600 IN NS dns03.example.com.\n", + rrset.to_text()) + + result, rrset = finder.find(isc.dns.Name("doesnotexist.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.NXDOMAIN, result) + self.assertEqual(None, rrset) + + result, rrset = finder.find(isc.dns.Name("www.some.other.domain"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.NXDOMAIN, result) + self.assertEqual(None, rrset) + + result, rrset = finder.find(isc.dns.Name("www.example.com"), + isc.dns.RRType.TXT(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.NXRRSET, result) + self.assertEqual(None, rrset) + + result, rrset = finder.find(isc.dns.Name("cname-ext.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.CNAME, result) + self.assertEqual( + "cname-ext.example.com. 3600 IN CNAME www.sql1.example.com.\n", + rrset.to_text()) + + self.assertRaises(TypeError, finder.find, + "foo", + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertRaises(TypeError, finder.find, + isc.dns.Name("cname-ext.example.com"), + "foo", + None, + finder.FIND_DEFAULT) + self.assertRaises(TypeError, finder.find, + isc.dns.Name("cname-ext.example.com"), + isc.dns.RRType.A(), + None, + "foo") + + +class DataSrcUpdater(unittest.TestCase): + + def setUp(self): + # Make a fresh copy of the writable database with all original content + shutil.copyfile(READ_ZONE_DB_FILE, WRITE_ZONE_DB_FILE) + + def test_construct(self): + # can't construct directly + self.assertRaises(TypeError, isc.datasrc.ZoneUpdater) + + def test_update_delete_commit(self): + + dsc = isc.datasrc.DataSourceClient(WRITE_ZONE_DB_FILE) + + # first make sure, through a separate finder, that some record exists + result, finder = dsc.find_zone(isc.dns.Name("example.com")) + self.assertEqual(finder.SUCCESS, result) + self.assertEqual(isc.dns.RRClass.IN(), finder.get_class()) + self.assertEqual("example.com.", finder.get_origin().to_text()) + + result, rrset = finder.find(isc.dns.Name("www.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.SUCCESS, result) + self.assertEqual("www.example.com. 3600 IN A 192.0.2.1\n", + rrset.to_text()) + + rrset_to_delete = rrset; + + # can't delete rrset with associated sig. 
Abuse that to force an + # exception first, then remove the sig, then delete the record + updater = dsc.get_updater(isc.dns.Name("example.com"), True) + self.assertRaises(isc.datasrc.Error, updater.delete_rrset, + rrset_to_delete) + + rrset_to_delete.remove_rrsig() + + updater.delete_rrset(rrset_to_delete) + + # The record should be gone in the updater, but not in the original + # finder (since we have not committed) + result, rrset = updater.find(isc.dns.Name("www.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.NXDOMAIN, result) + self.assertEqual(None, rrset) + + result, rrset = finder.find(isc.dns.Name("www.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.SUCCESS, result) + self.assertEqual("www.example.com. 3600 IN A 192.0.2.1\n", + rrset.to_text()) + + updater.commit() + # second commit should raise exception + self.assertRaises(isc.datasrc.Error, updater.commit) + + # the record should be gone now in the 'real' finder as well + result, rrset = finder.find(isc.dns.Name("www.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.NXDOMAIN, result) + self.assertEqual(None, rrset) + + # now add it again + updater = dsc.get_updater(isc.dns.Name("example.com"), True) + updater.add_rrset(rrset_to_delete) + updater.commit() + + # second commit should throw + self.assertRaises(isc.datasrc.Error, updater.commit) + + result, rrset = finder.find(isc.dns.Name("www.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.SUCCESS, result) + self.assertEqual("www.example.com. 3600 IN A 192.0.2.1\n", + rrset.to_text()) + + def test_update_delete_abort(self): + dsc = isc.datasrc.DataSourceClient(WRITE_ZONE_DB_FILE) + + # first make sure, through a separate finder, that some record exists + result, finder = dsc.find_zone(isc.dns.Name("example.com")) + self.assertEqual(finder.SUCCESS, result) + self.assertEqual(isc.dns.RRClass.IN(), finder.get_class()) + self.assertEqual("example.com.", finder.get_origin().to_text()) + + result, rrset = finder.find(isc.dns.Name("www.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.SUCCESS, result) + self.assertEqual("www.example.com. 3600 IN A 192.0.2.1\n", + rrset.to_text()) + + rrset_to_delete = rrset; + + # can't delete rrset with associated sig. Abuse that to force an + # exception first, then remove the sig, then delete the record + updater = dsc.get_updater(isc.dns.Name("example.com"), True) + self.assertRaises(isc.datasrc.Error, updater.delete_rrset, + rrset_to_delete) + + rrset_to_delete.remove_rrsig() + + updater.delete_rrset(rrset_to_delete) + + # The record should be gone in the updater, but not in the original + # finder (since we have not committed) + result, rrset = updater.find(isc.dns.Name("www.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.NXDOMAIN, result) + self.assertEqual(None, rrset) + + # destroy the updater, which should make it roll back + updater = None + + # the record should still be available in the 'real' finder as well + result, rrset = finder.find(isc.dns.Name("www.example.com"), + isc.dns.RRType.A(), + None, + finder.FIND_DEFAULT) + self.assertEqual(finder.SUCCESS, result) + self.assertEqual("www.example.com. 
3600 IN A 192.0.2.1\n", + rrset.to_text()) + + +if __name__ == "__main__": + isc.log.init("bind10") + unittest.main() diff --git a/src/lib/python/isc/datasrc/tests/sqlite3_ds_test.py b/src/lib/python/isc/datasrc/tests/sqlite3_ds_test.py index 707994f6f0..10c61cf2a1 100644 --- a/src/lib/python/isc/datasrc/tests/sqlite3_ds_test.py +++ b/src/lib/python/isc/datasrc/tests/sqlite3_ds_test.py @@ -23,8 +23,9 @@ TESTDATA_PATH = os.environ['TESTDATA_PATH'] + os.sep TESTDATA_WRITE_PATH = os.environ['TESTDATA_WRITE_PATH'] + os.sep READ_ZONE_DB_FILE = TESTDATA_PATH + "example.com.sqlite3" -WRITE_ZONE_DB_FILE = TESTDATA_WRITE_PATH + "example.com.out.sqlite3" BROKEN_DB_FILE = TESTDATA_PATH + "brokendb.sqlite3" +WRITE_ZONE_DB_FILE = TESTDATA_WRITE_PATH + "example.com.out.sqlite3" +NEW_DB_FILE = TESTDATA_WRITE_PATH + "new_db.sqlite3" def example_reader(): my_zone = [ @@ -91,5 +92,52 @@ class TestSqlite3_ds(unittest.TestCase): # and make sure lock does not stay sqlite3_ds.load(WRITE_ZONE_DB_FILE, ".", example_reader) +class NewDBFile(unittest.TestCase): + def tearDown(self): + # remove the created database after every test + if (os.path.exists(NEW_DB_FILE)): + os.remove(NEW_DB_FILE) + + def setUp(self): + # remove the created database before every test too, just + # in case a test got aborted half-way, and cleanup didn't occur + if (os.path.exists(NEW_DB_FILE)): + os.remove(NEW_DB_FILE) + + def test_new_db(self): + self.assertFalse(os.path.exists(NEW_DB_FILE)) + sqlite3_ds.open(NEW_DB_FILE) + self.assertTrue(os.path.exists(NEW_DB_FILE)) + + def test_new_db_locked(self): + self.assertFalse(os.path.exists(NEW_DB_FILE)) + con = sqlite3.connect(NEW_DB_FILE); + con.isolation_level = None + cur = con.cursor() + cur.execute("BEGIN IMMEDIATE TRANSACTION") + + # load should now fail, since the database is locked, + # and the open() call needs an exclusive lock + self.assertRaises(sqlite3.OperationalError, + sqlite3_ds.open, NEW_DB_FILE, 0.1) + + con.rollback() + cur.close() + con.close() + self.assertTrue(os.path.exists(NEW_DB_FILE)) + + # now that we closed our connection, load should work again + sqlite3_ds.open(NEW_DB_FILE) + + # the database should now have been created, and a new load should + # not require an exclusive lock anymore, so we lock it again + con = sqlite3.connect(NEW_DB_FILE); + cur = con.cursor() + cur.execute("BEGIN IMMEDIATE TRANSACTION") + sqlite3_ds.open(NEW_DB_FILE, 0.1) + con.rollback() + cur.close() + con.close() + if __name__ == '__main__': unittest.main() diff --git a/src/lib/python/isc/datasrc/updater_inc.cc b/src/lib/python/isc/datasrc/updater_inc.cc new file mode 100644 index 0000000000..32715ecd65 --- /dev/null +++ b/src/lib/python/isc/datasrc/updater_inc.cc @@ -0,0 +1,181 @@ +namespace { + +const char* const ZoneUpdater_doc = "\ +The base class to make updates to a single zone.\n\ +\n\ +On construction, each derived class object will start a\n\ +\"transaction\" for making updates to a specific zone (this means a\n\ +constructor of a derived class would normally take parameters to\n\ +identify the zone to be updated). The underlying realization of a\n\ +\"transaction\" will differ for different derived classes; if it uses\n\ +a general purpose database as a backend, it will involve performing\n\ +some form of \"begin transaction\" statement for the database.\n\ +\n\ +Updates (adding or deleting RRs) are made via add_rrset() and\n\ +delete_rrset() methods. Until the commit() method is called the\n\ +changes are local to the updater object. 
For example, they won't be\n\ +visible via a ZoneFinder object, but only by the updater's own find()\n\ +method. The commit() completes the transaction and makes the changes\n\ +visible to others.\n\ +\n\ +This class does not provide an explicit \"rollback\" interface. If\n\ +something wrong or unexpected happens during the updates and the\n\ +caller wants to cancel the intermediate updates, the caller should\n\ +simply destroy the updater object without calling commit(). The\n\ +destructor is supposed to perform the \"rollback\" operation,\n\ +depending on the internal details of the derived class.\n\ +\n\ +This initial implementation provides a quite simple interface of\n\ +adding and deleting RRs (see the description of the related methods).\n\ +It may be revisited as we gain more experiences.\n\ +\n\ +"; + +const char* const ZoneUpdater_addRRset_doc = "\ +add_rrset(rrset) -> No return value\n\ +\n\ +Add an RRset to a zone via the updater.\n\ +It performs a few basic checks:\n\ +- Whether the RR class is identical to that for the zone to be updated\n\ +- Whether the RRset is not empty, i.e., it has at least one RDATA\n\ +- Whether the RRset is not associated with an RRSIG, i.e., whether\n\ + get_rrsig() on the RRset returns a NULL pointer.\n\ +\n\ +and otherwise does not check any oddity. For example, it doesn't check\n\ +whether the owner name of the specified RRset is a subdomain of the\n\ +zone's origin; it doesn't care whether or not there is already an\n\ +RRset of the same name and RR type in the zone, and if there is,\n\ +whether any of the existing RRs have duplicate RDATA with the added\n\ +ones. If these conditions matter the calling application must examine\n\ +the existing data beforehand using the ZoneFinder returned by\n\ +get_finder().\n\ +\n\ +The validation requirement on the associated RRSIG is temporary. If we\n\ +find it more reasonable and useful to allow adding a pair of RRset and\n\ +its RRSIG RRset as we gain experiences with the interface, we may\n\ +remove this restriction. Until then we explicitly check it to prevent\n\ +accidental misuse.\n\ +\n\ +Conceptually, on successful call to this method, the zone will have\n\ +the specified RRset, and if there is already an RRset of the same name\n\ +and RR type, these two sets will be \"merged\". \"Merged\" means that\n\ +a subsequent call to ZoneFinder.find() for the name and type will\n\ +result in success and the returned RRset will contain all previously\n\ +existing and newly added RDATAs with the TTL being the minimum of the\n\ +two RRsets. The underlying representation of the \"merged\" RRsets may\n\ +vary depending on the characteristic of the underlying data source.\n\ +For example, if it uses a general purpose database that stores each RR\n\ +of the same RRset separately, it may simply be a larger sets of RRs\n\ +based on both the existing and added RRsets; the TTLs of the RRs may\n\ +be different within the database, and there may even be duplicate RRs\n\ +in different database rows. As long as the RRset returned via\n\ +ZoneFinder.find() conforms to the concept of \"merge\", the actual\n\ +internal representation is up to the implementation.\n\ +\n\ +This method must not be called once commit() is performed. 
If it is called\n\ + after commit() the implementation must throw an isc.datasrc.Error\n\ +exception.\n\ +\n\ +Todo: As noted above we may have to revisit the design details as we\n\ +gain experiences:\n\ +\n\ +- we may want to check (and maybe reject) if there is already a\n\ + duplicate RR (that has the same RDATA).\n\ +- we may want to check (and maybe reject) if there is already an RRset\n\ + of the same name and RR type with different TTL\n\ +- we may even want to check if there is already any RRset of the same\n\ + name and RR type.\n\ +- we may want to add an \"options\" parameter that can control the\n\ + above points\n\ +- we may want to have this method return a value containing the\n\ + information on whether there's a duplicate, etc.\n\ +\n\ +Exceptions:\n\ + isc.datasrc.Error Called after commit(), RRset is invalid (see above),\n\ + internal data source error, or wrapper error\n\ +\n\ +Parameters:\n\ + rrset The RRset to be added\n\ +\n\ +"; + +const char* const ZoneUpdater_deleteRRset_doc = "\ +delete_rrset(rrset) -> No return value\n\ +\n\ +Delete an RRset from a zone via the updater.\n\ +\n\ +Like add_rrset(), the detailed semantics and behavior of this method\n\ +may have to be revisited in a future version. The following are based\n\ +on the initial implementation decisions.\n\ +\n\ +- Existing RRs that don't match any of the specified RDATAs will\n\ + remain in the zone.\n\ +- Any RRs of the specified RRset that don't exist in the zone will\n\ + simply be ignored; the implementation of this method is not supposed\n\ + to check that condition.\n\ +- The TTL of the RRset is ignored; matching is only performed by the\n\ + owner name, RR type and RDATA\n\ +\n\ +Ignoring the TTL may not look sensible, but it's based on the\n\ +observation that it will result in a more intuitive result, especially\n\ +when the underlying data source is a general purpose database. See\n\ +also the c++ documentation of DatabaseAccessor::DeleteRecordInZone()\n\ +on this point. It also matches the dynamic update protocol (RFC2136),\n\ +where TTLs are ignored when deleting RRs.\n\ +\n\ +This method performs a limited level of validation on the specified\n\ +RRset:\n\ +- Whether the RR class is identical to that for the zone to be updated\n\ +- Whether the RRset is not empty, i.e., it has at least one RDATA\n\ +- Whether the RRset is not associated with an RRSIG\n\ +\n\ +This method must not be called once commit() is performed.
If it is called\n\ +after commit() the implementation must throw an isc.datasrc.Error\n\ +exception.\n\ +\n\ +Todo: As noted above we may have to revisit the design details as we\n\ +gain experiences:\n\ +\n\ +- we may want to check (and maybe reject) if some or all of the RRs\n\ + for the specified RRset don't exist in the zone\n\ +- we may want to allow an option to \"delete everything\" for\n\ + specified name and/or specified name + RR type.\n\ +- as mentioned above, we may want to include the TTL in matching the\n\ + deleted RRs\n\ +- we may want to add an \"options\" parameter that can control the\n\ + above points\n\ +- we may want to have this method return a value containing the\n\ + information on whether there's any RRs that are specified but don't\n\ + exist, the number of actually deleted RRs, etc.\n\ +\n\ +Exceptions:\n\ + isc.datasrc.Error Called after commit(), RRset is invalid (see above),\n\ + internal data source error\n\ + std.bad_alloc Resource allocation failure\n\ +\n\ +Parameters:\n\ + rrset The RRset to be deleted\n\ +\n\ +"; + +const char* const ZoneUpdater_commit_doc = "\ +commit() -> void\n\ +\n\ +Commit the updates made in the updater to the zone.\n\ +\n\ +This method completes the \"transaction\" started at the creation of\n\ +the updater. After successful completion of this method, the updates\n\ +will be visible outside the scope of the updater. The actual internal\n\ +behavior will differ for different derived classes. For a derived class\n\ +with a general purpose database as a backend, for example, this method\n\ +would perform a \"commit\" statement for the database.\n\ +\n\ +This operation can only be performed at most once. A duplicate call\n\ +must result in an isc.datasrc.Error exception.\n\ +\n\ +Exceptions:\n\ + isc.datasrc.Error Duplicate call of the method, internal data source\n\ + error, or wrapper error\n\ +\n\ +"; +} // unnamed namespace diff --git a/src/lib/python/isc/datasrc/updater_python.cc b/src/lib/python/isc/datasrc/updater_python.cc new file mode 100644 index 0000000000..a9dc581088 --- /dev/null +++ b/src/lib/python/isc/datasrc/updater_python.cc @@ -0,0 +1,318 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE.
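
Pulling the updater documentation above together, a minimal sketch of the intended transaction flow (zone and database names are assumed examples): changes stay private to the updater until commit(), and dropping the updater without committing rolls them back.

    import isc.datasrc
    import isc.dns

    client = isc.datasrc.DataSourceClient("rwtest.sqlite3.copied")

    # Build an RRset to add; add_rrset() rejects empty or RRSIG-attached sets.
    rrset = isc.dns.RRset(isc.dns.Name("new.example.com"),
                          isc.dns.RRClass.IN(), isc.dns.RRType.A(),
                          isc.dns.RRTTL(3600))
    rrset.add_rdata(isc.dns.Rdata(isc.dns.RRType.A(),
                                  isc.dns.RRClass.IN(), "192.0.2.10"))

    updater = client.get_updater(isc.dns.Name("example.com"), True)
    updater.add_rrset(rrset)
    updater.commit()        # the addition is now visible to other finders

    updater = client.get_updater(isc.dns.Name("example.com"), True)
    updater.delete_rrset(rrset)
    updater = None          # dropped without commit(): the deletion is rolled back
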
+ +// Enable this if you use s# variants with PyArg_ParseTuple(), see +// http://docs.python.org/py3k/c-api/arg.html#strings-and-buffers +//#define PY_SSIZE_T_CLEAN + +// Python.h needs to be placed at the head of the program file, see: +// http://docs.python.org/py3k/extending/extending.html#a-simple-example +#include + +#include + +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include "datasrc.h" +#include "updater_python.h" + +#include "updater_inc.cc" +#include "finder_inc.cc" + +using namespace std; +using namespace isc::util::python; +using namespace isc::dns::python; +using namespace isc::datasrc; +using namespace isc::datasrc::python; + +namespace isc_datasrc_internal { +// See finder_python.cc +PyObject* ZoneFinder_helper(ZoneFinder* finder, PyObject* args); +} + +namespace { +// The s_* Class simply covers one instantiation of the object +class s_ZoneUpdater : public PyObject { +public: + s_ZoneUpdater() : cppobj(ZoneUpdaterPtr()) {}; + ZoneUpdaterPtr cppobj; +}; + +// Shortcut type which would be convenient for adding class variables safely. +typedef CPPPyObjectContainer ZoneUpdaterContainer; + +// +// We declare the functions here, the definitions are below +// the type definition of the object, since both can use the other +// + +// General creation and destruction +int +ZoneUpdater_init(s_ZoneUpdater* self, PyObject* args) { + // can't be called directly + PyErr_SetString(PyExc_TypeError, + "ZoneUpdater cannot be constructed directly"); + + return (-1); +} + +void +ZoneUpdater_destroy(s_ZoneUpdater* const self) { + // cppobj is a shared ptr, but to make sure things are not destroyed in + // the wrong order, we reset it here. + self->cppobj.reset(); + Py_TYPE(self)->tp_free(self); +} + +PyObject* +ZoneUpdater_addRRset(PyObject* po_self, PyObject* args) { + s_ZoneUpdater* const self = static_cast(po_self); + PyObject* rrset_obj; + if (PyArg_ParseTuple(args, "O!", &rrset_type, &rrset_obj)) { + try { + self->cppobj->addRRset(PyRRset_ToRRset(rrset_obj)); + Py_RETURN_NONE; + } catch (const DataSourceError& dse) { + PyErr_SetString(getDataSourceException("Error"), dse.what()); + return (NULL); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } + } else { + return (NULL); + } +} + +PyObject* +ZoneUpdater_deleteRRset(PyObject* po_self, PyObject* args) { + s_ZoneUpdater* const self = static_cast(po_self); + PyObject* rrset_obj; + if (PyArg_ParseTuple(args, "O!", &rrset_type, &rrset_obj)) { + try { + self->cppobj->deleteRRset(PyRRset_ToRRset(rrset_obj)); + Py_RETURN_NONE; + } catch (const DataSourceError& dse) { + PyErr_SetString(getDataSourceException("Error"), dse.what()); + return (NULL); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } + } else { + return (NULL); + } +} + +PyObject* +ZoneUpdater_commit(PyObject* po_self, PyObject*) { + s_ZoneUpdater* const self = static_cast(po_self); + try { + self->cppobj->commit(); + Py_RETURN_NONE; + } catch (const DataSourceError& dse) { + PyErr_SetString(getDataSourceException("Error"), dse.what()); + return (NULL); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } +} + +PyObject* +ZoneUpdater_getClass(PyObject* po_self, PyObject*) { + s_ZoneUpdater* self = static_cast(po_self); + try { + return (createRRClassObject(self->cppobj->getFinder().getClass())); + } catch (const 
std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } catch (...) { + PyErr_SetString(getDataSourceException("Error"), + "Unexpected exception"); + return (NULL); + } +} + +PyObject* +ZoneUpdater_getOrigin(PyObject* po_self, PyObject*) { + s_ZoneUpdater* self = static_cast(po_self); + try { + return (createNameObject(self->cppobj->getFinder().getOrigin())); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } catch (...) { + PyErr_SetString(getDataSourceException("Error"), + "Unexpected exception"); + return (NULL); + } +} + +PyObject* +ZoneUpdater_find(PyObject* po_self, PyObject* args) { + s_ZoneUpdater* const self = static_cast(po_self); + return (isc_datasrc_internal::ZoneFinder_helper(&self->cppobj->getFinder(), + args)); +} + +PyObject* +AZoneUpdater_find(PyObject* po_self, PyObject* args) { + s_ZoneUpdater* const self = static_cast(po_self); + PyObject *name; + PyObject *rrtype; + PyObject *target; + int options_int; + if (PyArg_ParseTuple(args, "O!O!OI", &name_type, &name, + &rrtype_type, &rrtype, + &target, &options_int)) { + try { + ZoneFinder::FindOptions options = + static_cast(options_int); + ZoneFinder::FindResult find_result( + self->cppobj->getFinder().find(PyName_ToName(name), + PyRRType_ToRRType(rrtype), + NULL, + options + )); + ZoneFinder::Result r = find_result.code; + isc::dns::ConstRRsetPtr rrsp = find_result.rrset; + if (rrsp) { + // Use N instead of O so the refcount isn't increased twice + return Py_BuildValue("IN", r, createRRsetObject(*rrsp)); + } else { + return Py_BuildValue("IO", r, Py_None); + } + } catch (const DataSourceError& dse) { + PyErr_SetString(getDataSourceException("Error"), dse.what()); + return (NULL); + } catch (const std::exception& exc) { + PyErr_SetString(getDataSourceException("Error"), exc.what()); + return (NULL); + } catch (...) { + PyErr_SetString(getDataSourceException("Error"), + "Unexpected exception"); + return (NULL); + } + } else { + return (NULL); + } + return Py_BuildValue("I", 1); +} + + +// This list contains the actual set of functions we have in +// python. Each entry has +// 1. Python method name +// 2. Our static function here +// 3. Argument type +// 4. Documentation +PyMethodDef ZoneUpdater_methods[] = { + { "add_rrset", reinterpret_cast(ZoneUpdater_addRRset), + METH_VARARGS, ZoneUpdater_addRRset_doc }, + { "delete_rrset", reinterpret_cast(ZoneUpdater_deleteRRset), + METH_VARARGS, ZoneUpdater_deleteRRset_doc }, + { "commit", reinterpret_cast(ZoneUpdater_commit), METH_NOARGS, + ZoneUpdater_commit_doc }, + // Instead of a getFinder, we implement the finder functionality directly + // This is because ZoneFinder is non-copyable, and we should not create + // a ZoneFinder object from a reference only (which is what is returned + // by getFinder(). 
Apart from that + { "get_origin", reinterpret_cast(ZoneUpdater_getOrigin), + METH_NOARGS, ZoneFinder_getOrigin_doc }, + { "get_class", reinterpret_cast(ZoneUpdater_getClass), + METH_NOARGS, ZoneFinder_getClass_doc }, + { "find", reinterpret_cast(ZoneUpdater_find), METH_VARARGS, + ZoneFinder_find_doc }, + { NULL, NULL, 0, NULL } +}; + +} // end of unnamed namespace + +namespace isc { +namespace datasrc { +namespace python { +PyTypeObject zoneupdater_type = { + PyVarObject_HEAD_INIT(NULL, 0) + "datasrc.ZoneUpdater", + sizeof(s_ZoneUpdater), // tp_basicsize + 0, // tp_itemsize + reinterpret_cast(ZoneUpdater_destroy),// tp_dealloc + NULL, // tp_print + NULL, // tp_getattr + NULL, // tp_setattr + NULL, // tp_reserved + NULL, // tp_repr + NULL, // tp_as_number + NULL, // tp_as_sequence + NULL, // tp_as_mapping + NULL, // tp_hash + NULL, // tp_call + NULL, // tp_str + NULL, // tp_getattro + NULL, // tp_setattro + NULL, // tp_as_buffer + Py_TPFLAGS_DEFAULT, // tp_flags + ZoneUpdater_doc, + NULL, // tp_traverse + NULL, // tp_clear + NULL, // tp_richcompare + 0, // tp_weaklistoffset + NULL, // tp_iter + NULL, // tp_iternext + ZoneUpdater_methods, // tp_methods + NULL, // tp_members + NULL, // tp_getset + NULL, // tp_base + NULL, // tp_dict + NULL, // tp_descr_get + NULL, // tp_descr_set + 0, // tp_dictoffset + reinterpret_cast(ZoneUpdater_init),// tp_init + NULL, // tp_alloc + PyType_GenericNew, // tp_new + NULL, // tp_free + NULL, // tp_is_gc + NULL, // tp_bases + NULL, // tp_mro + NULL, // tp_cache + NULL, // tp_subclasses + NULL, // tp_weaklist + NULL, // tp_del + 0 // tp_version_tag +}; + +PyObject* +createZoneUpdaterObject(isc::datasrc::ZoneUpdaterPtr source) { + s_ZoneUpdater* py_zi = static_cast( + zoneupdater_type.tp_alloc(&zoneupdater_type, 0)); + if (py_zi != NULL) { + py_zi->cppobj = source; + } + return (py_zi); +} + +} // namespace python +} // namespace datasrc +} // namespace isc + diff --git a/src/lib/python/isc/datasrc/updater_python.h b/src/lib/python/isc/datasrc/updater_python.h new file mode 100644 index 0000000000..3886aa3065 --- /dev/null +++ b/src/lib/python/isc/datasrc/updater_python.h @@ -0,0 +1,39 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
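
Since the wrapper exposes the finder methods directly on the updater (see the comment in the method table above; no separate get_finder() is provided), uncommitted changes can be checked from within the same transaction. A sketch with the same assumed names as before:

    import isc.datasrc
    import isc.dns

    client = isc.datasrc.DataSourceClient("rwtest.sqlite3.copied")
    result, finder = client.find_zone(isc.dns.Name("example.com"))

    updater = client.get_updater(isc.dns.Name("example.com"), True)
    # updater.find() sees the pending changes; the independent finder
    # obtained above does not until commit() is called.
    code, rrset = updater.find(isc.dns.Name("www.example.com"),
                               isc.dns.RRType.A(),
                               None,
                               finder.FIND_DEFAULT)
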
+ +#ifndef __PYTHON_DATASRC_UPDATER_H +#define __PYTHON_DATASRC_UPDATER_H 1 + +#include + +namespace isc { +namespace datasrc { +class DataSourceClient; + +namespace python { + + +extern PyTypeObject zoneupdater_type; + +PyObject* createZoneUpdaterObject(isc::datasrc::ZoneUpdaterPtr source); + + +} // namespace python +} // namespace datasrc +} // namespace isc +#endif // __PYTHON_DATASRC_UPDATER_H + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/python/isc/dns/Makefile.am b/src/lib/python/isc/dns/Makefile.am new file mode 100644 index 0000000000..161c2a5d09 --- /dev/null +++ b/src/lib/python/isc/dns/Makefile.am @@ -0,0 +1,7 @@ +python_PYTHON = __init__.py + +CLEANDIRS = __pycache__ + +clean-local: + rm -rf $(CLEANDIRS) + diff --git a/src/lib/python/isc/log/Makefile.am b/src/lib/python/isc/log/Makefile.am index b228caf428..5ff2c28a4e 100644 --- a/src/lib/python/isc/log/Makefile.am +++ b/src/lib/python/isc/log/Makefile.am @@ -23,6 +23,15 @@ log_la_LIBADD += $(PYTHON_LIB) # This is not installed, it helps locate the module during tests EXTRA_DIST = __init__.py +# We're going to abuse install-data-local for a pre-install check. +# This is to be considered a short term hack and is expected to be removed +# in a near future version. +install-data-local: + if test -d @pyexecdir@/isc/log; then \ + echo "@pyexecdir@/isc/log is deprecated, and will confuse newer versions. Please (re)move it by hand."; \ + exit 1; \ + fi + pytest: $(SHELL) tests/log_test diff --git a/src/lib/python/isc/log/log.cc b/src/lib/python/isc/log/log.cc index 484151f424..5bb6a94c34 100644 --- a/src/lib/python/isc/log/log.cc +++ b/src/lib/python/isc/log/log.cc @@ -20,6 +20,7 @@ #include #include +#include #include #include @@ -35,7 +36,7 @@ using boost::bind; // (tags/RELEASE_28 115909)) on OSX, where unwinding the stack // segfaults the moment this exception was thrown and caught. // -// Placing it in a named namespace instead of the original +// Placing it in a named namespace instead of the originalRecommend // unnamed namespace appears to solve this, so as a temporary // workaround, we create a local randomly named namespace here // to solve this issue. @@ -184,6 +185,27 @@ init(PyObject*, PyObject* args) { Py_RETURN_NONE; } +// This initialization is for unit tests. It allows message settings to +// be determined by a set of B10_xxx environment variables. (See the +// description of initLogger() for more details.) The function has been named +// resetUnitTestRootLogger() here as being more descriptive and +// trying to avoid confusion. +PyObject* +resetUnitTestRootLogger(PyObject*, PyObject*) { + try { + isc::log::resetUnitTestRootLogger(); + } + catch (const std::exception& e) { + PyErr_SetString(PyExc_RuntimeError, e.what()); + return (NULL); + } + catch (...) { + PyErr_SetString(PyExc_RuntimeError, "Unknown C++ exception"); + return (NULL); + } + Py_RETURN_NONE; +} + PyObject* logConfigUpdate(PyObject*, PyObject* args) { // we have no wrappers for ElementPtr and ConfigData, @@ -246,6 +268,12 @@ PyMethodDef methods[] = { "logging severity (one of 'DEBUG', 'INFO', 'WARN', 'ERROR' or " "'FATAL'), a debug level (integer in the range 0-99) and a file name " "of a dictionary with message text translations."}, + {"resetUnitTestRootLogger", resetUnitTestRootLogger, METH_VARARGS, + "Resets the configuration of the root logger to that set by the " + "B10_XXX environment variables. 
It is aimed at unit tests, where " + "the logging is initialized by the code under test; called before " + "the unit test starts, this function resets the logging configuration " + "to that in use for the C++ unit tests."}, {"log_config_update", logConfigUpdate, METH_VARARGS, "Update logger settings. This method is automatically used when " "ModuleCCSession is initialized with handle_logging_config set " diff --git a/src/lib/python/isc/log/tests/Makefile.am b/src/lib/python/isc/log/tests/Makefile.am index 6bb67de0d3..170eee6cb0 100644 --- a/src/lib/python/isc/log/tests/Makefile.am +++ b/src/lib/python/isc/log/tests/Makefile.am @@ -1,28 +1,40 @@ PYCOVERAGE_RUN = @PYCOVERAGE_RUN@ -PYTESTS = log_test.py -EXTRA_DIST = $(PYTESTS) log_console.py.in console.out check_output.sh +PYTESTS_GEN = log_console.py +PYTESTS_NOGEN = log_test.py +noinst_SCRIPTS = $(PYTESTS_GEN) +EXTRA_DIST = console.out check_output.sh $(PYTESTS_NOGEN) # If necessary (rare cases), explicitly specify paths to dynamic libraries # required by loadable python modules. LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS +# We need to run the cycle twice, because once the files are in builddir, once in srcdir check-local: + chmod +x $(abs_builddir)/log_console.py $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/lib/python/isc/log \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/lib/python/isc/log \ $(abs_srcdir)/check_output.sh $(abs_builddir)/log_console.py $(abs_srcdir)/console.out if ENABLE_PYTHON_COVERAGE touch $(abs_top_srcdir)/.coverage rm -f .coverage ${LN_S} $(abs_top_srcdir)/.coverage .coverage endif - for pytest in $(PYTESTS) ; do \ + for pytest in $(PYTESTS_NOGEN) ; do \ echo Running test: $$pytest ; \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/lib/python/isc/log:$(abs_top_builddir)/src/lib/log/python/.libs \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/lib/python/isc/log:$(abs_top_builddir)/src/lib/log/python/.libs \ B10_TEST_PLUGIN_DIR=$(abs_top_srcdir)/src/bin/cfgmgr/plugins \ $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ + done ; \ + for pytest in $(PYTESTS_GEN) ; do \ + echo Running test: $$pytest ; \ + chmod +x $(abs_builddir)/$$pytest ; \ + $(LIBRARY_PATH_PLACEHOLDER) \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/lib/python/isc/log:$(abs_top_builddir)/src/lib/log/python/.libs \ + B10_TEST_PLUGIN_DIR=$(abs_top_srcdir)/src/bin/cfgmgr/plugins \ + $(PYCOVERAGE_RUN) $(abs_builddir)/$$pytest || exit ; \ done diff --git a/src/lib/python/isc/log_messages/Makefile.am 
b/src/lib/python/isc/log_messages/Makefile.am new file mode 100644 index 0000000000..30f8374bf6 --- /dev/null +++ b/src/lib/python/isc/log_messages/Makefile.am @@ -0,0 +1,32 @@ +SUBDIRS = work + +EXTRA_DIST = __init__.py +EXTRA_DIST += bind10_messages.py +EXTRA_DIST += cmdctl_messages.py +EXTRA_DIST += stats_messages.py +EXTRA_DIST += stats_httpd_messages.py +EXTRA_DIST += xfrin_messages.py +EXTRA_DIST += xfrout_messages.py +EXTRA_DIST += zonemgr_messages.py +EXTRA_DIST += cfgmgr_messages.py +EXTRA_DIST += config_messages.py +EXTRA_DIST += notify_out_messages.py +EXTRA_DIST += libxfrin_messages.py + +CLEANFILES = __init__.pyc +CLEANFILES += bind10_messages.pyc +CLEANFILES += cmdctl_messages.pyc +CLEANFILES += stats_messages.pyc +CLEANFILES += stats_httpd_messages.pyc +CLEANFILES += xfrin_messages.pyc +CLEANFILES += xfrout_messages.pyc +CLEANFILES += zonemgr_messages.pyc +CLEANFILES += cfgmgr_messages.pyc +CLEANFILES += config_messages.pyc +CLEANFILES += notify_out_messages.pyc +CLEANFILES += libxfrin_messages.pyc + +CLEANDIRS = __pycache__ + +clean-local: + rm -rf $(CLEANDIRS) diff --git a/src/lib/python/isc/log_messages/README b/src/lib/python/isc/log_messages/README new file mode 100644 index 0000000000..c96f78c730 --- /dev/null +++ b/src/lib/python/isc/log_messages/README @@ -0,0 +1,68 @@ +This is a placeholder package for logging messages of various modules +in the form of python scripts. This package is expected to be installed +somewhere like /python3.x/site-packages/isc/log_messages +and each message script is expected to be imported as +"isc.log_messages.some_module_messages". + +We also need to allow in-source test code to get access to the message +scripts in the same manner. That's why the package is stored in the +directory that shares the same trailing part as the install directory, +i.e., isc/log_messages. + +Furthermore, we need to support a build mode using a separate build +tree (such as in the case with 'make distcheck'). In that case if an +application (via a test script) imports "isc.log_messages.xxx", it +would try to import the module under the source tree, where the +generated message script doesn't exist. So, in the source directory +(i.e., here) we provide dummy scripts that subsequently import the +same name of module under the "work" sub-package. The caller +application is assumed to have /src/lib/python/isc/log_messages +in its module search path (this is done by including +$(COMMON_PYTHON_PATH) in the PYTHONPATH environment variable), +which ensures the right directory is chosen. + +A python module or program that defines its own log messages needs to +make sure that the setup described above is implemented. It's a +complicated process, but can generally be done by following a common +pattern: + +1. Create the dummy script (see above) for the module and update + Makefile.am in this directory accordingly. See (and use) + a helper shell script named gen-forwarder.sh. +2. Update Makefil.am of the module that defines the log message. The + following are a sample snippet for Makefile.am for a module named + "mymodule" (which is supposed to be generated from a file + "mymodule_messages.mes"). In many cases it should work simply by + replacing 'mymodule' with the actual module name. 
+ +==================== begin Makefile.am additions =================== +nodist_pylogmessage_PYTHON = $(PYTHON_LOGMSGPKG_DIR)/work/mymodule_messages.py +pylogmessagedir = $(pyexecdir)/isc/log_messages/ + +CLEANFILES = $(PYTHON_LOGMSGPKG_DIR)/work/mymodule_messages.py +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/mymodule_messages.pyc + +EXTRA_DIST = mymodule_messages.mes + +$(PYTHON_LOGMSGPKG_DIR)/work/mymodule_messages.py : mymodule_messages.mes + $(top_builddir)/src/lib/log/compiler/message \ + -d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/mymodule_messages.mes + +# This rule ensures mymodule_messages.py is (re)generated as a result of +# 'make'. If there's no other appropriate target, specify +# mymodule_messages.py in BUILT_SOURCES. +mymodule: $(PYTHON_LOGMSGPKG_DIR)/work/mymodule_messages.py +===================== end Makefile.am additions ==================== + +Notes: +- "nodist_" prefix is important. Without this, 'make distcheck' tries + to make _messages.py before actually starting the main build, which + would fail because the message compiler isn't built yet. +- "pylogmessage" is a prefix for python scripts that define log + messages and are expected to be installed in the common isc/log_messages + directory. It's intentionally named differently from the common + "python" prefix (as in python_PYTHON), because the latter may be + used for other scripts in the same Makefile.am file. +- $(PYTHON_LOGMSGPKG_DIR) should be set to point to this directory (or + the corresponding build directory if it's different) by the + configure script. diff --git a/src/lib/python/isc/log_messages/__init__.py b/src/lib/python/isc/log_messages/__init__.py new file mode 100644 index 0000000000..d222b8c0c4 --- /dev/null +++ b/src/lib/python/isc/log_messages/__init__.py @@ -0,0 +1,3 @@ +""" +This is an in-source forwarder package redirecting to work/* scripts. +""" diff --git a/src/lib/python/isc/log_messages/bind10_messages.py b/src/lib/python/isc/log_messages/bind10_messages.py new file mode 100644 index 0000000000..68ce94ca6a --- /dev/null +++ b/src/lib/python/isc/log_messages/bind10_messages.py @@ -0,0 +1 @@ +from work.bind10_messages import * diff --git a/src/lib/python/isc/log_messages/cfgmgr_messages.py b/src/lib/python/isc/log_messages/cfgmgr_messages.py new file mode 100644 index 0000000000..55571003ba --- /dev/null +++ b/src/lib/python/isc/log_messages/cfgmgr_messages.py @@ -0,0 +1 @@ +from work.cfgmgr_messages import * diff --git a/src/lib/python/isc/log_messages/cmdctl_messages.py b/src/lib/python/isc/log_messages/cmdctl_messages.py new file mode 100644 index 0000000000..7283d5a15e --- /dev/null +++ b/src/lib/python/isc/log_messages/cmdctl_messages.py @@ -0,0 +1 @@ +from work.cmdctl_messages import * diff --git a/src/lib/python/isc/log_messages/config_messages.py b/src/lib/python/isc/log_messages/config_messages.py new file mode 100644 index 0000000000..c5579752f2 --- /dev/null +++ b/src/lib/python/isc/log_messages/config_messages.py @@ -0,0 +1 @@ +from work.config_messages import * diff --git a/src/lib/python/isc/log_messages/gen-forwarder.sh b/src/lib/python/isc/log_messages/gen-forwarder.sh new file mode 100755 index 0000000000..84c2450159 --- /dev/null +++ b/src/lib/python/isc/log_messages/gen-forwarder.sh @@ -0,0 +1,14 @@ +#!/bin/sh + +MODULE_NAME=$1 +if test -z $MODULE_NAME; then + echo 'Usage: gen-forwarder.sh module_name' + exit 1 +fi + +echo "from work.${MODULE_NAME}_messages import *" > ${MODULE_NAME}_messages.py +echo "Forwarder python script is generated. 
Make sure to perform:" +echo "git add ${MODULE_NAME}_messages.py" +echo "and add the following to Makefile.am:" +echo "EXTRA_DIST += ${MODULE_NAME}_messages.py" +echo "CLEANFILES += ${MODULE_NAME}_messages.pyc" diff --git a/src/lib/python/isc/log_messages/libxfrin_messages.py b/src/lib/python/isc/log_messages/libxfrin_messages.py new file mode 100644 index 0000000000..74da329c68 --- /dev/null +++ b/src/lib/python/isc/log_messages/libxfrin_messages.py @@ -0,0 +1 @@ +from work.libxfrin_messages import * diff --git a/src/lib/python/isc/log_messages/notify_out_messages.py b/src/lib/python/isc/log_messages/notify_out_messages.py new file mode 100644 index 0000000000..6aa37ea5ab --- /dev/null +++ b/src/lib/python/isc/log_messages/notify_out_messages.py @@ -0,0 +1 @@ +from work.notify_out_messages import * diff --git a/src/lib/python/isc/log_messages/stats_httpd_messages.py b/src/lib/python/isc/log_messages/stats_httpd_messages.py new file mode 100644 index 0000000000..7782c34a8d --- /dev/null +++ b/src/lib/python/isc/log_messages/stats_httpd_messages.py @@ -0,0 +1 @@ +from work.stats_httpd_messages import * diff --git a/src/lib/python/isc/log_messages/stats_messages.py b/src/lib/python/isc/log_messages/stats_messages.py new file mode 100644 index 0000000000..1324cfcf0f --- /dev/null +++ b/src/lib/python/isc/log_messages/stats_messages.py @@ -0,0 +1 @@ +from work.stats_messages import * diff --git a/src/lib/python/isc/log_messages/work/Makefile.am b/src/lib/python/isc/log_messages/work/Makefile.am new file mode 100644 index 0000000000..9bc5e0ff35 --- /dev/null +++ b/src/lib/python/isc/log_messages/work/Makefile.am @@ -0,0 +1,12 @@ +# .py is generated in the builddir by the configure script so that test +# scripts can refer to it when a separate builddir is used. + +python_PYTHON = __init__.py + +pythondir = $(pyexecdir)/isc/log_messages/ + +CLEANFILES = __init__.pyc +CLEANDIRS = __pycache__ + +clean-local: + rm -rf $(CLEANDIRS) diff --git a/src/lib/python/isc/log_messages/work/__init__.py.in b/src/lib/python/isc/log_messages/work/__init__.py.in new file mode 100644 index 0000000000..991f10acd4 --- /dev/null +++ b/src/lib/python/isc/log_messages/work/__init__.py.in @@ -0,0 +1,3 @@ +""" +This package is a placeholder for python scripts of log messages. +""" diff --git a/src/lib/python/isc/log_messages/xfrin_messages.py b/src/lib/python/isc/log_messages/xfrin_messages.py new file mode 100644 index 0000000000..b412519a3d --- /dev/null +++ b/src/lib/python/isc/log_messages/xfrin_messages.py @@ -0,0 +1 @@ +from work.xfrin_messages import * diff --git a/src/lib/python/isc/log_messages/xfrout_messages.py b/src/lib/python/isc/log_messages/xfrout_messages.py new file mode 100644 index 0000000000..2093d5caca --- /dev/null +++ b/src/lib/python/isc/log_messages/xfrout_messages.py @@ -0,0 +1 @@ +from work.xfrout_messages import * diff --git a/src/lib/python/isc/log_messages/zonemgr_messages.py b/src/lib/python/isc/log_messages/zonemgr_messages.py new file mode 100644 index 0000000000..b3afe9ccdf --- /dev/null +++ b/src/lib/python/isc/log_messages/zonemgr_messages.py @@ -0,0 +1 @@ +from work.zonemgr_messages import * diff --git a/src/lib/python/isc/net/tests/Makefile.am b/src/lib/python/isc/net/tests/Makefile.am index 3a04f17b40..dd949464ea 100644 --- a/src/lib/python/isc/net/tests/Makefile.am +++ b/src/lib/python/isc/net/tests/Makefile.am @@ -6,7 +6,7 @@ EXTRA_DIST = $(PYTESTS) # required by loadable python modules. 
LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/util/io/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS @@ -19,6 +19,6 @@ endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/lib/dns/python/.libs \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/lib/dns/python/.libs \ $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ done diff --git a/src/lib/python/isc/notify/Makefile.am b/src/lib/python/isc/notify/Makefile.am index 4081a1703d..c247ab8cd7 100644 --- a/src/lib/python/isc/notify/Makefile.am +++ b/src/lib/python/isc/notify/Makefile.am @@ -1,10 +1,22 @@ SUBDIRS = . tests python_PYTHON = __init__.py notify_out.py - pythondir = $(pyexecdir)/isc/notify +BUILT_SOURCES = $(PYTHON_LOGMSGPKG_DIR)/work/notify_out_messages.py +nodist_pylogmessage_PYTHON = $(PYTHON_LOGMSGPKG_DIR)/work/notify_out_messages.py +pylogmessagedir = $(pyexecdir)/isc/log_messages/ + +EXTRA_DIST = notify_out_messages.mes + +CLEANFILES = $(PYTHON_LOGMSGPKG_DIR)/work/notify_out_messages.py +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/notify_out_messages.pyc + CLEANDIRS = __pycache__ +$(PYTHON_LOGMSGPKG_DIR)/work/notify_out_messages.py : notify_out_messages.mes + $(top_builddir)/src/lib/log/compiler/message \ + -d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/notify_out_messages.mes + clean-local: rm -rf $(CLEANDIRS) diff --git a/src/lib/python/isc/notify/notify_out.py b/src/lib/python/isc/notify/notify_out.py index 4b254633f3..6b91c87d55 100644 --- a/src/lib/python/isc/notify/notify_out.py +++ b/src/lib/python/isc/notify/notify_out.py @@ -23,11 +23,15 @@ import errno from isc.datasrc import sqlite3_ds from isc.net import addr import isc -try: - from pydnspp import * -except ImportError as e: - # C++ loadable module may not be installed; - sys.stderr.write('[b10-xfrout] failed to import DNS or XFR module: %s\n' % str(e)) +from isc.log_messages.notify_out_messages import * + +logger = isc.log.Logger("notify_out") + +# there used to be a printed message if this import failed, but if +# we can't import we should not start anyway, and logging an error +# is a bad idea since the logging system is most likely not +# initialized yet. 
see trac ticket #1103 +from pydnspp import * ZONE_NEW_DATA_READY_CMD = 'zone_new_data_ready' _MAX_NOTIFY_NUM = 30 @@ -46,9 +50,6 @@ _BAD_QR = 4 _BAD_REPLY_PACKET = 5 SOCK_DATA = b's' -def addr_to_str(addr): - return '%s#%s' % (addr[0], addr[1]) - class ZoneNotifyInfo: '''This class keeps track of notify-out information for one zone.''' @@ -105,11 +106,10 @@ class NotifyOut: notify message to its slaves). notify service can be started by calling dispatcher(), and it can be stoped by calling shutdown() in another thread. ''' - def __init__(self, datasrc_file, log=None, verbose=True): + def __init__(self, datasrc_file, verbose=True): self._notify_infos = {} # key is (zone_name, zone_class) self._waiting_zones = [] self._notifying_zones = [] - self._log = log self._serving = False self._read_sock, self._write_sock = socket.socketpair() self._read_sock.setblocking(False) @@ -362,18 +362,19 @@ class NotifyOut: tgt = zone_notify_info.get_current_notify_target() if event_type == _EVENT_READ: reply = self._get_notify_reply(zone_notify_info.get_socket(), tgt) - if reply: - if self._handle_notify_reply(zone_notify_info, reply): + if reply is not None: + if self._handle_notify_reply(zone_notify_info, reply, tgt): self._notify_next_target(zone_notify_info) elif event_type == _EVENT_TIMEOUT and zone_notify_info.notify_try_num > 0: - self._log_msg('info', 'notify retry to %s' % addr_to_str(tgt)) + logger.info(NOTIFY_OUT_TIMEOUT, tgt[0], tgt[1]) tgt = zone_notify_info.get_current_notify_target() if tgt: zone_notify_info.notify_try_num += 1 if zone_notify_info.notify_try_num > _MAX_NOTIFY_TRY_NUM: - self._log_msg('info', 'notify to %s: retried exceeded' % addr_to_str(tgt)) + logger.warn(NOTIFY_OUT_RETRY_EXCEEDED, tgt[0], tgt[1], + _MAX_NOTIFY_TRY_NUM) self._notify_next_target(zone_notify_info) else: # set exponential backoff according rfc1996 section 3.6 @@ -412,10 +413,15 @@ class NotifyOut: try: sock = zone_notify_info.create_socket(addrinfo[0]) sock.sendto(render.get_data(), 0, addrinfo) - self._log_msg('info', 'sending notify to %s' % addr_to_str(addrinfo)) + logger.info(NOTIFY_OUT_SENDING_NOTIFY, addrinfo[0], + addrinfo[1]) except (socket.error, addr.InvalidAddress) as err: - self._log_msg('error', 'send notify to %s failed: %s' % - (addr_to_str(addrinfo), str(err))) + logger.error(NOTIFY_OUT_SOCKET_ERROR, addrinfo[0], + addrinfo[1], err) + return False + except addr.InvalidAddress as iae: + logger.error(NOTIFY_OUT_INVALID_ADDRESS, addrinfo[0], + addrinfo[1], iae) return False return True @@ -446,34 +452,38 @@ class NotifyOut: msg.add_rrset(Message.SECTION_ANSWER, rrset_soa) return msg, qid - def _handle_notify_reply(self, zone_notify_info, msg_data): + def _handle_notify_reply(self, zone_notify_info, msg_data, from_addr): '''Parse the notify reply message. - TODO, the error message should be refined properly. 
rcode will not checked here, If we get the response from the slave, it means the slaves has got the notify.''' msg = Message(Message.PARSE) try: - errstr = 'notify reply error: ' msg.from_wire(msg_data) if not msg.get_header_flag(Message.HEADERFLAG_QR): - self._log_msg('error', errstr + 'bad flags') + logger.warn(NOTIFY_OUT_REPLY_QR_NOT_SET, from_addr[0], + from_addr[1]) return _BAD_QR if msg.get_qid() != zone_notify_info.notify_msg_id: - self._log_msg('error', errstr + 'bad query ID') + logger.warn(NOTIFY_OUT_REPLY_BAD_QID, from_addr[0], + from_addr[1], msg.get_qid(), + zone_notify_info.notify_msg_id) return _BAD_QUERY_ID question = msg.get_question()[0] if question.get_name() != Name(zone_notify_info.zone_name): - self._log_msg('error', errstr + 'bad query name') + logger.warn(NOTIFY_OUT_REPLY_BAD_QUERY_NAME, from_addr[0], + from_addr[1], question.get_name().to_text(), + Name(zone_notify_info.zone_name).to_text()) return _BAD_QUERY_NAME if msg.get_opcode() != Opcode.NOTIFY(): - self._log_msg('error', errstr + 'bad opcode') + logger.warn(NOTIFY_OUT_REPLY_BAD_OPCODE, from_addr[0], + from_addr[1], msg.get_opcode().to_text()) return _BAD_OPCODE except Exception as err: # We don't care what exception, just report it? - self._log_msg('error', errstr + str(err)) + logger.error(NOTIFY_OUT_REPLY_UNCAUGHT_EXCEPTION, err) return _BAD_REPLY_PACKET return _REPLY_OK @@ -481,14 +491,9 @@ class NotifyOut: def _get_notify_reply(self, sock, tgt_addr): try: msg, addr = sock.recvfrom(512) - except socket.error: - self._log_msg('error', "notify to %s failed: can't read notify reply" % addr_to_str(tgt_addr)) + except socket.error as err: + logger.error(NOTIFY_OUT_SOCKET_RECV_ERROR, tgt_addr[0], + tgt_addr[1], err) return None return msg - - - def _log_msg(self, level, msg): - if self._log: - self._log.log_message(level, msg) - diff --git a/src/lib/python/isc/notify/notify_out_messages.mes b/src/lib/python/isc/notify/notify_out_messages.mes new file mode 100644 index 0000000000..570f51e0e9 --- /dev/null +++ b/src/lib/python/isc/notify/notify_out_messages.mes @@ -0,0 +1,83 @@ +# Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +# +# Permission to use, copy, modify, and/or distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +# AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +# PERFORMANCE OF THIS SOFTWARE. + +# No namespace declaration - these constants go in the global namespace +# of the notify_out_messages python module. + +% NOTIFY_OUT_INVALID_ADDRESS invalid address %1#%2: %3 +The notify_out library tried to send a notify message to the given +address, but it appears to be an invalid address. The configuration +for secondary nameservers might contain a typographic error, or a +different BIND 10 module has forgotten to validate its data before +sending this module a notify command. As such, this should normally +not happen, and points to an oversight in a different module. 
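As a concrete illustration of how these entries are consumed (mirroring the calls added to notify_out.py above, not a new API): the message compiler turns each % entry into a constant in the isc.log_messages.notify_out_messages module, and the %1/%2/%3 placeholders are filled from the arguments of the logger call.

    import isc.log
    from isc.log_messages.notify_out_messages import *

    # The logging system is assumed to be initialized already, for example
    # via isc.log.init("bind10") as the unit tests do.
    logger = isc.log.Logger("notify_out")

    addr, port = "192.0.2.1", 53      # example values only
    logger.error(NOTIFY_OUT_INVALID_ADDRESS, addr, port, "not a usable address")
    # produces a log line like: invalid address 192.0.2.1#53: not a usable address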
+ +% NOTIFY_OUT_REPLY_BAD_OPCODE bad opcode in notify reply from %1#%2: %3 +The notify_out library sent a notify message to the nameserver at +the given address, but the response did not have the opcode set to +NOTIFY. The opcode in the response is printed. Since there was a +response, no more notifies will be sent to this server for this +notification event. + +% NOTIFY_OUT_REPLY_BAD_QID bad QID in notify reply from %1#%2: got %3, should be %4 +The notify_out library sent a notify message to the nameserver at +the given address, but the query id in the response does not match +the one we sent. Since there was a response, no more notifies will +be sent to this server for this notification event. + +% NOTIFY_OUT_REPLY_BAD_QUERY_NAME bad query name in notify reply from %1#%2: got %3, should be %4 +The notify_out library sent a notify message to the nameserver at +the given address, but the query name in the response does not match +the one we sent. Since there was a response, no more notifies will +be sent to this server for this notification event. + +% NOTIFY_OUT_REPLY_QR_NOT_SET QR flags set to 0 in reply to notify from %1#%2 +The notify_out library sent a notify message to the namesever at the +given address, but the reply did not have the QR bit set to one. +Since there was a response, no more notifies will be sent to this +server for this notification event. + +% NOTIFY_OUT_RETRY_EXCEEDED notify to %1#%2: number of retries (%3) exceeded +The maximum number of retries for the notify target has been exceeded. +Either the address of the secondary nameserver is wrong, or it is not +responding. + +% NOTIFY_OUT_SENDING_NOTIFY sending notify to %1#%2 +A notify message is sent to the secondary nameserver at the given +address. + +% NOTIFY_OUT_SOCKET_ERROR socket error sending notify to %1#%2: %3 +There was a network error while trying to send a notify message to +the given address. The address might be unreachable. The socket +error is printed and should provide more information. + +% NOTIFY_OUT_SOCKET_RECV_ERROR socket error reading notify reply from %1#%2: %3 +There was a network error while trying to read a notify reply +message from the given address. The socket error is printed and should +provide more information. + +% NOTIFY_OUT_TIMEOUT retry notify to %1#%2 +The notify message to the given address (noted as address#port) has +timed out, and the message will be resent until the max retry limit +is reached. + +% NOTIFY_OUT_REPLY_UNCAUGHT_EXCEPTION uncaught exception: %1 +There was an uncaught exception in the handling of a notify reply +message, either in the message parser, or while trying to extract data +from the parsed message. The error is printed, and notify_out will +treat the response as a bad message, but this does point to a +programming error, since all exceptions should have been caught +explicitly. Please file a bug report. Since there was a response, +no more notifies will be sent to this server for this notification +event. diff --git a/src/lib/python/isc/notify/tests/Makefile.am b/src/lib/python/isc/notify/tests/Makefile.am index 1427d93c12..00c2eee95a 100644 --- a/src/lib/python/isc/notify/tests/Makefile.am +++ b/src/lib/python/isc/notify/tests/Makefile.am @@ -6,7 +6,7 @@ EXTRA_DIST = $(PYTESTS) # required by loadable python modules. 
LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS @@ -18,7 +18,7 @@ if ENABLE_PYTHON_COVERAGE endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ - env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/lib/dns/python/.libs \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/lib/dns/python/.libs \ $(LIBRARY_PATH_PLACEHOLDER) \ $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ done diff --git a/src/lib/python/isc/notify/tests/notify_out_test.py b/src/lib/python/isc/notify/tests/notify_out_test.py index 0eb77a34ef..83f6d1ae1e 100644 --- a/src/lib/python/isc/notify/tests/notify_out_test.py +++ b/src/lib/python/isc/notify/tests/notify_out_test.py @@ -21,6 +21,7 @@ import time import socket from isc.datasrc import sqlite3_ds from isc.notify import notify_out, SOCK_DATA +import isc.log # our fake socket, where we can read and insert messages class MockSocket(): @@ -79,7 +80,6 @@ class TestZoneNotifyInfo(unittest.TestCase): self.info.prepare_notify_out() self.assertEqual(self.info.get_current_notify_target(), ('127.0.0.1', 53)) - self.assertEqual('127.0.0.1#53', notify_out.addr_to_str(('127.0.0.1', 53))) self.info.set_next_notify_target() self.assertEqual(self.info.get_current_notify_target(), ('1.1.1.1', 5353)) self.info.set_next_notify_target() @@ -223,29 +223,30 @@ class TestNotifyOut(unittest.TestCase): self.assertEqual(0, len(self._notify._waiting_zones)) def test_handle_notify_reply(self): - self.assertEqual(notify_out._BAD_REPLY_PACKET, self._notify._handle_notify_reply(None, b'badmsg')) + fake_address = ('192.0.2.1', 53) + self.assertEqual(notify_out._BAD_REPLY_PACKET, self._notify._handle_notify_reply(None, b'badmsg', fake_address)) example_com_info = self._notify._notify_infos[('example.com.', 'IN')] example_com_info.notify_msg_id = 0X2f18 # test with right notify reply message data = b'\x2f\x18\xa0\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example\03com\x00\x00\x06\x00\x01' - self.assertEqual(notify_out._REPLY_OK, self._notify._handle_notify_reply(example_com_info, data)) + self.assertEqual(notify_out._REPLY_OK, self._notify._handle_notify_reply(example_com_info, data, fake_address)) # test with unright query id data = b'\x2e\x18\xa0\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example\03com\x00\x00\x06\x00\x01' - self.assertEqual(notify_out._BAD_QUERY_ID, self._notify._handle_notify_reply(example_com_info, data)) + self.assertEqual(notify_out._BAD_QUERY_ID, self._notify._handle_notify_reply(example_com_info, data, fake_address)) # test with unright query name data = 
b'\x2f\x18\xa0\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example\03net\x00\x00\x06\x00\x01' - self.assertEqual(notify_out._BAD_QUERY_NAME, self._notify._handle_notify_reply(example_com_info, data)) + self.assertEqual(notify_out._BAD_QUERY_NAME, self._notify._handle_notify_reply(example_com_info, data, fake_address)) # test with unright opcode data = b'\x2f\x18\x80\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example\03com\x00\x00\x06\x00\x01' - self.assertEqual(notify_out._BAD_OPCODE, self._notify._handle_notify_reply(example_com_info, data)) + self.assertEqual(notify_out._BAD_OPCODE, self._notify._handle_notify_reply(example_com_info, data, fake_address)) # test with unright qr data = b'\x2f\x18\x10\x10\x00\x01\x00\x00\x00\x00\x00\x00\x07example\03com\x00\x00\x06\x00\x01' - self.assertEqual(notify_out._BAD_QR, self._notify._handle_notify_reply(example_com_info, data)) + self.assertEqual(notify_out._BAD_QR, self._notify._handle_notify_reply(example_com_info, data, fake_address)) def test_send_notify_message_udp_ipv4(self): example_com_info = self._notify._notify_infos[('example.net.', 'IN')] @@ -300,6 +301,15 @@ class TestNotifyOut(unittest.TestCase): self._notify._zone_notify_handler(example_net_info, notify_out._EVENT_NONE) self.assertNotEqual(cur_tgt, example_net_info._notify_current) + cur_tgt = example_net_info._notify_current + example_net_info.create_socket('127.0.0.1') + # dns message, will result in bad_qid, but what we are testing + # here is whether handle_notify_reply is called correctly + example_net_info._sock.remote_end().send(b'\x2f\x18\xa0\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example\03com\x00\x00\x06\x00\x01') + self._notify._zone_notify_handler(example_net_info, notify_out._EVENT_READ) + self.assertNotEqual(cur_tgt, example_net_info._notify_current) + + def _example_net_data_reader(self): zone_data = [ ('example.net.', '1000', 'IN', 'SOA', 'a.dns.example.net. mail.example.net. 1 1 1 1 1'), @@ -406,6 +416,7 @@ class TestNotifyOut(unittest.TestCase): self.assertFalse(thread.is_alive()) if __name__== "__main__": + isc.log.init("bind10") unittest.main() diff --git a/src/lib/python/isc/util/tests/Makefile.am b/src/lib/python/isc/util/tests/Makefile.am index c3d35c2ac9..3b882b4c02 100644 --- a/src/lib/python/isc/util/tests/Makefile.am +++ b/src/lib/python/isc/util/tests/Makefile.am @@ -6,7 +6,7 @@ EXTRA_DIST = $(PYTESTS) # required by loadable python modules. 
LIBRARY_PATH_PLACEHOLDER = if SET_ENV_LIBRARY_PATH -LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$$$(ENV_LIBRARY_PATH) +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) endif # test using command-line arguments, so use check-local target instead of TESTS @@ -19,6 +19,6 @@ endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/lib/dns/python/.libs \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/lib/dns/python/.libs \ $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ done diff --git a/src/lib/python/isc/xfrin/Makefile.am b/src/lib/python/isc/xfrin/Makefile.am new file mode 100644 index 0000000000..5804de6cf4 --- /dev/null +++ b/src/lib/python/isc/xfrin/Makefile.am @@ -0,0 +1,23 @@ +SUBDIRS = . tests + +python_PYTHON = __init__.py diff.py +BUILT_SOURCES = $(PYTHON_LOGMSGPKG_DIR)/work/libxfrin_messages.py +nodist_pylogmessage_PYTHON = $(PYTHON_LOGMSGPKG_DIR)/work/libxfrin_messages.py +pylogmessagedir = $(pyexecdir)/isc/log_messages/ + +EXTRA_DIST = libxfrin_messages.mes + +CLEANFILES = $(PYTHON_LOGMSGPKG_DIR)/work/libxfrin_messages.py +CLEANFILES += $(PYTHON_LOGMSGPKG_DIR)/work/libxfrin_messages.pyc + +# Define rule to build logging source files from message file +$(PYTHON_LOGMSGPKG_DIR)/work/libxfrin_messages.py: libxfrin_messages.mes + $(top_builddir)/src/lib/log/compiler/message \ + -d $(PYTHON_LOGMSGPKG_DIR)/work -p $(srcdir)/libxfrin_messages.mes + +pythondir = $(pyexecdir)/isc/xfrin + +CLEANDIRS = __pycache__ + +clean-local: + rm -rf $(CLEANDIRS) diff --git a/src/bin/stats/tests/isc/util/__init__.py b/src/lib/python/isc/xfrin/__init__.py similarity index 100% rename from src/bin/stats/tests/isc/util/__init__.py rename to src/lib/python/isc/xfrin/__init__.py diff --git a/src/lib/python/isc/xfrin/diff.py b/src/lib/python/isc/xfrin/diff.py new file mode 100644 index 0000000000..b6d824468f --- /dev/null +++ b/src/lib/python/isc/xfrin/diff.py @@ -0,0 +1,235 @@ +# Copyright (C) 2011 Internet Systems Consortium. +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
+ +""" +This helps the XFR in process with accumulating parts of diff and applying +it to the datasource. + +The name of the module is not yet fully decided. We might want to move it +under isc.datasrc or somewhere else, because we might want to reuse it with +future DDNS process. But until then, it lives here. +""" + +import isc.dns +import isc.log +from isc.log_messages.libxfrin_messages import * + +class NoSuchZone(Exception): + """ + This is raised if a diff for non-existant zone is being created. + """ + pass + +""" +This is the amount of changes we accumulate before calling Diff.apply +automatically. + +The number 100 is just taken from BIND 9. We don't know the rationale +for exactly this amount, but we think it is just some randomly chosen +number. +""" +# If changing this, modify the tests accordingly as well. +DIFF_APPLY_TRESHOLD = 100 + +logger = isc.log.Logger('libxfrin') + +class Diff: + """ + The class represents a diff against current state of datasource on + one zone. The usual way of working with it is creating it, then putting + bunch of changes in and commiting at the end. + + If you change your mind, you can just stop using the object without + really commiting it. In that case no changes will happen in the data + sounce. + + The class works as a kind of a buffer as well, it does not direct + the changes to underlying data source right away, but keeps them for + a while. + """ + def __init__(self, ds_client, zone): + """ + Initializes the diff to a ready state. It checks the zone exists + in the datasource and if not, NoSuchZone is raised. This also creates + a transaction in the data source. + + The ds_client is the datasource client containing the zone. Zone is + isc.dns.Name object representing the name of the zone (its apex). + + You can also expect isc.datasrc.Error or isc.datasrc.NotImplemented + exceptions. + """ + self.__updater = ds_client.get_updater(zone, False) + if self.__updater is None: + # The no such zone case + raise NoSuchZone("Zone " + str(zone) + + " does not exist in the data source " + + str(ds_client)) + self.__buffer = [] + + def __check_commited(self): + """ + This checks if the diff is already commited or broken. If it is, it + raises ValueError. This check is for methods that need to work only on + yet uncommited diffs. + """ + if self.__updater is None: + raise ValueError("The diff is already commited or it has raised " + + "an exception, you come late") + + def __data_common(self, rr, operation): + """ + Schedules an operation with rr. + + It does all the real work of add_data and remove_data, including + all checks. + """ + self.__check_commited() + if rr.get_rdata_count() != 1: + raise ValueError('The rrset must contain exactly 1 Rdata, but ' + + 'it holds ' + str(rr.get_rdata_count())) + if rr.get_class() != self.__updater.get_class(): + raise ValueError("The rrset's class " + str(rr.get_class()) + + " does not match updater's " + + str(self.__updater.get_class())) + self.__buffer.append((operation, rr)) + if len(self.__buffer) >= DIFF_APPLY_TRESHOLD: + # Time to auto-apply, so the data don't accumulate too much + self.apply() + + def add_data(self, rr): + """ + Schedules addition of an RR into the zone in this diff. + + The rr is of isc.dns.RRset type and it must contain only one RR. + If this is not the case or if the diff was already commited, this + raises the ValueError exception. + + The rr class must match the one of the datasource client. If + it does not, ValueError is raised. 
+ """ + self.__data_common(rr, 'add') + + def remove_data(self, rr): + """ + Schedules removal of an RR from the zone in this diff. + + The rr is of isc.dns.RRset type and it must contain only one RR. + If this is not the case or if the diff was already commited, this + raises the ValueError exception. + + The rr class must match the one of the datasource client. If + it does not, ValueError is raised. + """ + self.__data_common(rr, 'remove') + + def compact(self): + """ + Tries to compact the operations in buffer a little by putting some of + the operations together, forming RRsets with more than one RR. + + This is called by apply before putting the data into datasource. You + may, but not have to, call this manually. + + Currently it merges consecutive same operations on the same + domain/type. We could do more fancy things, like sorting by the domain + and do more merging, but such diffs should be rare in practice anyway, + so we don't bother and do it this simple way. + """ + buf = [] + for (op, rrset) in self.__buffer: + old = buf[-1][1] if len(buf) > 0 else None + if old is None or op != buf[-1][0] or \ + rrset.get_name() != old.get_name() or \ + rrset.get_type() != old.get_type(): + buf.append((op, isc.dns.RRset(rrset.get_name(), + rrset.get_class(), + rrset.get_type(), + rrset.get_ttl()))) + if rrset.get_ttl() != buf[-1][1].get_ttl(): + logger.warn(LIBXFRIN_DIFFERENT_TTL, rrset.get_ttl(), + buf[-1][1].get_ttl()) + for rdatum in rrset.get_rdata(): + buf[-1][1].add_rdata(rdatum) + self.__buffer = buf + + def apply(self): + """ + Push the buffered changes inside this diff down into the data source. + This does not stop you from adding more changes later through this + diff and it does not close the datasource transaction, so the changes + will not be shown to others yet. It just means the internal memory + buffer is flushed. + + This is called from time to time automatically, but you can call it + manually if you really want to. + + This raises ValueError if the diff was already commited. + + It also can raise isc.datasrc.Error. If that happens, you should stop + using this object and abort the modification. + """ + self.__check_commited() + # First, compact the data + self.compact() + try: + # Then pass the data inside the data source + for (operation, rrset) in self.__buffer: + if operation == 'add': + self.__updater.add_rrset(rrset) + elif operation == 'remove': + self.__updater.remove_rrset(rrset) + else: + raise ValueError('Unknown operation ' + operation) + # As everything is already in, drop the buffer + except: + # If there's a problem, we can't continue. + self.__updater = None + raise + + self.__buffer = [] + + def commit(self): + """ + Writes all the changes into the data source and makes them visible. + This closes the diff, you may not use it any more. If you try to use + it, you'll get ValueError. + + This might raise isc.datasrc.Error. + """ + self.__check_commited() + # Push the data inside the data source + self.apply() + # Make sure they are visible. + try: + self.__updater.commit() + finally: + # Remove the updater. That will free some resources for one, but + # mark this object as already commited, so we can check + + # We remove it even in case the commit failed, as that makes us + # unusable. + self.__updater = None + + def get_buffer(self): + """ + Returns the current buffer of changes not yet passed into the data + source. It is in a form like [('add', rrset), ('remove', rrset), + ('remove', rrset), ...]. 
+ + Probably useful only for testing and introspection purposes. Don't + modify the list. + """ + return self.__buffer diff --git a/src/lib/python/isc/xfrin/libxfrin_messages.mes b/src/lib/python/isc/xfrin/libxfrin_messages.mes new file mode 100644 index 0000000000..be943c86d6 --- /dev/null +++ b/src/lib/python/isc/xfrin/libxfrin_messages.mes @@ -0,0 +1,21 @@ +# Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +# +# Permission to use, copy, modify, and/or distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +# AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +# PERFORMANCE OF THIS SOFTWARE. + +# No namespace declaration - these constants go in the global namespace +# of the libxfrin_messages python module. + +% LIBXFRIN_DIFFERENT_TTL multiple data with different TTLs (%1, %2) on %3/%4. Adjusting %2 -> %1. +The xfrin module received an update containing multiple rdata changes for the +same RRset. But the TTLs of these don't match each other. As we combine them +together, the later one get's overwritten to the earlier one in the sequence. diff --git a/src/lib/python/isc/xfrin/tests/Makefile.am b/src/lib/python/isc/xfrin/tests/Makefile.am new file mode 100644 index 0000000000..416d62b45e --- /dev/null +++ b/src/lib/python/isc/xfrin/tests/Makefile.am @@ -0,0 +1,24 @@ +PYCOVERAGE_RUN = @PYCOVERAGE_RUN@ +PYTESTS = diff_tests.py +EXTRA_DIST = $(PYTESTS) + +# If necessary (rare cases), explicitly specify paths to dynamic libraries +# required by loadable python modules. +LIBRARY_PATH_PLACEHOLDER = +if SET_ENV_LIBRARY_PATH +LIBRARY_PATH_PLACEHOLDER += $(ENV_LIBRARY_PATH)=$(abs_top_builddir)/src/lib/cryptolink/.libs:$(abs_top_builddir)/src/lib/dns/.libs:$(abs_top_builddir)/src/lib/dns/python/.libs:$(abs_top_builddir)/src/lib/cc/.libs:$(abs_top_builddir)/src/lib/config/.libs:$(abs_top_builddir)/src/lib/log/.libs:$(abs_top_builddir)/src/lib/util/.libs:$(abs_top_builddir)/src/lib/exceptions/.libs:$(abs_top_builddir)/src/lib/datasrc/.libs:$$$(ENV_LIBRARY_PATH) +endif + +# test using command-line arguments, so use check-local target instead of TESTS +check-local: +if ENABLE_PYTHON_COVERAGE + touch $(abs_top_srcdir)/.coverage + rm -f .coverage + ${LN_S} $(abs_top_srcdir)/.coverage .coverage +endif + for pytest in $(PYTESTS) ; do \ + echo Running test: $$pytest ; \ + $(LIBRARY_PATH_PLACEHOLDER) \ + PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/lib/dns/python/.libs \ + $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ + done diff --git a/src/lib/python/isc/xfrin/tests/diff_tests.py b/src/lib/python/isc/xfrin/tests/diff_tests.py new file mode 100644 index 0000000000..9652a1a772 --- /dev/null +++ b/src/lib/python/isc/xfrin/tests/diff_tests.py @@ -0,0 +1,437 @@ +# Copyright (C) 2011 Internet Systems Consortium. +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. 
+# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +import isc.log +import unittest +from isc.dns import Name, RRset, RRClass, RRType, RRTTL, Rdata +from isc.xfrin.diff import Diff, NoSuchZone + +class TestError(Exception): + """ + Just to have something to be raised during the tests. + Not used outside. + """ + pass + +class DiffTest(unittest.TestCase): + """ + Tests for the isc.xfrin.diff.Diff class. + + It also plays role of a data source and an updater, so it can manipulate + some test variables while being called. + """ + def setUp(self): + """ + This sets internal variables so we can see nothing was called yet. + + It also creates some variables used in multiple tests. + """ + # Track what was called already + self.__updater_requested = False + self.__compact_called = False + self.__data_operations = [] + self.__apply_called = False + self.__commit_called = False + self.__broken_called = False + self.__warn_called = False + # Some common values + self.__rrclass = RRClass.IN() + self.__type = RRType.A() + self.__ttl = RRTTL(3600) + # And RRsets + # Create two valid rrsets + self.__rrset1 = RRset(Name('a.example.org.'), self.__rrclass, + self.__type, self.__ttl) + self.__rdata = Rdata(self.__type, self.__rrclass, '192.0.2.1') + self.__rrset1.add_rdata(self.__rdata) + self.__rrset2 = RRset(Name('b.example.org.'), self.__rrclass, + self.__type, self.__ttl) + self.__rrset2.add_rdata(self.__rdata) + # And two invalid + self.__rrset_empty = RRset(Name('empty.example.org.'), self.__rrclass, + self.__type, self.__ttl) + self.__rrset_multi = RRset(Name('multi.example.org.'), self.__rrclass, + self.__type, self.__ttl) + self.__rrset_multi.add_rdata(self.__rdata) + self.__rrset_multi.add_rdata(Rdata(self.__type, self.__rrclass, + '192.0.2.2')) + + def __mock_compact(self): + """ + This can be put into the diff to hook into its compact method and see + if it gets called. + """ + self.__compact_called = True + + def __mock_apply(self): + """ + This can be put into the diff to hook into its apply method and see + it gets called. + """ + self.__apply_called = True + + def __broken_operation(self, *args): + """ + This can be used whenever an operation should fail. It raises TestError. + It should take whatever amount of parameters needed, so it can be put + quite anywhere. + """ + self.__broken_called = True + raise TestError("Test error") + + def warn(self, *args): + """ + This is for checking the warn function was called, we replace the logger + in the tested module. + """ + self.__warn_called = True + + def commit(self): + """ + This is part of pretending to be a zone updater. This notes the commit + was called. + """ + self.__commit_called = True + + def add_rrset(self, rrset): + """ + This one is part of pretending to be a zone updater. It writes down + addition of an rrset was requested. + """ + self.__data_operations.append(('add', rrset)) + + def remove_rrset(self, rrset): + """ + This one is part of pretending to be a zone updater. It writes down + removal of an rrset was requested. 
+ """ + self.__data_operations.append(('remove', rrset)) + + def get_class(self): + """ + This one is part of pretending to be a zone updater. It returns + the IN class. + """ + return self.__rrclass + + def get_updater(self, zone_name, replace): + """ + This one pretends this is the data source client and serves + getting an updater. + + If zone_name is 'none.example.org.', it returns None, otherwise + it returns self. + """ + # The diff should not delete the old data. + self.assertFalse(replace) + self.__updater_requested = True + # Pretend this zone doesn't exist + if zone_name == Name('none.example.org.'): + return None + else: + return self + + def test_create(self): + """ + This test the case when the diff is successfuly created. It just + tries it does not throw and gets the updater. + """ + diff = Diff(self, Name('example.org.')) + self.assertTrue(self.__updater_requested) + self.assertEqual([], diff.get_buffer()) + + def test_create_nonexist(self): + """ + Try to create a diff on a zone that doesn't exist. This should + raise a correct exception. + """ + self.assertRaises(NoSuchZone, Diff, self, Name('none.example.org.')) + self.assertTrue(self.__updater_requested) + + def __data_common(self, diff, method, operation): + """ + Common part of test for test_add and test_remove. + """ + # Try putting there the bad data first + self.assertRaises(ValueError, method, self.__rrset_empty) + self.assertRaises(ValueError, method, self.__rrset_multi) + # They were not added + self.assertEqual([], diff.get_buffer()) + # Put some proper data into the diff + method(self.__rrset1) + method(self.__rrset2) + dlist = [(operation, self.__rrset1), (operation, self.__rrset2)] + self.assertEqual(dlist, diff.get_buffer()) + # Check the data are not destroyed by raising an exception because of + # bad data + self.assertRaises(ValueError, method, self.__rrset_empty) + self.assertEqual(dlist, diff.get_buffer()) + + def test_add(self): + """ + Try to add few items into the diff and see they are stored in there. + + Also try passing an rrset that has differnt amount of RRs than 1. + """ + diff = Diff(self, Name('example.org.')) + self.__data_common(diff, diff.add_data, 'add') + + def test_remove(self): + """ + Try scheduling removal of few items into the diff and see they are + stored in there. + + Also try passing an rrset that has different amount of RRs than 1. + """ + diff = Diff(self, Name('example.org.')) + self.__data_common(diff, diff.remove_data, 'remove') + + def test_apply(self): + """ + Schedule few additions and check the apply works by passing the + data into the updater. + """ + # Prepare the diff + diff = Diff(self, Name('example.org.')) + diff.add_data(self.__rrset1) + diff.remove_data(self.__rrset2) + dlist = [('add', self.__rrset1), ('remove', self.__rrset2)] + self.assertEqual(dlist, diff.get_buffer()) + # Do the apply, hook the compact method + diff.compact = self.__mock_compact + diff.apply() + # It should call the compact + self.assertTrue(self.__compact_called) + # And pass the data. Our local history of what happened is the same + # format, so we can check the same way + self.assertEqual(dlist, self.__data_operations) + # And the buffer in diff should become empty, as everything + # got inside. + self.assertEqual([], diff.get_buffer()) + + def test_commit(self): + """ + If we call a commit, it should first apply whatever changes are + left (we hook into that instead of checking the effect) and then + the commit on the updater should have been called. 
+ + Then we check it raises value error for whatever operation we try. + """ + diff = Diff(self, Name('example.org.')) + diff.add_data(self.__rrset1) + orig_apply = diff.apply + diff.apply = self.__mock_apply + diff.commit() + self.assertTrue(self.__apply_called) + self.assertTrue(self.__commit_called) + # The data should be handled by apply which we replaced. + self.assertEqual([], self.__data_operations) + # Now check all range of other methods raise ValueError + self.assertRaises(ValueError, diff.commit) + self.assertRaises(ValueError, diff.add_data, self.__rrset2) + self.assertRaises(ValueError, diff.remove_data, self.__rrset1) + diff.apply = orig_apply + self.assertRaises(ValueError, diff.apply) + # This one does not state it should raise, so check it doesn't + # But it is NOP in this situation anyway + diff.compact() + + def test_autoapply(self): + """ + Test the apply is called all by itself after 100 tasks are added. + """ + diff = Diff(self, Name('example.org.')) + # A method to check the apply is called _after_ the 100th element + # is added. We don't use it anywhere else, so we define it locally + # as lambda function + def check(): + self.assertEqual(100, len(diff.get_buffer())) + self.__mock_apply() + orig_apply = diff.apply + diff.apply = check + # If we put 99, nothing happens yet + for i in range(0, 99): + diff.add_data(self.__rrset1) + expected = [('add', self.__rrset1)] * 99 + self.assertEqual(expected, diff.get_buffer()) + self.assertFalse(self.__apply_called) + # Now we push the 100th and it should call the apply method + # This will _not_ flush the data yet, as we replaced the method. + # It, however, would in the real life. + diff.add_data(self.__rrset1) + # Now the apply method (which is replaced by our check) should + # have been called. If it wasn't, this is false. If it was, but + # still with 99 elements, the check would complain + self.assertTrue(self.__apply_called) + # Reset the buffer by calling the original apply. + orig_apply() + self.assertEqual([], diff.get_buffer()) + # Similar with remove + self.__apply_called = False + for i in range(0, 99): + diff.remove_data(self.__rrset2) + expected = [('remove', self.__rrset2)] * 99 + self.assertEqual(expected, diff.get_buffer()) + self.assertFalse(self.__apply_called) + diff.remove_data(self.__rrset2) + self.assertTrue(self.__apply_called) + + def test_compact(self): + """ + Test the compaction works as expected, eg. it compacts only consecutive + changes of the same operation and on the same domain/type. + + The test case checks that it does merge them, but also puts some + different operations "in the middle", changes the type and name and + places the same kind of change further away of each other to see they + are not merged in that case. + """ + diff = Diff(self, Name('example.org.')) + # Check we can do a compact on empty data, it shouldn't break + diff.compact() + self.assertEqual([], diff.get_buffer()) + # This data is the way it should look like after the compact + # ('operation', 'domain.prefix', 'type', ['rdata', 'rdata']) + # The notes say why the each of consecutive can't be merged + data = [ + ('add', 'a', 'A', ['192.0.2.1', '192.0.2.2']), + # Different type. + ('add', 'a', 'AAAA', ['2001:db8::1', '2001:db8::2']), + # Different operation + ('remove', 'a', 'AAAA', ['2001:db8::3']), + # Different domain + ('remove', 'b', 'AAAA', ['2001:db8::4']), + # This does not get merged with the first, even if logically + # possible. We just don't do this. 
+ ('add', 'a', 'A', ['192.0.2.3']) + ] + # Now, fill the data into the diff, in a "flat" way, one by one + for (op, nprefix, rrtype, rdata) in data: + name = Name(nprefix + '.example.org.') + rrtype_obj = RRType(rrtype) + for rdatum in rdata: + rrset = RRset(name, self.__rrclass, rrtype_obj, self.__ttl) + rrset.add_rdata(Rdata(rrtype_obj, self.__rrclass, rdatum)) + if op == 'add': + diff.add_data(rrset) + else: + diff.remove_data(rrset) + # Compact it + diff.compact() + # Now check they got compacted. They should be in the same order as + # pushed inside. So it should be the same as data modulo being in + # the rrsets and isc.dns objects. + def check(): + buf = diff.get_buffer() + self.assertEqual(len(data), len(buf)) + for (expected, received) in zip(data, buf): + (eop, ename, etype, edata) = expected + (rop, rrrset) = received + self.assertEqual(eop, rop) + ename_obj = Name(ename + '.example.org.') + self.assertEqual(ename_obj, rrrset.get_name()) + # We check on names to make sure they are printed nicely + self.assertEqual(etype, str(rrrset.get_type())) + rdata = rrrset.get_rdata() + self.assertEqual(len(edata), len(rdata)) + # It should also preserve the order + for (edatum, rdatum) in zip(edata, rdata): + self.assertEqual(edatum, str(rdatum)) + check() + # Try another compact does nothing, but survives + diff.compact() + check() + + def test_wrong_class(self): + """ + Test a wrong class of rrset is rejected. + """ + diff = Diff(self, Name('example.org.')) + rrset = RRset(Name('a.example.org.'), RRClass.CH(), RRType.NS(), + self.__ttl) + rrset.add_rdata(Rdata(RRType.NS(), RRClass.CH(), 'ns.example.org.')) + self.assertRaises(ValueError, diff.add_data, rrset) + self.assertRaises(ValueError, diff.remove_data, rrset) + + def __do_raise_test(self): + """ + Do a raise test. Expects that one of the operations is exchanged for + broken version. + """ + diff = Diff(self, Name('example.org.')) + diff.add_data(self.__rrset1) + diff.remove_data(self.__rrset2) + self.assertRaises(TestError, diff.commit) + self.assertTrue(self.__broken_called) + self.assertRaises(ValueError, diff.add_data, self.__rrset1) + self.assertRaises(ValueError, diff.remove_data, self.__rrset2) + self.assertRaises(ValueError, diff.commit) + self.assertRaises(ValueError, diff.apply) + + def test_raise_add(self): + """ + Test the exception from add_rrset is propagated and the diff can't be + used afterwards. + """ + self.add_rrset = self.__broken_operation + self.__do_raise_test() + + def test_raise_remove(self): + """ + Test the exception from remove_rrset is propagated and the diff can't be + used afterwards. + """ + self.remove_rrset = self.__broken_operation + self.__do_raise_test() + + def test_raise_commit(self): + """ + Test the exception from updater's commit gets propagated and it can't be + used afterwards. + """ + self.commit = self.__broken_operation + self.__do_raise_test() + + def test_ttl(self): + """ + Test the TTL handling. A warn function should have been called if they + differ, but that's all, it should not crash or raise. 
+ """ + orig_logger = isc.xfrin.diff.logger + try: + isc.xfrin.diff.logger = self + diff = Diff(self, Name('example.org.')) + diff.add_data(self.__rrset1) + rrset2 = RRset(Name('a.example.org.'), self.__rrclass, + self.__type, RRTTL(120)) + rrset2.add_rdata(Rdata(self.__type, self.__rrclass, '192.10.2.2')) + diff.add_data(rrset2) + rrset2 = RRset(Name('a.example.org.'), self.__rrclass, + self.__type, RRTTL(6000)) + rrset2.add_rdata(Rdata(self.__type, self.__rrclass, '192.10.2.3')) + diff.add_data(rrset2) + # They should get compacted together and complain. + diff.compact() + self.assertEqual(1, len(diff.get_buffer())) + # The TTL stays on the first value, no matter if smaller or bigger + # ones come later. + self.assertEqual(self.__ttl, diff.get_buffer()[0][1].get_ttl()) + self.assertTrue(self.__warn_called) + finally: + isc.xfrin.diff.logger = orig_logger + +if __name__ == "__main__": + isc.log.init("bind10") + unittest.main() diff --git a/src/lib/resolve/resolve_messages.mes b/src/lib/resolve/resolve_messages.mes index 97c4d908cb..f702d9b7e8 100644 --- a/src/lib/resolve/resolve_messages.mes +++ b/src/lib/resolve/resolve_messages.mes @@ -123,11 +123,11 @@ called because a nameserver has been found, and that a query is being sent to the specified nameserver. % RESLIB_TEST_SERVER setting test server to %1(%2) -This is an internal debugging message and is only generated in unit tests. -It indicates that all upstream queries from the resolver are being routed to -the specified server, regardless of the address of the nameserver to which -the query would normally be routed. As it should never be seen in normal -operation, it is a warning message instead of a debug message. +This is a warning message only generated in unit tests. It indicates +that all upstream queries from the resolver are being routed to the +specified server, regardless of the address of the nameserver to which +the query would normally be routed. If seen during normal operation, +please submit a bug report. % RESLIB_TEST_UPSTREAM sending upstream query for <%1> to test server at %2 This is a debug message and should only be seen in unit tests. A query for @@ -135,8 +135,8 @@ the specified tuple is being sent to a test nameserver whose address is given in the message. % RESLIB_TIMEOUT query <%1> to %2 timed out -A debug message indicating that the specified query has timed out and as -there are no retries left, an error will be reported. +A debug message indicating that the specified upstream query has timed out and +there are no retries left. 
% RESLIB_TIMEOUT_RETRY query <%1> to %2 timed out, re-trying (retries left: %3) A debug message indicating that the specified query has timed out and that diff --git a/src/lib/resolve/tests/Makefile.am b/src/lib/resolve/tests/Makefile.am index ee311a6bac..cf05d9b08e 100644 --- a/src/lib/resolve/tests/Makefile.am +++ b/src/lib/resolve/tests/Makefile.am @@ -31,6 +31,7 @@ run_unittests_LDADD += $(top_builddir)/src/lib/asiolink/libasiolink.la run_unittests_LDADD += $(top_builddir)/src/lib/asiodns/libasiodns.la run_unittests_LDADD += $(top_builddir)/src/lib/resolve/libresolve.la run_unittests_LDADD += $(top_builddir)/src/lib/dns/libdns++.la +run_unittests_LDADD += $(top_builddir)/src/lib/util/libutil.la run_unittests_LDADD += $(top_builddir)/src/lib/log/liblog.la run_unittests_LDADD += $(top_builddir)/src/lib/util/unittests/libutil_unittests.la run_unittests_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la diff --git a/src/lib/server_common/Makefile.am b/src/lib/server_common/Makefile.am index d5761043f8..c2779b4466 100644 --- a/src/lib/server_common/Makefile.am +++ b/src/lib/server_common/Makefile.am @@ -20,6 +20,9 @@ lib_LTLIBRARIES = libserver_common.la libserver_common_la_SOURCES = client.h client.cc libserver_common_la_SOURCES += keyring.h keyring.cc libserver_common_la_SOURCES += portconfig.h portconfig.cc +libserver_common_la_SOURCES += logger.h logger.cc +nodist_libserver_common_la_SOURCES = server_common_messages.h +nodist_libserver_common_la_SOURCES += server_common_messages.cc libserver_common_la_LIBADD = $(top_builddir)/src/lib/exceptions/libexceptions.la libserver_common_la_LIBADD += $(top_builddir)/src/lib/asiolink/libasiolink.la libserver_common_la_LIBADD += $(top_builddir)/src/lib/cc/libcc.la @@ -27,5 +30,10 @@ libserver_common_la_LIBADD += $(top_builddir)/src/lib/config/libcfgclient.la libserver_common_la_LIBADD += $(top_builddir)/src/lib/log/liblog.la libserver_common_la_LIBADD += $(top_builddir)/src/lib/acl/libacl.la libserver_common_la_LIBADD += $(top_builddir)/src/lib/dns/libdns++.la +BUILT_SOURCES = server_common_messages.h server_common_messages.cc +server_common_messages.h server_common_messages.cc: server_common_messages.mes + $(top_builddir)/src/lib/log/compiler/message $(top_srcdir)/src/lib/server_common/server_common_messages.mes -CLEANFILES = *.gcno *.gcda +EXTRA_DIST = server_common_messages.mes + +CLEANFILES = *.gcno *.gcda server_common_messages.h server_common_messages.cc diff --git a/src/lib/server_common/client.cc b/src/lib/server_common/client.cc index 31dee88481..e6383d6352 100644 --- a/src/lib/server_common/client.cc +++ b/src/lib/server_common/client.cc @@ -66,10 +66,3 @@ std::ostream& isc::server_common::operator<<(std::ostream& os, const Client& client) { return (os << client.toText()); } - -template <> -bool -IPCheck::matches(const Client& client) const { - const IPAddress& request_src(client.getRequestSourceIPAddress()); - return (compare(request_src.getData(), request_src.getFamily())); -} diff --git a/src/lib/server_common/client.h b/src/lib/server_common/client.h index 148e0696e6..1c5928aff6 100644 --- a/src/lib/server_common/client.h +++ b/src/lib/server_common/client.h @@ -145,17 +145,6 @@ private: /// parameter \c os after the insertion operation. std::ostream& operator<<(std::ostream& os, const Client& client); } - -namespace acl { -/// The specialization of \c IPCheck for access control with \c Client. 
-/// -/// It returns \c true if the source IP address of the client's request -/// matches the expression encapsulated in the \c IPCheck, and returns -/// \c false if not. -template <> -bool IPCheck::matches( - const server_common::Client& client) const; -} } #endif // __CLIENT_H diff --git a/src/lib/server_common/keyring.cc b/src/lib/server_common/keyring.cc index b60e796f1c..501dfd9a08 100644 --- a/src/lib/server_common/keyring.cc +++ b/src/lib/server_common/keyring.cc @@ -13,6 +13,7 @@ // PERFORMANCE OF THIS SOFTWARE. #include +#include using namespace isc::dns; using namespace isc::data; @@ -31,6 +32,7 @@ updateKeyring(const std::string&, ConstElementPtr data, const isc::config::ConfigData&) { ConstElementPtr list(data->get("keys")); KeyringPtr load(new TSIGKeyRing); + LOG_DEBUG(logger, DBG_TRACE_BASIC, SRVCOMM_KEYS_UPDATE); // Note that 'data' only contains explicitly configured config parameters. // So if we use the default list is NULL, rather than an empty list, and @@ -50,6 +52,7 @@ initKeyring(config::ModuleCCSession& session) { // We are already initialized return; } + LOG_DEBUG(logger, DBG_TRACE_BASIC, SRVCOMM_KEYS_INIT); session.addRemoteConfig("tsig_keys", updateKeyring, false); } @@ -59,6 +62,7 @@ deinitKeyring(config::ModuleCCSession& session) { // Not initialized, ignore it return; } + LOG_DEBUG(logger, DBG_TRACE_BASIC, SRVCOMM_KEYS_DEINIT); keyring.reset(); session.removeRemoteConfig("tsig_keys"); } diff --git a/src/lib/server_common/logger.cc b/src/lib/server_common/logger.cc new file mode 100644 index 0000000000..0b9ab6e8ad --- /dev/null +++ b/src/lib/server_common/logger.cc @@ -0,0 +1,23 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +namespace isc { +namespace server_common { + +isc::log::Logger logger("server_common"); + +} +} diff --git a/src/lib/server_common/logger.h b/src/lib/server_common/logger.h new file mode 100644 index 0000000000..cfca1f3c45 --- /dev/null +++ b/src/lib/server_common/logger.h @@ -0,0 +1,44 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
+ +#ifndef __SERVER_COMMON_LOGGER_H +#define __SERVER_COMMON_LOGGER_H + +#include +#include + +/// \file logger.h +/// \brief Server Common library global logger +/// +/// This holds the logger for the server common library. It is a private header +/// and should not be included in any publicly used header, only in local +/// cc files. + +namespace isc { +namespace server_common { + +/// \brief The logger for this library +extern isc::log::Logger logger; + +enum { + /// \brief Trace basic operations + DBG_TRACE_BASIC = 10, + /// \brief Print also values used + DBG_TRACE_VALUES = 40 +}; + +} +} + +#endif diff --git a/src/lib/server_common/portconfig.cc b/src/lib/server_common/portconfig.cc index 7b2b3ddc27..379a0a17c4 100644 --- a/src/lib/server_common/portconfig.cc +++ b/src/lib/server_common/portconfig.cc @@ -13,10 +13,10 @@ // PERFORMANCE OF THIS SOFTWARE. #include +#include #include #include -#include #include #include @@ -25,7 +25,6 @@ using namespace std; using namespace isc::data; using namespace isc::asiolink; using namespace isc::asiodns; -using isc::log::dlog; namespace isc { namespace server_common { @@ -43,6 +42,8 @@ parseAddresses(isc::data::ConstElementPtr addresses, ConstElementPtr addr(addrPair->get("address")); ConstElementPtr port(addrPair->get("port")); if (!addr || ! port) { + LOG_ERROR(logger, SRVCOMM_ADDRESS_MISSING). + arg(addrPair->str()); isc_throw(BadValue, "Address must contain both the IP" "address and port"); } @@ -50,6 +51,8 @@ parseAddresses(isc::data::ConstElementPtr addresses, IOAddress(addr->stringValue()); if (port->intValue() < 0 || port->intValue() > 0xffff) { + LOG_ERROR(logger, SRVCOMM_PORT_RANGE). + arg(port->intValue()).arg(addrPair->str()); isc_throw(BadValue, "Bad port value (" << port->intValue() << ")"); } @@ -57,11 +60,14 @@ parseAddresses(isc::data::ConstElementPtr addresses, port->intValue())); } catch (const TypeError &e) { // Better error message + LOG_ERROR(logger, SRVCOMM_ADDRESS_TYPE). + arg(addrPair->str()); isc_throw(TypeError, "Address must be a string and port an integer"); } } } else if (addresses->getType() != Element::null) { + LOG_ERROR(logger, SRVCOMM_ADDRESSES_NOT_LIST).arg(elemName); isc_throw(TypeError, elemName + " config element must be a list"); } } @@ -86,10 +92,10 @@ installListenAddresses(const AddressList& newAddresses, isc::asiodns::DNSService& service) { try { - dlog("Setting listen addresses:"); + LOG_DEBUG(logger, DBG_TRACE_BASIC, SRVCOMM_SET_LISTEN); BOOST_FOREACH(const AddressPair& addr, newAddresses) { - dlog(" " + addr.first + ":" + - boost::lexical_cast(addr.second)); + LOG_DEBUG(logger, DBG_TRACE_VALUES, SRVCOMM_ADDRESS_VALUE). + arg(addr.first).arg(addr.second); } setAddresses(service, newAddresses); addressStore = newAddresses; @@ -108,13 +114,12 @@ installListenAddresses(const AddressList& newAddresses, * user will get error info, command control can be used to set new * address. 
So we just catch the exception without propagating outside */
-        dlog(string("Unable to set new address: ") + e.what(), true);
+        LOG_ERROR(logger, SRVCOMM_ADDRESS_FAIL).arg(e.what());
         try {
             setAddresses(service, addressStore);
         } catch (const exception& e2) {
-            dlog("Unable to recover from error;", true);
-            dlog(string("Rollback failed with: ") + e2.what(), true);
+            LOG_FATAL(logger, SRVCOMM_ADDRESS_UNRECOVERABLE).arg(e2.what());
         }
         //Anyway the new configure has problem, we need to notify configure
         //manager the new configure doesn't work
diff --git a/src/lib/server_common/server_common_messages.mes b/src/lib/server_common/server_common_messages.mes
new file mode 100644
index 0000000000..5fbbb0ba58
--- /dev/null
+++ b/src/lib/server_common/server_common_messages.mes
@@ -0,0 +1,73 @@
+# Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC")
+#
+# Permission to use, copy, modify, and/or distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH
+# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
+# AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
+# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
+# PERFORMANCE OF THIS SOFTWARE.
+
+$NAMESPACE isc::server_common
+
+# \brief Messages for the server_common library
+
+% SRVCOMM_ADDRESSES_NOT_LIST the address and port specification is not a list in %1
+This points to an error in configuration. What was supposed to be a list of
+IP address - port pairs isn't a list at all but something else.
+
+% SRVCOMM_ADDRESS_FAIL failed to listen on addresses (%1)
+The server failed to bind to one of the address/port pairs it should listen
+on according to the configuration, for the reason listed in the message
+(usually because that pair is already used by another service or because of
+missing privileges). The server will try to recover and bind the address/port
+pairs it was listening on before (if any).
+
+% SRVCOMM_ADDRESS_MISSING address specification is missing "address" or "port" element in %1
+This points to an error in configuration. An address specification in the
+configuration is missing either an address or port and so cannot be used. The
+specification causing the error is given in the message.
+
+% SRVCOMM_ADDRESS_TYPE address specification type is invalid in %1
+This points to an error in configuration. An address specification in the
+configuration is malformed. The specification causing the error is given in
+the message. A valid specification contains an address part (which must be a
+string and must represent a valid IPv4 or IPv6 address) and a port (which
+must be an integer in the range valid for TCP/UDP ports on your system).
+
+% SRVCOMM_ADDRESS_UNRECOVERABLE failed to recover original addresses also (%2)
+The recovery of old addresses after SRVCOMM_ADDRESS_FAIL also failed for
+the reason listed.
+
+The condition indicates problems with the server and/or the system on
+which it is running. The server will continue running to allow
+reconfiguration, but will not be listening on any address or port until
+an administrator reconfigures it.
+
+% SRVCOMM_ADDRESS_VALUE address to set: %1#%2
+Debug message.
This lists one address and port value of the set of +addresses we are going to listen on (eg. there will be one log message +per pair). This appears only after SRVCOMM_SET_LISTEN, but might +be hidden, as it has higher debug level. + +% SRVCOMM_KEYS_DEINIT deinitializing TSIG keyring +Debug message indicating that the server is deinitializing the TSIG keyring. + +% SRVCOMM_KEYS_INIT initializing TSIG keyring +Debug message indicating that the server is initializing the global TSIG +keyring. This should be seen only at server start. + +% SRVCOMM_KEYS_UPDATE updating TSIG keyring +Debug message indicating new keyring is being loaded from configuration (either +on startup or as a result of configuration update). + +% SRVCOMM_PORT_RANGE port out of valid range (%1 in %2) +This points to an error in configuration. The port in an address +specification is outside the valid range of 0 to 65535. + +% SRVCOMM_SET_LISTEN setting addresses to listen to +Debug message, noting that the server is about to start listening on a +different set of IP addresses and ports than before. diff --git a/src/lib/server_common/tests/Makefile.am b/src/lib/server_common/tests/Makefile.am index 3c061c27c5..d7e113af1c 100644 --- a/src/lib/server_common/tests/Makefile.am +++ b/src/lib/server_common/tests/Makefile.am @@ -38,8 +38,10 @@ run_unittests_LDADD += $(top_builddir)/src/lib/server_common/libserver_common.la run_unittests_LDADD += $(top_builddir)/src/lib/asiolink/libasiolink.la run_unittests_LDADD += $(top_builddir)/src/lib/asiodns/libasiodns.la run_unittests_LDADD += $(top_builddir)/src/lib/cc/libcc.la +run_unittests_LDADD += $(top_builddir)/src/lib/log/liblog.la run_unittests_LDADD += $(top_builddir)/src/lib/acl/libacl.la run_unittests_LDADD += $(top_builddir)/src/lib/util/libutil.la +run_unittests_LDADD += $(top_builddir)/src/lib/log/liblog.la run_unittests_LDADD += $(top_builddir)/src/lib/dns/libdns++.la run_unittests_LDADD += $(top_builddir)/src/lib/util/unittests/libutil_unittests.la run_unittests_LDADD += $(top_builddir)/src/lib/exceptions/libexceptions.la diff --git a/src/lib/server_common/tests/client_unittest.cc b/src/lib/server_common/tests/client_unittest.cc index 34a90a2866..287a92660e 100644 --- a/src/lib/server_common/tests/client_unittest.cc +++ b/src/lib/server_common/tests/client_unittest.cc @@ -89,30 +89,6 @@ TEST_F(ClientTest, constructIPv6) { client6->getRequestSourceIPAddress().getData(), 16)); } -TEST_F(ClientTest, ACLCheckIPv4) { - // Exact match - EXPECT_TRUE(IPCheck("192.0.2.1").matches(*client4)); - // Exact match (negative) - EXPECT_FALSE(IPCheck("192.0.2.53").matches(*client4)); - // Prefix match - EXPECT_TRUE(IPCheck("192.0.2.0/24").matches(*client4)); - // Prefix match (negative) - EXPECT_FALSE(IPCheck("192.0.1.0/24").matches(*client4)); - // Address family mismatch (the first 4 bytes of the IPv6 address has the - // same binary representation as the client's IPv4 address, which - // shouldn't confuse the match logic) - EXPECT_FALSE(IPCheck("c000:0201::").matches(*client4)); -} - -TEST_F(ClientTest, ACLCheckIPv6) { - // The following are a set of tests of the same concept as ACLCheckIPv4 - EXPECT_TRUE(IPCheck("2001:db8::1").matches(*client6)); - EXPECT_FALSE(IPCheck("2001:db8::53").matches(*client6)); - EXPECT_TRUE(IPCheck("2001:db8::/64").matches(*client6)); - EXPECT_FALSE(IPCheck("2001:db8:1::/64").matches(*client6)); - EXPECT_FALSE(IPCheck("32.1.13.184").matches(*client6)); -} - TEST_F(ClientTest, toText) { EXPECT_EQ("192.0.2.1#53214", client4->toText()); 
EXPECT_EQ("2001:db8::1#53216", client6->toText()); diff --git a/src/lib/server_common/tests/keyring_test.cc b/src/lib/server_common/tests/keyring_test.cc index d79b541f97..dab43df7bc 100644 --- a/src/lib/server_common/tests/keyring_test.cc +++ b/src/lib/server_common/tests/keyring_test.cc @@ -38,7 +38,8 @@ public: specfile(std::string(TEST_DATA_PATH) + "/spec.spec") { session.getMessages()->add(createAnswer()); - mccs.reset(new ModuleCCSession(specfile, session, NULL, NULL, false)); + mccs.reset(new ModuleCCSession(specfile, session, NULL, NULL, + false, false)); } isc::cc::FakeSession session; std::auto_ptr mccs; diff --git a/src/lib/server_common/tests/run_unittests.cc b/src/lib/server_common/tests/run_unittests.cc index b982ef3b80..860cb77ba9 100644 --- a/src/lib/server_common/tests/run_unittests.cc +++ b/src/lib/server_common/tests/run_unittests.cc @@ -16,6 +16,7 @@ #include #include +#include #include @@ -23,5 +24,7 @@ int main(int argc, char* argv[]) { ::testing::InitGoogleTest(&argc, argv); + isc::log::initLogger(); + return (isc::util::unittests::run_all()); } diff --git a/src/lib/testutils/testdata/Makefile.am b/src/lib/testutils/testdata/Makefile.am index 93b9eb903c..918d5c55d3 100644 --- a/src/lib/testutils/testdata/Makefile.am +++ b/src/lib/testutils/testdata/Makefile.am @@ -32,4 +32,4 @@ EXTRA_DIST += test2.zone.in EXTRA_DIST += test2-new.zone.in .spec.wire: - $(abs_top_builddir)/src/lib/dns/tests/testdata/gen-wiredata.py -o $@ $< + $(PYTHON) $(top_builddir)/src/lib/util/python/gen_wiredata.py -o $@ $< diff --git a/src/lib/util/Makefile.am b/src/lib/util/Makefile.am index 3db9ac4cfa..0b78b295af 100644 --- a/src/lib/util/Makefile.am +++ b/src/lib/util/Makefile.am @@ -1,4 +1,4 @@ -SUBDIRS = . io unittests tests pyunittests +SUBDIRS = . io unittests tests pyunittests python AM_CPPFLAGS = -I$(top_srcdir)/src/lib -I$(top_builddir)/src/lib AM_CPPFLAGS += -I$(top_srcdir)/src/lib/util -I$(top_builddir)/src/lib/util diff --git a/src/lib/util/filename.cc b/src/lib/util/filename.cc index 1f2e5db43c..d7da9c81d4 100644 --- a/src/lib/util/filename.cc +++ b/src/lib/util/filename.cc @@ -132,6 +132,24 @@ Filename::useAsDefault(const string& name) const { return (retstring); } +void +Filename::setDirectory(const std::string& new_directory) { + std::string directory(new_directory); + + if (directory.length() > 0) { + // append '/' if necessary + size_t sep = directory.rfind('/'); + if (sep == std::string::npos || sep < directory.size() - 1) { + directory += "/"; + } + } + // and regenerate the full name + std::string full_name = directory + name_ + extension_; + + directory_.swap(directory); + full_name_.swap(full_name); +} + } // namespace log } // namespace isc diff --git a/src/lib/util/filename.h b/src/lib/util/filename.h index 984ecb08d5..f6259386ef 100644 --- a/src/lib/util/filename.h +++ b/src/lib/util/filename.h @@ -86,6 +86,13 @@ public: return (directory_); } + /// \brief Set directory for the file + /// + /// \param new_directory The directory to set. If this is an empty + /// string, the directory this filename object currently + /// has will be removed. 
+ void setDirectory(const std::string& new_directory); + /// \return Name of Given File Name std::string name() const { return (name_); @@ -96,6 +103,11 @@ public: return (extension_); } + /// \return Name + extension of Given File Name + std::string nameAndExtension() const { + return (name_ + extension_); + } + /// \brief Expand Name with Default /// /// A default file specified is supplied and used to fill in any missing diff --git a/src/lib/util/python/Makefile.am b/src/lib/util/python/Makefile.am new file mode 100644 index 0000000000..81d528c5c2 --- /dev/null +++ b/src/lib/util/python/Makefile.am @@ -0,0 +1 @@ +noinst_SCRIPTS = gen_wiredata.py mkpywrapper.py diff --git a/src/lib/util/python/gen_wiredata.py.in b/src/lib/util/python/gen_wiredata.py.in new file mode 100755 index 0000000000..8bd2b3c2a6 --- /dev/null +++ b/src/lib/util/python/gen_wiredata.py.in @@ -0,0 +1,1232 @@ +#!@PYTHON@ + +# Copyright (C) 2010 Internet Systems Consortium. +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +""" +Generator of various types of DNS data in the hex format. + +This script reads a human readable specification file (called "spec +file" hereafter) that defines some type of DNS data (an RDATA, an RR, +or a complete message) and dumps the defined data to a separate file +as a "wire format" sequence parsable by the +UnitTestUtil::readWireData() function (currently defined as part of +libdns++ tests). Many DNS related tests involve wire format test +data, so it will be convenient if we can define the data in a more +intuitive way than writing the entire hex sequence by hand. + +Here is a simple example. Consider the following spec file: + + [custom] + sections: a + [a] + as_rr: True + +When the script reads this file, it detects the file specifies a single +component (called "section" here) that consists of a single A RDATA, +which must be dumped as an RR (not only the part of RDATA). It then +dumps the following content: + + # A RR (QNAME=example.com Class=IN(1) TTL=86400 RDLEN=4) + 076578616d706c6503636f6d00 0001 0001 00015180 0004 + # Address=192.0.2.1 + c0000201 + +As can be seen, the script automatically completes all variable +parameters of RRs: owner name, class, TTL, RDATA length and data. For +testing purposes many of these will be the same common one (like +"example.com" or 192.0.2.1), so it would be convenient if we only have +to specify non default parameters. To change the RDATA (i.e., the +IPv4 address), we should add the following line at the end of the spec +file: + + address: 192.0.2.2 + +Then the last two lines of the output file will be as follows: + + # Address=192.0.2.2 + c0000202 + +In some cases we would rather specify malformed data for tests. This +script has the ability to specify broken parameters for many types of +data. 
For example, we can generate data that would look like an A RR
+but the RDLEN is 3 by adding the following line to the spec file:
+
+    rdlen: 3
+
+Then the first two lines of the output file will be as follows:
+
+    # A RR (QNAME=example.com Class=IN(1) TTL=86400 RDLEN=3)
+    076578616d706c6503636f6d00 0001 0001 00015180 0003
+
+** USAGE **
+
+    gen_wiredata.py [-o output_file] spec_file
+
+If the -o option is missing, and if the spec_file has a suffix (such as
+in the form of "data.spec"), the output file name will be the prefix
+part of it (as in "data"); if -o is missing and the spec_file does not
+have a suffix, the script will fail.
+
+** SPEC FILE SYNTAX **
+
+A spec file accepted by this script should be in the form of a
+configuration file that is parsable by Python's standard
+configparser module. In short, it consists of sections; each section
+is identified in the form of [section_name] followed by "name: value"
+entries. Lines beginning with # or ; will be treated as comments.
+Refer to the configparser module documentation for further details of
+the general syntax.
+
+This script has two major modes: the custom mode and the DNS query
+mode. The former generates an arbitrary combination of DNS message
+header, question section, RDATAs or RRs. It is mainly intended to
+generate test data for a single type of RDATA or RR, or for
+complicated complete DNS messages. The DNS query mode is actually a
+special case of the custom mode, which is a shortcut to generate a
+simple DNS query message (with or without EDNS).
+
+* Custom mode syntax *
+
+By default this script assumes the DNS query mode. To specify the
+custom mode, there must be a special "custom" section in the spec
+file, which should contain a 'sections' entry. The value of this
+entry is a colon-separated list of string fields, each of which is either
+"header", "question", "edns", "name", or a string specifying an RR
+type. For RR types the string is the lower-cased mnemonic that
+identifies the type: 'a' for type A, 'ns' for type NS, and so on
+(note: in the current implementation it's case sensitive, and must be
+lower cased).
+
+Each of these fields is interpreted as a section name of the spec
+(configuration), and in that section parameters specific to the
+semantics of the field can be configured.
+
+A "header" section specifies the content of a DNS message header.
+See the documentation of the DNSHeader class of this module for
+configurable parameters.
+
+A "question" section specifies the content of a single question that
+is normally to be placed in the Question section of a DNS message.
+See the documentation of the DNSQuestion class of this module for
+configurable parameters.
+
+An "edns" section specifies the content of an EDNS OPT RR. See the
+documentation of the EDNS class of this module for configurable
+parameters.
+
+A "name" section specifies a domain name with or without compression.
+This is specifically intended to be used for testing name related
+functionality and would rarely be used with other sections. See the
+documentation of the Name class of this module for configurable
+parameters.
+
+In a specific section for an RR or RDATA, possible entries depend on
+the type. But there are some common configurable entries. See the
+description of the RR class. The most important one would be "as_rr".
+It controls whether the entry should be treated as an RR (with name,
+type, class and TTL) or only as an RDATA. By default as_rr is
+"False", so if an entry is to be interpreted as an RR, an as_rr entry
+must be explicitly specified with a value of "True".
+
+Another common entry is "rdlen". It specifies the RDLEN field value
+of the RR (note: this is included when the entry is interpreted as
+RDATA, too). By default this value is automatically determined by the
+RR type and (if the type has a variable length) from the other fields
+of the RDATA, but as shown in the above example, it can be explicitly
+set, possibly to a bogus value for testing against invalid data.
+
+For type specific entries (and their defaults when provided), see the
+documentation of the corresponding Python class defined in this
+module. In general, there should be a class named after the mnemonic
+of the corresponding RR type for each supported type, and each is a
+subclass of the RR class. For example, the "NS" class is defined for
+RR type NS.
+
+Look again at the A RR example shown at the beginning of this
+description. There's a "custom" section, which consists of a
+"sections" entry whose value is a single "a", which means the data to
+be generated is an A RR or RDATA. There's a corresponding "a"
+section, which only specifies that it should be interpreted as an RR
+(all field values of the RR are derived from the defaults).
+
+If you want to generate a data sequence for two or more RRs or
+RDATAs, you can specify them in the form of colon-separated fields for
+the "sections" entry. For example, to generate a sequence of A and NS
+RRs in that order, the "custom" section would be something like this:
+
+    [custom]
+    sections: a:ns
+
+and there must be an "ns" section in addition to "a".
+
+If a sequence of two or more RRs/RDATAs of the same RR type should be
+generated, these should be uniquely indexed with the "/" separator.
+For example, to generate two A RRs, the "custom" section would be as
+follows:
+
+    [custom]
+    sections: a/1:a/2
+
+and there must be "a/1" and "a/2" sections.
+
+Another practical example that would be used for many tests is to
+generate data for a complete DNS response message. The spec file of
+such an example configuration would look as follows:
+
+    [custom]
+    sections: header:question:a
+    [header]
+    qr: 1
+    ancount: 1
+    [question]
+    [a]
+    as_rr: True
+
+With this configuration, this script will generate test data for a DNS
+response to a query for example.com/IN/A containing one corresponding
+A RR in the answer section.
+
+* DNS query mode syntax *
+
+If the spec file does not contain a "custom" section (that has a
+"sections" entry), this script assumes the DNS query mode. This mode
+is actually a special case of the custom mode; it implicitly assumes the
+"sections" entry whose value is "header:question:edns".
+
+In this mode it is expected that the spec file also contains at least
+"header" and "question" sections, and optionally an "edns" section.
+But the script does not warn or fail even if the expected sections are
+missing.
+
+* Entry value types *
+
+As described above, a section of the spec file accepts entries
+specific to the semantics of the section. They generally correspond
+to DNS message or RR fields.
+
+Many of them are expected to be integral values, for which either decimal or
+hexadecimal representation is accepted, for example:
+
+    rr_ttl: 3600
+    tag: 0x1234
+
+Some others are expected to be strings. A string value does not have
+to be quoted:
+
+    address: 192.0.2.2
+
+but can also be quoted with single quotes:
+
+    address: '192.0.2.2'
+
+Note 1: a string that can be interpreted as an integer must be quoted.
+For example, if you want to set a "string" entry to "3600", it should
+be:
+
+    string: '3600'
+
+instead of
+
+    string: 3600
+
+Note 2: a string enclosed with double quotes is not accepted:
+
+    # This doesn't work:
+    address: "192.0.2.2"
+
+In general, string values are converted to hexadecimal sequences
+according to the semantics of the entry. For instance, a textual IPv4
+address in the above example will be converted to a hexadecimal
+sequence corresponding to a 4-byte integer. So, in many cases, the
+acceptable syntax for a particular string entry value should be
+obvious from the context. There are still some exceptional cases,
+especially for complicated RR field values, for which the
+corresponding class documentation should be referenced.
+
+One special string syntax that is worth noting is domain names,
+which would naturally be used in many kinds of entries. The simplest
+form of acceptable syntax is a textual representation of domain names
+such as "example.com" (note: names are always assumed to be
+"absolute", so the trailing dot can be omitted). But a domain name in
+the wire format can also contain a compression pointer. This script
+provides simple support for name compression with a special notation
+of "ptr=nn" where nn is the numeric pointer value (decimal). For example,
+if the NSDNAME field of an NS RDATA is specified as follows:
+
+    nsname: ns.ptr=12
+
+this script will generate the following output:
+
+    # NS name=ns.ptr=12
+    026e73c00c
+
+** EXTEND THE SCRIPT **
+
+This script is expected to be extended as we add more support for
+various types of RR. It is encouraged to add support for a new type
+of RR to this script as we see the need for testing that type. Here
+is a brief description of how to do that.
+
+Assume you are adding support for a "FOO" RR. Also assume that the FOO
+RDATA contains a single field named "value".
+
+What you are expected to do is as follows:
+
+- Define a new class named "FOO", derived from the RR class. Also
+  define a class variable named "value" for the FOO RDATA field (the
+  variable name can be different from the field name, but it's
+  convenient if it is easily identifiable) with an appropriate
+  default value (if possible):
+
+    class FOO(RR):
+        value = 10
+
+  The name of the variable will be (automatically) used as the
+  corresponding entry name in the spec file. So, a spec file that
+  sets this field to 20 would look like this:
+
+    [foo]
+    value: 20
+
+- Define the "dump()" method for class FOO. It must call
+  self.dump_header() (which is derived from class RR) at the
+  beginning. It then prints the RDATA field values in an appropriate
+  way. Assuming the value is a 16-bit integer field, a complete
+  dump() method would look like this:
+
+    def dump(self, f):
+        if self.rdlen is None:
+            self.rdlen = 2
+        self.dump_header(f, self.rdlen)
+        f.write('# Value=%d\\n' % (self.value))
+        f.write('%04x\\n' % (self.value))
+
+  The first f.write() call is not mandatory, but is encouraged to
+  be provided so that the generated files will be more human readable.
+  Depending on the complexity of the RDATA fields, the dump()
+  implementation would be more complicated.
In particular, if the + RDATA length is variable and the RDLEN field value is not specified + in the spec file, the dump() method is normally expected to + calculate the correct length and pass it to dump_header(). See the + implementation of various derived classes of class RR for actual + examples. +""" + +import configparser, re, time, socket, sys +from datetime import datetime +from optparse import OptionParser + +re_hex = re.compile(r'^0x[0-9a-fA-F]+') +re_decimal = re.compile(r'^\d+$') +re_string = re.compile(r"\'(.*)\'$") + +dnssec_timefmt = '%Y%m%d%H%M%S' + +dict_qr = { 'query' : 0, 'response' : 1 } +dict_opcode = { 'query' : 0, 'iquery' : 1, 'status' : 2, 'notify' : 4, + 'update' : 5 } +rdict_opcode = dict([(dict_opcode[k], k.upper()) for k in dict_opcode.keys()]) +dict_rcode = { 'noerror' : 0, 'formerr' : 1, 'servfail' : 2, 'nxdomain' : 3, + 'notimp' : 4, 'refused' : 5, 'yxdomain' : 6, 'yxrrset' : 7, + 'nxrrset' : 8, 'notauth' : 9, 'notzone' : 10 } +rdict_rcode = dict([(dict_rcode[k], k.upper()) for k in dict_rcode.keys()]) +dict_rrtype = { 'none' : 0, 'a' : 1, 'ns' : 2, 'md' : 3, 'mf' : 4, 'cname' : 5, + 'soa' : 6, 'mb' : 7, 'mg' : 8, 'mr' : 9, 'null' : 10, + 'wks' : 11, 'ptr' : 12, 'hinfo' : 13, 'minfo' : 14, 'mx' : 15, + 'txt' : 16, 'rp' : 17, 'afsdb' : 18, 'x25' : 19, 'isdn' : 20, + 'rt' : 21, 'nsap' : 22, 'nsap_tr' : 23, 'sig' : 24, 'key' : 25, + 'px' : 26, 'gpos' : 27, 'aaaa' : 28, 'loc' : 29, 'nxt' : 30, + 'srv' : 33, 'naptr' : 35, 'kx' : 36, 'cert' : 37, 'a6' : 38, + 'dname' : 39, 'opt' : 41, 'apl' : 42, 'ds' : 43, 'sshfp' : 44, + 'ipseckey' : 45, 'rrsig' : 46, 'nsec' : 47, 'dnskey' : 48, + 'dhcid' : 49, 'nsec3' : 50, 'nsec3param' : 51, 'hip' : 55, + 'spf' : 99, 'unspec' : 103, 'tkey' : 249, 'tsig' : 250, + 'dlv' : 32769, 'ixfr' : 251, 'axfr' : 252, 'mailb' : 253, + 'maila' : 254, 'any' : 255 } +rdict_rrtype = dict([(dict_rrtype[k], k.upper()) for k in dict_rrtype.keys()]) +dict_rrclass = { 'in' : 1, 'ch' : 3, 'hs' : 4, 'any' : 255 } +rdict_rrclass = dict([(dict_rrclass[k], k.upper()) for k in \ + dict_rrclass.keys()]) +dict_algorithm = { 'rsamd5' : 1, 'dh' : 2, 'dsa' : 3, 'ecc' : 4, + 'rsasha1' : 5 } +dict_nsec3_algorithm = { 'reserved' : 0, 'sha1' : 1 } +rdict_algorithm = dict([(dict_algorithm[k], k.upper()) for k in \ + dict_algorithm.keys()]) +rdict_nsec3_algorithm = dict([(dict_nsec3_algorithm[k], k.upper()) for k in \ + dict_nsec3_algorithm.keys()]) + +header_xtables = { 'qr' : dict_qr, 'opcode' : dict_opcode, + 'rcode' : dict_rcode } +question_xtables = { 'rrtype' : dict_rrtype, 'rrclass' : dict_rrclass } + +def parse_value(value, xtable = {}): + if re.search(re_hex, value): + return int(value, 16) + if re.search(re_decimal, value): + return int(value) + m = re.match(re_string, value) + if m: + return m.group(1) + lovalue = value.lower() + if lovalue in xtable: + return xtable[lovalue] + return value + +def code_totext(code, dict): + if code in dict.keys(): + return dict[code] + '(' + str(code) + ')' + return str(code) + +def encode_name(name, absolute=True): + # make sure the name is dot-terminated. duplicate dots will be ignored + # below. + name += '.' 
+ labels = name.split('.') + wire = '' + for l in labels: + if len(l) > 4 and l[0:4] == 'ptr=': + # special meta-syntax for compression pointer + wire += '%04x' % (0xc000 | int(l[4:])) + break + if absolute or len(l) > 0: + wire += '%02x' % len(l) + wire += ''.join(['%02x' % ord(ch) for ch in l]) + if len(l) == 0: + break + return wire + +def encode_string(name, len=None): + if type(name) is int and len is not None: + return '%0.*x' % (len * 2, name) + return ''.join(['%02x' % ord(ch) for ch in name]) + +def count_namelabels(name): + if name == '.': # special case + return 0 + m = re.match('^(.*)\.$', name) + if m: + name = m.group(1) + return len(name.split('.')) + +def get_config(config, section, configobj, xtables = {}): + try: + for field in config.options(section): + value = config.get(section, field) + if field in xtables.keys(): + xtable = xtables[field] + else: + xtable = {} + configobj.__dict__[field] = parse_value(value, xtable) + except configparser.NoSectionError: + return False + return True + +def print_header(f, input_file): + f.write('''### +### This data file was auto-generated from ''' + input_file + ''' +### +''') + +class Name: + '''Implements rendering a single domain name in the test data format. + + Configurable parameter is as follows (see the description of the + same name of attribute for the default value): + - name (string): A textual representation of the name, such as + 'example.com'. + - pointer (int): If specified, compression pointer will be + prepended to the generated data with the offset being the value + of this parameter. + ''' + + name = 'example.com' + pointer = None # no compression by default + def dump(self, f): + name = self.name + if self.pointer is not None: + if len(name) > 0 and name[-1] != '.': + name += '.' + name += 'ptr=%d' % self.pointer + name_wire = encode_name(name) + f.write('\n# DNS Name: %s' % self.name) + if self.pointer is not None: + f.write(' + compression pointer: %d' % self.pointer) + f.write('\n') + f.write('%s' % name_wire) + f.write('\n') + +class DNSHeader: + '''Implements rendering a DNS Header section in the test data format. + + Configurable parameter is as follows (see the description of the + same name of attribute for the default value): + - id (16-bit int): + - qr, aa, tc, rd, ra, ad, cd (0 or 1): Standard header bits as + defined in RFC1035 and RFC4035. If set to 1, the corresponding + bit will be set; if set to 0, it will be cleared. + - mbz (0-3): The reserved field of the 3rd and 4th octets of the + header. + - rcode (4-bit int or string): The RCODE field. If specified as a + string, it must be the commonly used textual mnemonic of the RCODEs + (NOERROR, FORMERR, etc, case insensitive). + - opcode (4-bit int or string): The OPCODE field. If specified as + a string, it must be the commonly used textual mnemonic of the + OPCODEs (QUERY, NOTIFY, etc, case insensitive). + - qdcount, ancount, nscount, arcount (16-bit int): The QD/AN/NS/AR + COUNT fields, respectively. 
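+
+    As an illustration only (not an exhaustive list of the parameters
+    above, and the values shown are merely examples), a spec file
+    "header" section for a response carrying a single answer could look
+    like this:
+
+      [header]
+      qr: 1
+      rcode: noerror
+      ancount: 1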
+ ''' + + id = 0x1035 + (qr, aa, tc, rd, ra, ad, cd) = 0, 0, 0, 0, 0, 0, 0 + mbz = 0 + rcode = 0 # noerror + opcode = 0 # query + (qdcount, ancount, nscount, arcount) = 1, 0, 0, 0 + + def dump(self, f): + f.write('\n# Header Section\n') + f.write('# ID=' + str(self.id)) + f.write(' QR=' + ('Response' if self.qr else 'Query')) + f.write(' Opcode=' + code_totext(self.opcode, rdict_opcode)) + f.write(' Rcode=' + code_totext(self.rcode, rdict_rcode)) + f.write('%s' % (' AA' if self.aa else '')) + f.write('%s' % (' TC' if self.tc else '')) + f.write('%s' % (' RD' if self.rd else '')) + f.write('%s' % (' AD' if self.ad else '')) + f.write('%s' % (' CD' if self.cd else '')) + f.write('\n') + f.write('%04x ' % self.id) + flag_and_code = 0 + flag_and_code |= (self.qr << 15 | self.opcode << 14 | self.aa << 10 | + self.tc << 9 | self.rd << 8 | self.ra << 7 | + self.mbz << 6 | self.ad << 5 | self.cd << 4 | + self.rcode) + f.write('%04x\n' % flag_and_code) + f.write('# QDCNT=%d, ANCNT=%d, NSCNT=%d, ARCNT=%d\n' % + (self.qdcount, self.ancount, self.nscount, self.arcount)) + f.write('%04x %04x %04x %04x\n' % (self.qdcount, self.ancount, + self.nscount, self.arcount)) + +class DNSQuestion: + '''Implements rendering a DNS question in the test data format. + + Configurable parameter is as follows (see the description of the + same name of attribute for the default value): + - name (string): The QNAME. The string must be interpreted as a + valid domain name. + - rrtype (int or string): The question type. If specified + as an integer, it must be the 16-bit RR type value of the + covered type. If specifed as a string, it must be the textual + mnemonic of the type. + - rrclass (int or string): The question class. If specified as an + integer, it must be the 16-bit RR class value of the covered + type. If specifed as a string, it must be the textual mnemonic + of the class. + ''' + name = 'example.com.' + rrtype = parse_value('A', dict_rrtype) + rrclass = parse_value('IN', dict_rrclass) + + def dump(self, f): + f.write('\n# Question Section\n') + f.write('# QNAME=%s QTYPE=%s QCLASS=%s\n' % + (self.name, + code_totext(self.rrtype, rdict_rrtype), + code_totext(self.rrclass, rdict_rrclass))) + f.write(encode_name(self.name)) + f.write(' %04x %04x\n' % (self.rrtype, self.rrclass)) + +class EDNS: + '''Implements rendering EDNS OPT RR in the test data format. + + Configurable parameter is as follows (see the description of the + same name of attribute for the default value): + - name (string): The owner name of the OPT RR. The string must be + interpreted as a valid domain name. + - udpsize (16-bit int): The UDP payload size (set as the RR class) + - extrcode (8-bit int): The upper 8 bits of the extended RCODE. + - version (8-bit int): The EDNS version. + - do (int): The DNSSEC DO bit. The bit will be set if this value + is 1; otherwise the bit will be unset. + - mbz (15-bit int): The rest of the flags field. + - rdlen (16-bit int): The RDLEN field. Note: right now specifying + a non 0 value (except for making bogus data) doesn't make sense + because there is no way to configure RDATA. + ''' + name = '.' 
+ udpsize = 4096 + extrcode = 0 + version = 0 + do = 0 + mbz = 0 + rdlen = 0 + def dump(self, f): + f.write('\n# EDNS OPT RR\n') + f.write('# NAME=%s TYPE=%s UDPSize=%d ExtRcode=%s Version=%s DO=%d\n' % + (self.name, code_totext(dict_rrtype['opt'], rdict_rrtype), + self.udpsize, self.extrcode, self.version, + 1 if self.do else 0)) + + code_vers = (self.extrcode << 8) | (self.version & 0x00ff) + extflags = (self.do << 15) | (self.mbz & ~0x8000) + f.write('%s %04x %04x %04x %04x\n' % + (encode_name(self.name), dict_rrtype['opt'], self.udpsize, + code_vers, extflags)) + f.write('# RDLEN=%d\n' % self.rdlen) + f.write('%04x\n' % self.rdlen) + +class RR: + '''This is a base class for various types of RR test data. + For each RR type (A, AAAA, NS, etc), we define a derived class of RR + to dump type specific RDATA parameters. This class defines parameters + common to all types of RDATA, namely the owner name, RR class and TTL. + The dump() method of derived classes are expected to call dump_header(), + whose default implementation is provided in this class. This method + decides whether to dump the test data as an RR (with name, type, class) + or only as RDATA (with its length), and dumps the corresponding data + via the specified file object. + + By convention we assume derived classes are named after the common + standard mnemonic of the corresponding RR types. For example, the + derived class for the RR type SOA should be named "SOA". + + Configurable parameters are as follows: + - as_rr (bool): Whether or not the data is to be dumped as an RR. + False by default. + - rr_name (string): The owner name of the RR. The string must be + interpreted as a valid domain name (compression pointer can be + contained). Default is 'example.com.' + - rr_class (string): The RR class of the data. Only meaningful + when the data is dumped as an RR. Default is 'IN'. + - rr_ttl (int): The TTL value of the RR. Only meaningful when + the data is dumped as an RR. Default is 86400 (1 day). + - rdlen (int): 16-bit RDATA length. It can be None (i.e. omitted + in the spec file), in which case the actual length of the + generated RDATA is automatically determined and used; if + negative, the RDLEN field will be omitted from the output data. + (Note that omitting RDLEN with as_rr being True is mostly + meaningless, although the script doesn't complain about it). + Default is None. + ''' + + def __init__(self): + self.as_rr = False + # only when as_rr is True, same for class/TTL: + self.rr_name = 'example.com' + self.rr_class = 'IN' + self.rr_ttl = 86400 + self.rdlen = None + + def dump_header(self, f, rdlen): + type_txt = self.__class__.__name__ + type_code = parse_value(type_txt, dict_rrtype) + rdlen_spec = '' + rdlen_data = '' + if rdlen >= 0: + rdlen_spec = ', RDLEN=%d' % rdlen + rdlen_data = '%04x' % rdlen + if self.as_rr: + rrclass = parse_value(self.rr_class, dict_rrclass) + f.write('\n# %s RR (QNAME=%s Class=%s TTL=%d%s)\n' % + (type_txt, self.rr_name, + code_totext(rrclass, rdict_rrclass), self.rr_ttl, + rdlen_spec)) + f.write('%s %04x %04x %08x %s\n' % + (encode_name(self.rr_name), type_code, rrclass, + self.rr_ttl, rdlen_data)) + else: + f.write('\n# %s RDATA%s\n' % (type_txt, rdlen_spec)) + f.write('%s\n' % rdlen_data) + +class A(RR): + '''Implements rendering A RDATA (of class IN) in the test data format. + + Configurable parameter is as follows (see the description of the + same name of attribute for the default value): + - address (string): The address field. This must be a valid textual + IPv4 address. 
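+
+    For example, mirroring the sample spec at the top of this file, a
+    section that dumps this data as a full RR with a non-default address
+    could look like:
+
+      [a]
+      as_rr: True
+      address: 192.0.2.2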
+ ''' + RDLEN_DEFAULT = 4 # fixed by default + address = '192.0.2.1' + + def dump(self, f): + if self.rdlen is None: + self.rdlen = self.RDLEN_DEFAULT + self.dump_header(f, self.rdlen) + f.write('# Address=%s\n' % (self.address)) + bin_address = socket.inet_aton(self.address) + f.write('%02x%02x%02x%02x\n' % (bin_address[0], bin_address[1], + bin_address[2], bin_address[3])) + +class AAAA(RR): + '''Implements rendering AAAA RDATA (of class IN) in the test data + format. + + Configurable parameter is as follows (see the description of the + same name of attribute for the default value): + - address (string): The address field. This must be a valid textual + IPv6 address. + ''' + RDLEN_DEFAULT = 16 # fixed by default + address = '2001:db8::1' + + def dump(self, f): + if self.rdlen is None: + self.rdlen = self.RDLEN_DEFAULT + self.dump_header(f, self.rdlen) + f.write('# Address=%s\n' % (self.address)) + bin_address = socket.inet_pton(socket.AF_INET6, self.address) + [f.write('%02x' % x) for x in bin_address] + f.write('\n') + +class NS(RR): + '''Implements rendering NS RDATA in the test data format. + + Configurable parameter is as follows (see the description of the + same name of attribute for the default value): + - nsname (string): The NSDNAME field. The string must be + interpreted as a valid domain name. + ''' + + nsname = 'ns.example.com' + + def dump(self, f): + nsname_wire = encode_name(self.nsname) + if self.rdlen is None: + self.rdlen = len(nsname_wire) / 2 + self.dump_header(f, self.rdlen) + f.write('# NS name=%s\n' % (self.nsname)) + f.write('%s\n' % nsname_wire) + +class SOA(RR): + '''Implements rendering SOA RDATA in the test data format. + + Configurable parameters are as follows (see the description of the + same name of attribute for the default value): + - mname/rname (string): The MNAME/RNAME fields, respectively. The + string must be interpreted as a valid domain name. + - serial (32-bit int): The SERIAL field + - refresh (32-bit int): The REFRESH field + - retry (32-bit int): The RETRY field + - expire (32-bit int): The EXPIRE field + - minimum (32-bit int): The MINIMUM field + ''' + + mname = 'ns.example.com' + rname = 'root.example.com' + serial = 2010012601 + refresh = 3600 + retry = 300 + expire = 3600000 + minimum = 1200 + def dump(self, f): + mname_wire = encode_name(self.mname) + rname_wire = encode_name(self.rname) + if self.rdlen is None: + self.rdlen = int(20 + len(mname_wire) / 2 + len(str(rname_wire)) / 2) + self.dump_header(f, self.rdlen) + f.write('# NNAME=%s RNAME=%s\n' % (self.mname, self.rname)) + f.write('%s %s\n' % (mname_wire, rname_wire)) + f.write('# SERIAL(%d) REFRESH(%d) RETRY(%d) EXPIRE(%d) MINIMUM(%d)\n' % + (self.serial, self.refresh, self.retry, self.expire, + self.minimum)) + f.write('%08x %08x %08x %08x %08x\n' % (self.serial, self.refresh, + self.retry, self.expire, + self.minimum)) + +class TXT(RR): + '''Implements rendering TXT RDATA in the test data format. + + Configurable parameters are as follows (see the description of the + same name of attribute for the default value): + - nstring (int): number of character-strings + - stringlenN (int) (int, N = 0, ..., nstring-1): the length of the + N-th character-string. + - stringN (string, N = 0, ..., nstring-1): the N-th + character-string. + - stringlen (int): the default string. If nstring >= 1 and the + corresponding stringlenN isn't specified in the spec file, this + value will be used. If this parameter isn't specified either, + the length of the string will be used. 
+      Note that it means this parameter (or any stringlenN) doesn't
+      have to be specified unless you want to intentionally build a
+      broken character string.
+    - string (string): the default string.  If nstring >= 1 and the
+      corresponding stringN isn't specified in the spec file, this
+      string will be used.
+    '''
+
+    nstring = 1
+    stringlen = None
+    string = 'Test String'
+
+    def dump(self, f):
+        stringlen_list = []
+        string_list = []
+        wirestring_list = []
+        for i in range(0, self.nstring):
+            key_string = 'string' + str(i)
+            if key_string in self.__dict__:
+                string_list.append(self.__dict__[key_string])
+            else:
+                string_list.append(self.string)
+            wirestring_list.append(encode_string(string_list[-1]))
+            key_stringlen = 'stringlen' + str(i)
+            if key_stringlen in self.__dict__:
+                stringlen_list.append(self.__dict__[key_stringlen])
+            else:
+                stringlen_list.append(self.stringlen)
+            if stringlen_list[-1] is None:
+                stringlen_list[-1] = int(len(wirestring_list[-1]) / 2)
+        if self.rdlen is None:
+            self.rdlen = int(len(''.join(wirestring_list)) / 2) + self.nstring
+        self.dump_header(f, self.rdlen)
+        for i in range(0, self.nstring):
+            f.write('# String Len=%d, String=\"%s\"\n' %
+                    (stringlen_list[i], string_list[i]))
+            f.write('%02x%s%s\n' % (stringlen_list[i],
+                                    ' ' if len(wirestring_list[i]) > 0 else '',
+                                    wirestring_list[i]))
+
+class RP(RR):
+    '''Implements rendering RP RDATA in the test data format.
+
+    Configurable parameters are as follows (see the description of the
+    same name of attribute for the default value):
+    - mailbox (string): The mailbox field.
+    - text (string): The text field.
+    These strings must be interpreted as valid domain names.
+    '''
+    mailbox = 'root.example.com'
+    text = 'rp-text.example.com'
+    def dump(self, f):
+        mailbox_wire = encode_name(self.mailbox)
+        text_wire = encode_name(self.text)
+        if self.rdlen is None:
+            self.rdlen = int((len(mailbox_wire) + len(text_wire)) / 2)
+        else:
+            self.rdlen = int(self.rdlen)
+        self.dump_header(f, self.rdlen)
+        f.write('# MAILBOX=%s TEXT=%s\n' % (self.mailbox, self.text))
+        f.write('%s %s\n' % (mailbox_wire, text_wire))
+
+class MINFO(RR):
+    '''Implements rendering MINFO RDATA in the test data format.
+
+    Configurable parameters are as follows (see the description of the
+    same name of attribute for the default value):
+    - rmailbox (string): The rmailbox field.
+    - emailbox (string): The emailbox field.
+    These strings must be interpreted as valid domain names.
+    '''
+    rmailbox = 'rmailbox.example.com'
+    emailbox = 'emailbox.example.com'
+    def dump(self, f):
+        rmailbox_wire = encode_name(self.rmailbox)
+        emailbox_wire = encode_name(self.emailbox)
+        if self.rdlen is None:
+            self.rdlen = int((len(rmailbox_wire) + len(emailbox_wire)) / 2)
+        else:
+            self.rdlen = int(self.rdlen)
+        self.dump_header(f, self.rdlen)
+        f.write('# RMAILBOX=%s EMAILBOX=%s\n' % (self.rmailbox, self.emailbox))
+        f.write('%s %s\n' % (rmailbox_wire, emailbox_wire))
+
+class AFSDB(RR):
+    '''Implements rendering AFSDB RDATA in the test data format.
+
+    Configurable parameters are as follows (see the description of the
+    same name of attribute for the default value):
+    - subtype (16-bit int): The subtype field.
+    - server (string): The server field.
+      The string must be interpreted as a valid domain name.
+    '''
+    subtype = 1
+    server = 'afsdb.example.com'
+    def dump(self, f):
+        server_wire = encode_name(self.server)
+        if self.rdlen is None:
+            self.rdlen = int(2 + len(server_wire) / 2)
+        else:
+            self.rdlen = int(self.rdlen)
+        self.dump_header(f, self.rdlen)
+        f.write('# SUBTYPE=%d SERVER=%s\n' % (self.subtype, self.server))
+        f.write('%04x %s\n' % (self.subtype, server_wire))
+
+class NSECBASE(RR):
+    '''Implements rendering NSEC/NSEC3 type bitmaps commonly used for
+    these RRs.  The NSEC and NSEC3 classes are derived from this
+    class.
+
+    Configurable parameters are as follows (see the description of the
+    same name of attribute for the default value):
+    - nbitmap (int): The number of type bitmaps.
+    The following three define the bitmaps.  If suffixed with "N"
+    (0 <= N < nbitmap), it means the definition for the N-th bitmap.
+    If there is no suffix (e.g., just "block"), it means the default
+    for any unspecified values.
+    - block[N] (8-bit int): The Window Block.
+    - maplen[N] (8-bit int): The Bitmap Length.  The default "maplen"
+      can also be unspecified (by being set to None), in which case
+      the corresponding length will be calculated from the bitmap.
+    - bitmap[N] (string): The Bitmap.  This must be the hexadecimal
+      representation of the bitmap field.  For example, for a bitmap
+      where the 7th and 15th bits (and only these bits) are set, it
+      must be '0101'.  Note also that the value must be quoted with
+      single quotation marks because it could also be interpreted as
+      an integer.
+    '''
+    nbitmap = 1                 # number of bitmaps
+    block = 0
+    maplen = None               # default bitmap length, auto-calculate
+    bitmap = '040000000003'     # an arbitrarily chosen bitmap sample
+    def dump(self, f):
+        # first, construct the bitmap data
+        block_list = []
+        maplen_list = []
+        bitmap_list = []
+        for i in range(0, self.nbitmap):
+            key_bitmap = 'bitmap' + str(i)
+            if key_bitmap in self.__dict__:
+                bitmap_list.append(self.__dict__[key_bitmap])
+            else:
+                bitmap_list.append(self.bitmap)
+            key_maplen = 'maplen' + str(i)
+            if key_maplen in self.__dict__:
+                maplen_list.append(self.__dict__[key_maplen])
+            else:
+                maplen_list.append(self.maplen)
+            if maplen_list[-1] is None: # calculate it if not specified
+                maplen_list[-1] = int(len(bitmap_list[-1]) / 2)
+            key_block = 'block' + str(i)
+            if key_block in self.__dict__:
+                block_list.append(self.__dict__[key_block])
+            else:
+                block_list.append(self.block)
+
+        # dump RR-type specific part (NSEC or NSEC3)
+        self.dump_fixedpart(f, 2 * self.nbitmap + \
+                                int(len(''.join(bitmap_list)) / 2))
+
+        # dump the bitmap
+        for i in range(0, self.nbitmap):
+            f.write('# Bitmap: Block=%d, Length=%d\n' %
+                    (block_list[i], maplen_list[i]))
+            f.write('%02x %02x %s\n' %
+                    (block_list[i], maplen_list[i], bitmap_list[i]))
+
+class NSEC(NSECBASE):
+    '''Implements rendering NSEC RDATA in the test data format.
+
+    Configurable parameters are as follows (see the description of the
+    same name of attribute for the default value):
+    - Type bitmap related parameters: see class NSECBASE
+    - nextname (string): The Next Domain Name field.  The string must be
+      interpreted as a valid domain name.
+    '''
+
+    nextname = 'next.example.com'
+    def dump_fixedpart(self, f, bitmap_totallen):
+        name_wire = encode_name(self.nextname)
+        if self.rdlen is None:
+            # if rdlen needs to be calculated, it must be based on the bitmap
+            # length, because the configured maplen can be fake.
+            self.rdlen = int(len(name_wire) / 2) + bitmap_totallen
+        self.dump_header(f, self.rdlen)
+        f.write('# Next Name=%s (%d bytes)\n' % (self.nextname,
+                                                 int(len(name_wire) / 2)))
+        f.write('%s\n' % name_wire)
+
+class NSEC3(NSECBASE):
+    '''Implements rendering NSEC3 RDATA in the test data format.
+
+    Configurable parameters are as follows (see the description of the
+    same name of attribute for the default value):
+    - Type bitmap related parameters: see class NSECBASE
+    - hashalg (8-bit int): The Hash Algorithm field.  Note that
+      currently the only defined algorithm is SHA-1, for which a value
+      of 1 will be used, and it's the default.  So this implementation
+      does not support any string representation right now.
+    - optout (bool): The Opt-Out flag of the Flags field.
+    - mbz (7-bit int): The rest of the Flags field.  This value will
+      be left-shifted by 1 bit and then OR-ed with optout to
+      construct the complete Flags field.
+    - iterations (16-bit int): The Iterations field.
+    - saltlen (int): The Salt Length field.
+    - salt (string): The Salt field.  It is converted to a sequence of
+      ascii codes and its hexadecimal representation will be used.
+    - hashlen (int): The Hash Length field.
+    - hash (string): The Next Hashed Owner Name field.  This parameter
+      is interpreted in the same way as "salt".
+    '''
+
+    hashalg = 1                 # SHA-1
+    optout = False              # opt-out flag
+    mbz = 0                     # other flag fields (none defined yet)
+    iterations = 1
+    saltlen = 5
+    salt = 's' * saltlen
+    hashlen = 20
+    hash = 'h' * hashlen
+    def dump_fixedpart(self, f, bitmap_totallen):
+        if self.rdlen is None:
+            # if rdlen needs to be calculated, it must be based on the bitmap
+            # length, because the configured maplen can be fake.
+            self.rdlen = 4 + 1 + len(self.salt) + 1 + len(self.hash) \
+                + bitmap_totallen
+        self.dump_header(f, self.rdlen)
+        optout_val = 1 if self.optout else 0
+        f.write('# Hash Alg=%s, Opt-Out=%d, Other Flags=%0x, Iterations=%d\n' %
+                (code_totext(self.hashalg, rdict_nsec3_algorithm),
+                 optout_val, self.mbz, self.iterations))
+        f.write('%02x %02x %04x\n' %
+                (self.hashalg, (self.mbz << 1) | optout_val, self.iterations))
+        f.write("# Salt Len=%d, Salt='%s'\n" % (self.saltlen, self.salt))
+        f.write('%02x%s%s\n' % (self.saltlen,
+                                ' ' if len(self.salt) > 0 else '',
+                                encode_string(self.salt)))
+        f.write("# Hash Len=%d, Hash='%s'\n" % (self.hashlen, self.hash))
+        f.write('%02x%s%s\n' % (self.hashlen,
+                                ' ' if len(self.hash) > 0 else '',
+                                encode_string(self.hash)))
+
+class RRSIG(RR):
+    '''Implements rendering RRSIG RDATA in the test data format.
+
+    Configurable parameters are as follows (see the description of the
+    same name of attribute for the default value):
+    - covered (int or string): The Type Covered field.  If specified
+      as an integer, it must be the 16-bit RR type value of the
+      covered type.  If specified as a string, it must be the textual
+      mnemonic of the type.
+    - algorithm (int or string): The Algorithm field.  If specified
+      as an integer, it must be the 8-bit algorithm number as defined
+      in RFC4034.  If specified as a string, it must be one of the keys
+      of dict_algorithm (case insensitive).
+    - labels (int): The Labels field.  If omitted (the corresponding
+      variable being set to None), the number of labels of "signer"
+      (excluding the trailing null label as specified in RFC4034) will
+      be used.
+    - originalttl (32-bit int): The Original TTL field.
+    - expiration (32-bit int): The Signature Expiration field.
+    - inception (32-bit int): The Signature Inception field.
+    - tag (16-bit int): The Key Tag field.
+    - signer (string): The Signer's Name field.  The string must be
+      interpreted as a valid domain name.
+    - signature (int): The Signature field.  Right now only a simple
+      integer form is supported.  A prefix of "0" will be prepended if
+      the resulting hexadecimal representation consists of an odd
+      number of characters.
+    '''
+
+    covered = 'A'
+    algorithm = 'RSASHA1'
+    labels = None               # auto-calculate (#labels of signer)
+    originalttl = 3600
+    expiration = int(time.mktime(datetime.strptime('20100131120000',
+                                                   dnssec_timefmt).timetuple()))
+    inception = int(time.mktime(datetime.strptime('20100101120000',
+                                                  dnssec_timefmt).timetuple()))
+    tag = 0x1035
+    signer = 'example.com'
+    signature = 0x123456789abcdef123456789abcdef
+
+    def dump(self, f):
+        name_wire = encode_name(self.signer)
+        sig_wire = '%x' % self.signature
+        if len(sig_wire) % 2 != 0:
+            sig_wire = '0' + sig_wire
+        if self.rdlen is None:
+            self.rdlen = int(18 + len(name_wire) / 2 + len(sig_wire) / 2)
+        self.dump_header(f, self.rdlen)
+
+        if type(self.covered) is str:
+            self.covered = dict_rrtype[self.covered.lower()]
+        if type(self.algorithm) is str:
+            self.algorithm = dict_algorithm[self.algorithm.lower()]
+        if self.labels is None:
+            self.labels = count_namelabels(self.signer)
+        f.write('# Covered=%s Algorithm=%s Labels=%d OrigTTL=%d\n' %
+                (code_totext(self.covered, rdict_rrtype),
+                 code_totext(self.algorithm, rdict_algorithm), self.labels,
+                 self.originalttl))
+        f.write('%04x %02x %02x %08x\n' % (self.covered, self.algorithm,
+                                           self.labels, self.originalttl))
+        f.write('# Expiration=%s, Inception=%s\n' %
+                (str(self.expiration), str(self.inception)))
+        f.write('%08x %08x\n' % (self.expiration, self.inception))
+        f.write('# Tag=%d Signer=%s and Signature\n' % (self.tag, self.signer))
+        f.write('%04x %s %s\n' % (self.tag, name_wire, sig_wire))
+
+class TSIG(RR):
+    '''Implements rendering TSIG RDATA in the test data format.
+
+    As a meta RR type, TSIG uses some uncommon parameters.  This
+    class overrides some of the default attributes of the RR class
+    accordingly:
+    - rr_class is set to 'ANY'
+    - rr_ttl is set to 0
+    Like other derived classes these can be overridden via the spec
+    file.
+
+    Other configurable parameters are as follows (see the description
+    of the same name of attribute for the default value):
+    - algorithm (string): The Algorithm Name field.  The value is
+      generally interpreted as a domain name string, and will
+      typically be one of the standard algorithm names defined in
+      RFC4635.  For convenience, however, a shortcut value "hmac-md5"
+      is allowed instead of the standard "hmac-md5.sig-alg.reg.int".
+    - time_signed (48-bit int): The Time Signed field.
+    - fudge (16-bit int): The Fudge field.
+    - mac_size (int): The MAC Size field.  If omitted, the common value
+      determined by the algorithm will be used.
+    - mac (int or string): The MAC field.  If specified as an integer,
+      the integer value is used as the MAC, possibly with prepended
+      0's so that the total length will be mac_size.  If specified as a
+      string, it is converted to a sequence of ascii codes and its
+      hexadecimal representation will be used.  So, for example, if
+      "mac" is set to 'abc', it will be converted to '616263'.  Note
+      that in this case the length of "mac" may not be equal to
+      mac_size.  If unspecified, the mac_size number of '78' (ascii
+      code of 'x') will be used.
+    - original_id (16-bit int): The Original ID field.
+    - error (16-bit int): The Error field.
+    - other_len (int): The Other Len field.
+ - other_data (int or string): The Other Data field. This is + interpreted just like "mac" except that other_len is used + instead of mac_size. If unspecified this will be empty unless + the "error" is set to 18 (which means the "BADTIME" error), in + which case a hexadecimal representation of "time_signed + fudge + + 1" will be used. + ''' + + algorithm = 'hmac-sha256' + time_signed = 1286978795 # arbitrarily chosen default + fudge = 300 + mac_size = None # use a common value for the algorithm + mac = None # use 'x' * mac_size + original_id = 2845 # arbitrarily chosen default + error = 0 + other_len = None # 6 if error is BADTIME; otherwise 0 + other_data = None # use time_signed + fudge + 1 for BADTIME + dict_macsize = { 'hmac-md5' : 16, 'hmac-sha1' : 20, 'hmac-sha256' : 32 } + + # TSIG has some special defaults + def __init__(self): + super().__init__() + self.rr_class = 'ANY' + self.rr_ttl = 0 + + def dump(self, f): + if str(self.algorithm) == 'hmac-md5': + name_wire = encode_name('hmac-md5.sig-alg.reg.int') + else: + name_wire = encode_name(self.algorithm) + mac_size = self.mac_size + if mac_size is None: + if self.algorithm in self.dict_macsize.keys(): + mac_size = self.dict_macsize[self.algorithm] + else: + raise RuntimeError('TSIG Mac Size cannot be determined') + mac = encode_string('x' * mac_size) if self.mac is None else \ + encode_string(self.mac, mac_size) + other_len = self.other_len + if other_len is None: + # 18 = BADTIME + other_len = 6 if self.error == 18 else 0 + other_data = self.other_data + if other_data is None: + other_data = '%012x' % (self.time_signed + self.fudge + 1) \ + if self.error == 18 else '' + else: + other_data = encode_string(self.other_data, other_len) + if self.rdlen is None: + self.rdlen = int(len(name_wire) / 2 + 16 + len(mac) / 2 + \ + len(other_data) / 2) + self.dump_header(f, self.rdlen) + f.write('# Algorithm=%s Time-Signed=%d Fudge=%d\n' % + (self.algorithm, self.time_signed, self.fudge)) + f.write('%s %012x %04x\n' % (name_wire, self.time_signed, self.fudge)) + f.write('# MAC Size=%d MAC=(see hex)\n' % mac_size) + f.write('%04x%s\n' % (mac_size, ' ' + mac if len(mac) > 0 else '')) + f.write('# Original-ID=%d Error=%d\n' % (self.original_id, self.error)) + f.write('%04x %04x\n' % (self.original_id, self.error)) + f.write('# Other-Len=%d Other-Data=(see hex)\n' % other_len) + f.write('%04x%s\n' % (other_len, + ' ' + other_data if len(other_data) > 0 else '')) + +# Build section-class mapping +config_param = { 'name' : (Name, {}), + 'header' : (DNSHeader, header_xtables), + 'question' : (DNSQuestion, question_xtables), + 'edns' : (EDNS, {}) } +for rrtype in dict_rrtype.keys(): + # For any supported RR types add the tuple of (RR_CLASS, {}). + # We expect KeyError as not all the types are supported, and simply + # ignore them. 
+    try:
+        cur_mod = sys.modules[__name__]
+        config_param[rrtype] = (cur_mod.__dict__[rrtype.upper()], {})
+    except KeyError:
+        pass
+
+def get_config_param(section):
+    s = section
+    m = re.match('^([^:]+)/\d+$', section)
+    if m:
+        s = m.group(1)
+    return config_param[s]
+
+usage = '''usage: %prog [options] input_file'''
+
+if __name__ == "__main__":
+    parser = OptionParser(usage=usage)
+    parser.add_option('-o', '--output', action='store', dest='output',
+                      default=None, metavar='FILE',
+                      help='output file name [default: prefix of input_file]')
+    (options, args) = parser.parse_args()
+
+    if len(args) == 0:
+        parser.error('input file is missing')
+    configfile = args[0]
+
+    outputfile = options.output
+    if not outputfile:
+        m = re.match('(.*)\.[^.]+$', configfile)
+        if m:
+            outputfile = m.group(1)
+        else:
+            raise ValueError('output file is not specified and input file is not in the form of "output_file.suffix"')
+
+    config = configparser.SafeConfigParser()
+    config.read(configfile)
+
+    output = open(outputfile, 'w')
+
+    print_header(output, configfile)
+
+    # First try the 'custom' mode; if it fails assume the query mode.
+    try:
+        sections = config.get('custom', 'sections').split(':')
+    except configparser.NoSectionError:
+        sections = ['header', 'question', 'edns']
+
+    for s in sections:
+        section_param = get_config_param(s)
+        (obj, xtables) = (section_param[0](), section_param[1])
+        if get_config(config, s, obj, xtables):
+            obj.dump(output)
+
+    output.close()
diff --git a/src/lib/util/python/pycppwrapper_util.h b/src/lib/util/python/pycppwrapper_util.h
index fd55c19039..462e7150cb 100644
--- a/src/lib/util/python/pycppwrapper_util.h
+++ b/src/lib/util/python/pycppwrapper_util.h
@@ -94,6 +94,22 @@ public:
 /// the reference to be decreased, the original bare pointer should be
 /// extracted using the \c release() method.
 ///
+/// In some other cases, it would be convenient if it's possible to create
+/// an "empty" container and reset it with a Python object later.
+/// For example, we may want to create a temporary Python object in the
+/// middle of a function and make sure that it's valid within the rest of
+/// the function scope, while we want to make sure its reference is released
+/// when the function returns (either normally or as a result of exception).
+/// To allow this scenario, this class defines the default constructor
+/// and the \c reset() method.  The default constructor creates a class
+/// object holding an "empty" (NULL) Python object, while \c reset() allows
+/// the stored object to be replaced with a new one.  If a valid object
+/// was already set, \c reset() releases its reference.
+/// In general, it's safer to construct the container object with a valid
+/// Python object pointer.  The use of the default constructor and
+/// \c reset() should therefore be restricted to cases where it's
+/// absolutely necessary.
+///
 /// There are two convenience methods for commonly used operations:
 /// \c installAsClassVariable() to add the PyObject as a class variable
 /// and \c installToModule to add the PyObject to a specified python module.
@@ -166,17 +182,28 @@ public:
 /// exception in a python biding written in C/C++.  See the code comment
 /// of the method for more details.
 struct PyObjectContainer {
+    PyObjectContainer() : obj_(NULL) {}
     PyObjectContainer(PyObject* obj) : obj_(obj) {
         if (obj_ == NULL) {
            isc_throw(PyCPPWrapperException, "Unexpected NULL PyObject, "
                      "probably due to short memory");
        }
    }
-    virtual ~PyObjectContainer() {
+    ~PyObjectContainer() {
        if (obj_ != NULL) {
            Py_DECREF(obj_);
        }
    }
+    void reset(PyObject* obj) {
+        if (obj == NULL) {
+            isc_throw(PyCPPWrapperException, "Unexpected NULL PyObject, "
+                      "probably due to short memory");
+        }
+        if (obj_ != NULL) {
+            Py_DECREF(obj_);
+        }
+        obj_ = obj;
+    }
    PyObject* get() {
        return (obj_);
    }
@@ -266,7 +293,7 @@ protected:
 /// \c PyObject_New() to the caller.
 template <typename PYSTRUCT, typename CPPCLASS>
 struct CPPPyObjectContainer : public PyObjectContainer {
-    CPPPyObjectContainer(PYSTRUCT* obj) : PyObjectContainer(obj) {}
+    explicit CPPPyObjectContainer(PYSTRUCT* obj) : PyObjectContainer(obj) {}

     // This method associates a C++ object with the corresponding python
     // object enclosed in this class.
diff --git a/src/lib/util/python/wrapper_template.cc b/src/lib/util/python/wrapper_template.cc
index 691e4bfb42..426ced557f 100644
--- a/src/lib/util/python/wrapper_template.cc
+++ b/src/lib/util/python/wrapper_template.cc
@@ -210,7 +210,7 @@ namespace python {
 // Most of the functions are not actually implemented and NULL here.
 PyTypeObject @cppclass@_type = {
     PyVarObject_HEAD_INIT(NULL, 0)
-    "pydnspp.@CPPCLASS@",
+    "@MODULE@.@CPPCLASS@",
     sizeof(s_@CPPCLASS@),               // tp_basicsize
     0,                                  // tp_itemsize
     reinterpret_cast<destructor>(@CPPCLASS@_destroy),   // tp_dealloc
@@ -222,7 +222,7 @@ PyTypeObject @cppclass@_type = {
     NULL,                               // tp_as_number
     NULL,                               // tp_as_sequence
     NULL,                               // tp_as_mapping
-    NULL,                               // tp_hash 
+    NULL,                               // tp_hash
     NULL,                               // tp_call
     // THIS MAY HAVE TO BE CHANGED TO NULL:
     @CPPCLASS@_str,                     // tp_str
@@ -299,8 +299,8 @@ initModulePart_@CPPCLASS@(PyObject* mod) {

 PyObject*
 create@CPPCLASS@Object(const @CPPCLASS@& source) {
-    @CPPCLASS@Container container =
-        PyObject_New(s_@CPPCLASS@, &@cppclass@_type);
+    @CPPCLASS@Container container(PyObject_New(s_@CPPCLASS@,
+                                               &@cppclass@_type));
     container.set(new @CPPCLASS@(source));
     return (container.release());
 }
diff --git a/src/lib/util/python/wrapper_template.h b/src/lib/util/python/wrapper_template.h
index d68a658e55..be701e1b01 100644
--- a/src/lib/util/python/wrapper_template.h
+++ b/src/lib/util/python/wrapper_template.h
@@ -37,15 +37,15 @@ bool initModulePart_@CPPCLASS@(PyObject* mod);
 // Note: this utility function works only when @CPPCLASS@ is a copy
 // constructable.
 // And, it would only be useful when python binding needs to create this
-// object frequently.  Otherwise, it would (or should) probably better to
+// object frequently.  Otherwise, it would (or should) probably be better to
 // remove the declaration and definition of this function.
 //
-/// This is A simple shortcut to create a python @CPPCLASS@ object (in the
+/// This is a simple shortcut to create a python @CPPCLASS@ object (in the
 /// form of a pointer to PyObject) with minimal exception safety.
 /// On success, it returns a valid pointer to PyObject with a reference
 /// counter of 1; if something goes wrong it throws an exception (it never
 /// returns a NULL pointer).
-/// This function is expected to be called with in a try block
+/// This function is expected to be called within a try block
 /// followed by necessary setup for python exception.
 PyObject* create@CPPCLASS@Object(const @CPPCLASS@& source);
diff --git a/src/lib/util/strutil.cc b/src/lib/util/strutil.cc
index 161f9acd65..ed7fc9b171 100644
--- a/src/lib/util/strutil.cc
+++ b/src/lib/util/strutil.cc
@@ -132,6 +132,17 @@ format(const std::string& format, const std::vector<std::string>& args) {
     return (result);
 }

+std::string
+getToken(std::istringstream& iss) {
+    string token;
+    iss >> token;
+    if (iss.bad() || iss.fail()) {
+        isc_throw(StringTokenError, "could not read token from string");
+    }
+    return (token);
+}
+
+
 } // namespace str
 } // namespace util
 } // namespace isc
diff --git a/src/lib/util/strutil.h b/src/lib/util/strutil.h
index e044c15ff5..021c236b5e 100644
--- a/src/lib/util/strutil.h
+++ b/src/lib/util/strutil.h
@@ -18,7 +18,10 @@
 #include
 #include
 #include
+#include
 #include
+#include
+#include

 namespace isc {
 namespace util {
@@ -26,6 +29,16 @@ namespace str {

 /// \brief A Set of C++ Utilities for Manipulating Strings
+///
+/// \brief A standard string util exception that is thrown if getToken or
+/// tokenToNum are called with bad input data
+///
+class StringTokenError : public Exception {
+public:
+    StringTokenError(const char* file, size_t line, const char* what) :
+        isc::Exception(file, line, what) {}
+};
+
 /// \brief Normalize Backslash
 ///
 /// Only relevant to Windows, this replaces all "\" in a string with "/" and
@@ -140,6 +153,55 @@ std::string format(const std::string& format,
                    const std::vector<std::string>& args);

+/// \brief Returns one token from the given stringstream
+///
+/// Using the >> operator, with basic error checking
+///
+/// \exception StringTokenError if the token cannot be read from the stream
+///
+/// \param iss stringstream to read one token from
+///
+/// \return the first token read from the stringstream
+std::string getToken(std::istringstream& iss);
+
+/// \brief Converts a string token to an *unsigned* integer.
+///
+/// The value is converted using a lexical cast, with error and bounds
+/// checking.
+///
+/// NumType is a *signed* integral type (e.g. int32_t) that is sufficiently
+/// wide to store resulting integers.
+///
+/// BitSize is the maximum number of bits that the resulting integer can take.
+/// This function first checks whether the given token can be converted to
+/// an integer of NumType type.  It then confirms the conversion result is
+/// within the valid range, i.e., [0, 2^BitSize - 1].  The second check is
+/// necessary because lexical_cast where T is an unsigned integer type
+/// doesn't correctly reject negative numbers when compiled with SunStudio.
+///
+/// \exception StringTokenError if the value is out of range, or if it
+/// could not be converted
+///
+/// \param num_token the string token to convert
+///
+/// \return the converted value, of type NumType
+template <typename NumType, int BitSize>
+NumType
+tokenToNum(const std::string& num_token) {
+    NumType num;
+    try {
+        num = boost::lexical_cast<NumType>(num_token);
+    } catch (const boost::bad_lexical_cast& ex) {
+        isc_throw(StringTokenError, "Invalid SRV numeric parameter: " <<
+                  num_token);
+    }
+    if (num < 0 || num >= (static_cast<NumType>(1) << BitSize)) {
+        isc_throw(StringTokenError, "Numeric SRV parameter out of range: " <<
+                  num);
+    }
+    return (num);
+}
+
 } // namespace str
 } // namespace util
 } // namespace isc
diff --git a/src/lib/util/tests/filename_unittest.cc b/src/lib/util/tests/filename_unittest.cc
index 33e6456413..07f3525a18 100644
--- a/src/lib/util/tests/filename_unittest.cc
+++ b/src/lib/util/tests/filename_unittest.cc
@@ -51,42 +51,49 @@ TEST_F(FilenameTest, Components) {
     EXPECT_EQ("/alpha/beta/", fname.directory());
     EXPECT_EQ("gamma", fname.name());
     EXPECT_EQ(".delta", fname.extension());
+    EXPECT_EQ("gamma.delta", fname.nameAndExtension());

     // Directory only
     fname.setName("/gamma/delta/");
     EXPECT_EQ("/gamma/delta/", fname.directory());
     EXPECT_EQ("", fname.name());
     EXPECT_EQ("", fname.extension());
+    EXPECT_EQ("", fname.nameAndExtension());

     // Filename only
     fname.setName("epsilon");
     EXPECT_EQ("", fname.directory());
     EXPECT_EQ("epsilon", fname.name());
     EXPECT_EQ("", fname.extension());
+    EXPECT_EQ("epsilon", fname.nameAndExtension());

     // Extension only
     fname.setName(".zeta");
     EXPECT_EQ("", fname.directory());
     EXPECT_EQ("", fname.name());
     EXPECT_EQ(".zeta", fname.extension());
+    EXPECT_EQ(".zeta", fname.nameAndExtension());

     // Missing directory
     fname.setName("eta.theta");
     EXPECT_EQ("", fname.directory());
     EXPECT_EQ("eta", fname.name());
     EXPECT_EQ(".theta", fname.extension());
+    EXPECT_EQ("eta.theta", fname.nameAndExtension());

     // Missing filename
     fname.setName("/iota/.kappa");
     EXPECT_EQ("/iota/", fname.directory());
     EXPECT_EQ("", fname.name());
     EXPECT_EQ(".kappa", fname.extension());
+    EXPECT_EQ(".kappa", fname.nameAndExtension());

     // Missing extension
     fname.setName("lambda/mu/nu");
     EXPECT_EQ("lambda/mu/", fname.directory());
     EXPECT_EQ("nu", fname.name());
     EXPECT_EQ("", fname.extension());
+    EXPECT_EQ("nu", fname.nameAndExtension());

     // Check that the decomposition can occur in the presence of leading and
     // trailing spaces
@@ -94,18 +101,21 @@ TEST_F(FilenameTest, Components) {
     EXPECT_EQ("lambda/mu/", fname.directory());
     EXPECT_EQ("nu", fname.name());
     EXPECT_EQ("", fname.extension());
+    EXPECT_EQ("nu", fname.nameAndExtension());

     // Empty string
     fname.setName("");
     EXPECT_EQ("", fname.directory());
     EXPECT_EQ("", fname.name());
     EXPECT_EQ("", fname.extension());
+    EXPECT_EQ("", fname.nameAndExtension());

     // ... and just spaces
     fname.setName(" ");
     EXPECT_EQ("", fname.directory());
     EXPECT_EQ("", fname.name());
     EXPECT_EQ("", fname.extension());
+    EXPECT_EQ("", fname.nameAndExtension());

     // Check corner cases - where separators are present, but strings are
     // absent.
@@ -113,16 +123,19 @@ TEST_F(FilenameTest, Components) { EXPECT_EQ("/", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("", fname.nameAndExtension()); fname.setName("."); EXPECT_EQ("", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(".", fname.nameAndExtension()); fname.setName("/."); EXPECT_EQ("/", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(".", fname.nameAndExtension()); // Note that the space is a valid filename here; only leading and trailing // spaces should be trimmed. @@ -130,11 +143,13 @@ TEST_F(FilenameTest, Components) { EXPECT_EQ("/", fname.directory()); EXPECT_EQ(" ", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(" .", fname.nameAndExtension()); fname.setName(" / . "); EXPECT_EQ("/", fname.directory()); EXPECT_EQ(" ", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(" .", fname.nameAndExtension()); } // Check that the expansion with a default works. @@ -177,3 +192,40 @@ TEST_F(FilenameTest, UseAsDefault) { EXPECT_EQ("/s/t/u", fname.useAsDefault("/s/t/u")); EXPECT_EQ("/a/b/c", fname.useAsDefault("")); } + +TEST_F(FilenameTest, setDirectory) { + Filename fname("a.b"); + EXPECT_EQ("", fname.directory()); + EXPECT_EQ("a.b", fname.fullName()); + EXPECT_EQ("a.b", fname.expandWithDefault("")); + + fname.setDirectory("/just/some/dir/"); + EXPECT_EQ("/just/some/dir/", fname.directory()); + EXPECT_EQ("/just/some/dir/a.b", fname.fullName()); + EXPECT_EQ("/just/some/dir/a.b", fname.expandWithDefault("")); + + fname.setDirectory("/just/some/dir"); + EXPECT_EQ("/just/some/dir/", fname.directory()); + EXPECT_EQ("/just/some/dir/a.b", fname.fullName()); + EXPECT_EQ("/just/some/dir/a.b", fname.expandWithDefault("")); + + fname.setDirectory("/"); + EXPECT_EQ("/", fname.directory()); + EXPECT_EQ("/a.b", fname.fullName()); + EXPECT_EQ("/a.b", fname.expandWithDefault("")); + + fname.setDirectory(""); + EXPECT_EQ("", fname.directory()); + EXPECT_EQ("a.b", fname.fullName()); + EXPECT_EQ("a.b", fname.expandWithDefault("")); + + fname = Filename("/first/a.b"); + EXPECT_EQ("/first/", fname.directory()); + EXPECT_EQ("/first/a.b", fname.fullName()); + EXPECT_EQ("/first/a.b", fname.expandWithDefault("")); + + fname.setDirectory("/just/some/dir"); + EXPECT_EQ("/just/some/dir/", fname.directory()); + EXPECT_EQ("/just/some/dir/a.b", fname.fullName()); + EXPECT_EQ("/just/some/dir/a.b", fname.expandWithDefault("")); +} diff --git a/src/lib/util/tests/strutil_unittest.cc b/src/lib/util/tests/strutil_unittest.cc index cd3a9ca811..74bc17d314 100644 --- a/src/lib/util/tests/strutil_unittest.cc +++ b/src/lib/util/tests/strutil_unittest.cc @@ -12,6 +12,8 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. +#include + #include #include @@ -22,17 +24,9 @@ using namespace isc; using namespace isc::util; using namespace std; -class StringUtilTest : public ::testing::Test { -protected: - StringUtilTest() - { - } -}; - - // Check for slash replacement -TEST_F(StringUtilTest, Slash) { +TEST(StringUtilTest, Slash) { string instring = ""; isc::util::str::normalizeSlash(instring); @@ -49,7 +43,7 @@ TEST_F(StringUtilTest, Slash) { // Check that leading and trailing space trimming works -TEST_F(StringUtilTest, Trim) { +TEST(StringUtilTest, Trim) { // Empty and full string. 
EXPECT_EQ("", isc::util::str::trim("")); @@ -71,7 +65,7 @@ TEST_F(StringUtilTest, Trim) { // returned vector; if not as expected, the following references may be invalid // so should not be used. -TEST_F(StringUtilTest, Tokens) { +TEST(StringUtilTest, Tokens) { vector result; // Default delimiters @@ -157,7 +151,7 @@ TEST_F(StringUtilTest, Tokens) { // Changing case -TEST_F(StringUtilTest, ChangeCase) { +TEST(StringUtilTest, ChangeCase) { string mixed("abcDEFghiJKLmno123[]{=+--+]}"); string upper("ABCDEFGHIJKLMNO123[]{=+--+]}"); string lower("abcdefghijklmno123[]{=+--+]}"); @@ -173,7 +167,7 @@ TEST_F(StringUtilTest, ChangeCase) { // Formatting -TEST_F(StringUtilTest, Formatting) { +TEST(StringUtilTest, Formatting) { vector args; args.push_back("arg1"); @@ -213,3 +207,63 @@ TEST_F(StringUtilTest, Formatting) { string format9 = "%s %s"; EXPECT_EQ(format9, isc::util::str::format(format9, args)); } + +TEST(StringUtilTest, getToken) { + string s("a b c"); + istringstream ss(s); + EXPECT_EQ("a", isc::util::str::getToken(ss)); + EXPECT_EQ("b", isc::util::str::getToken(ss)); + EXPECT_EQ("c", isc::util::str::getToken(ss)); + EXPECT_THROW(isc::util::str::getToken(ss), isc::util::str::StringTokenError); +} + +int32_t tokenToNumCall_32_16(const string& token) { + return isc::util::str::tokenToNum(token); +} + +int16_t tokenToNumCall_16_8(const string& token) { + return isc::util::str::tokenToNum(token); +} + +TEST(StringUtilTest, tokenToNum) { + uint32_t num32 = tokenToNumCall_32_16("0"); + EXPECT_EQ(0, num32); + num32 = tokenToNumCall_32_16("123"); + EXPECT_EQ(123, num32); + num32 = tokenToNumCall_32_16("65535"); + EXPECT_EQ(65535, num32); + + EXPECT_THROW(tokenToNumCall_32_16(""), + isc::util::str::StringTokenError); + EXPECT_THROW(tokenToNumCall_32_16("a"), + isc::util::str::StringTokenError); + EXPECT_THROW(tokenToNumCall_32_16("-1"), + isc::util::str::StringTokenError); + EXPECT_THROW(tokenToNumCall_32_16("65536"), + isc::util::str::StringTokenError); + EXPECT_THROW(tokenToNumCall_32_16("1234567890"), + isc::util::str::StringTokenError); + EXPECT_THROW(tokenToNumCall_32_16("-1234567890"), + isc::util::str::StringTokenError); + + uint16_t num16 = tokenToNumCall_16_8("123"); + EXPECT_EQ(123, num16); + num16 = tokenToNumCall_16_8("0"); + EXPECT_EQ(0, num16); + num16 = tokenToNumCall_16_8("255"); + EXPECT_EQ(255, num16); + + EXPECT_THROW(tokenToNumCall_16_8(""), + isc::util::str::StringTokenError); + EXPECT_THROW(tokenToNumCall_16_8("a"), + isc::util::str::StringTokenError); + EXPECT_THROW(tokenToNumCall_16_8("-1"), + isc::util::str::StringTokenError); + EXPECT_THROW(tokenToNumCall_16_8("256"), + isc::util::str::StringTokenError); + EXPECT_THROW(tokenToNumCall_16_8("1234567890"), + isc::util::str::StringTokenError); + EXPECT_THROW(tokenToNumCall_16_8("-1234567890"), + isc::util::str::StringTokenError); + +} diff --git a/tests/system/bindctl/tests.sh b/tests/system/bindctl/tests.sh index 6923c4167c..49ef0f17b0 100755 --- a/tests/system/bindctl/tests.sh +++ b/tests/system/bindctl/tests.sh @@ -24,6 +24,10 @@ SYSTEMTESTTOP=.. status=0 n=0 +# TODO: consider consistency with statistics definition in auth.spec +auth_queries_tcp="\" +auth_queries_udp="\" + echo "I:Checking b10-auth is working by default ($n)" $DIG +norec @10.53.0.1 -p 53210 ns.example.com. 
A >dig.out.$n || status=1 # perform a simple check on the output (digcomp would be too much for this) @@ -40,8 +44,8 @@ echo 'Stats show --csv-file-dir=$BINDCTL_CSV_DIR > bindctl.out.$n || status=1 # the server should have received 1 UDP and 1 TCP queries (TCP query was # sent from the server startup script) -grep "\"auth.queries.tcp\": 1," bindctl.out.$n > /dev/null || status=1 -grep "\"auth.queries.udp\": 1," bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_tcp".*\<1\>" bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_udp".*\<1\>" bindctl.out.$n > /dev/null || status=1 if [ $status != 0 ]; then echo "I:failed"; fi n=`expr $n + 1` @@ -73,8 +77,8 @@ echo 'Stats show ' | $RUN_BINDCTL \ --csv-file-dir=$BINDCTL_CSV_DIR > bindctl.out.$n || status=1 # The statistics counters should have been reset while stop/start. -grep "\"auth.queries.tcp\": 0," bindctl.out.$n > /dev/null || status=1 -grep "\"auth.queries.udp\": 1," bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_tcp".*\<0\>" bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_udp".*\<1\>" bindctl.out.$n > /dev/null || status=1 if [ $status != 0 ]; then echo "I:failed"; fi n=`expr $n + 1` @@ -97,8 +101,8 @@ echo 'Stats show ' | $RUN_BINDCTL \ --csv-file-dir=$BINDCTL_CSV_DIR > bindctl.out.$n || status=1 # The statistics counters shouldn't be reset due to hot-swapping datasource. -grep "\"auth.queries.tcp\": 0," bindctl.out.$n > /dev/null || status=1 -grep "\"auth.queries.udp\": 2," bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_tcp".*\<0\>" bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_udp".*\<2\>" bindctl.out.$n > /dev/null || status=1 if [ $status != 0 ]; then echo "I:failed"; fi n=`expr $n + 1` diff --git a/tests/system/cleanall.sh b/tests/system/cleanall.sh index 17c3d4a6eb..d23d103882 100755 --- a/tests/system/cleanall.sh +++ b/tests/system/cleanall.sh @@ -27,7 +27,10 @@ find . -type f \( \ status=0 -for d in `find . -type d -maxdepth 1 -mindepth 1 -print` +for d in ./.* ./* do + case $d in ./.|./..) continue ;; esac + test -d $d || continue + test ! -f $d/clean.sh || ( cd $d && sh clean.sh ) done diff --git a/tools/system_messages.py b/tools/system_messages.py index 6cf3ce9411..7b0d60cc5a 100644 --- a/tools/system_messages.py +++ b/tools/system_messages.py @@ -58,6 +58,12 @@ SEC_HEADER=""" %version; ]> +