From c136060da6a43da5db7e45b6a32da83f0f7d0820 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Wed, 29 Jun 2011 21:43:57 -0500 Subject: [PATCH 001/175] [jreed-docs-2] remove some spaces at ends of lines in guide --- doc/guide/bind10-guide.xml | 48 +++++++++++++++++++------------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 7d1a006545..3e03ed2e5c 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -129,7 +129,7 @@ The processes started by the bind10 command have names starting with "b10-", including: - + @@ -224,7 +224,7 @@
Managing BIND 10 - + Once BIND 10 is running, a few commands are used to interact directly with the system: @@ -263,7 +263,7 @@ In addition, manual pages are also provided in the default installation. - + - + Starting BIND10 with <command>bind10</command> - BIND 10 provides the bind10 command which + BIND 10 provides the bind10 command which starts up the required processes. bind10 will also restart processes that exit unexpectedly. @@ -694,7 +694,7 @@ Debian and Ubuntu: After starting the b10-msgq communications channel, - bind10 connects to it, + bind10 connects to it, runs the configuration manager, and reads its own configuration. Then it starts the other modules. @@ -752,7 +752,7 @@ Debian and Ubuntu: b10-msgq service. It listens on 127.0.0.1. - + The configuration data item is: - + database_file - + This is an optional string to define the path to find the SQLite3 database file. @@ -1103,7 +1103,7 @@ This may be a temporary setting until then. shutdown - + Stop the authoritative DNS server. @@ -1159,7 +1159,7 @@ This may be a temporary setting until then. $INCLUDE - + Loads an additional zone file. This may be recursive. @@ -1167,7 +1167,7 @@ This may be a temporary setting until then. $ORIGIN - + Defines the relative domain name. @@ -1175,7 +1175,7 @@ This may be a temporary setting until then. $TTL - + Defines the time-to-live value used for following records that don't include a TTL. @@ -1240,7 +1240,7 @@ TODO The current development release of BIND 10 only supports - AXFR. (IXFR is not supported.) + AXFR. (IXFR is not supported.) @@ -1287,7 +1287,7 @@ what if a NOTIFY is sent? The current development release of BIND 10 only supports - AXFR. (IXFR is not supported.) + AXFR. (IXFR is not supported.) Access control is not yet provided. From 688d0a641d4fa7a018fb4f9e131ed1454c68dd15 Mon Sep 17 00:00:00 2001 From: "Jeremy C. 
Reed" Date: Wed, 29 Jun 2011 21:45:12 -0500 Subject: [PATCH 002/175] [jreed-docs-2] add start of access control section and some comments todo wrote about access control for resolver added many comments for things to document. --- doc/guide/bind10-guide.xml | 84 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 84 insertions(+) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 3e03ed2e5c..c894f9cf9b 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1374,6 +1374,67 @@ what is XfroutClient xfr_client?? +
+ Access Control + + + The b10-resolver daemon only accepts + DNS queries from the localhost (127.0.0.1 and ::1). + The query_acl configuration may + be used to reject, drop, or allow specific IPs or networks. + This configuration list is checked in order; the first match wins. + + + + Each entry's action item may be + set to ACCEPT to allow the incoming query, + REJECT to respond with a DNS REFUSED return + code, or DROP to ignore the query without + any response (i.e., a blackhole). For more information, + see the respective debugging messages: RESOLVER_QUERY_ACCEPTED, + RESOLVER_QUERY_REJECTED, + and RESOLVER_QUERY_DROPPED. + + + + The required from item is set + to an IPv4 or IPv6 address, an address with a network mask, or to + the special lowercase keywords any6 (for + any IPv6 address) or any4 (for any IPv4 + address). + + + + + + For example, to allow the 192.168.1.0/24 + network to use your recursive name server, at the + bindctl prompt run: + + + +> config add Resolver/query_acl +> config set Resolver/query_acl[2]/action "ACCEPT" +> config set Resolver/query_acl[2]/from "192.168.1.0/24" +> config commit + + + (Replace the index 2 + as needed; run config show + Resolver/query_acl to see the current entries.) + + + This prototype access control configuration + syntax may be changed. +
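The first-match semantics described above can be sketched in a few lines of Python. This is an illustration only, not BIND 10's implementation: the rule list mirrors the query_acl entries discussed here, and the REJECT fall-through default is an assumption based on the documented behavior of rejecting queries that match no entry.

```python
# Sketch of first-match ACL evaluation (illustration only, NOT BIND 10 code).
# Each rule mirrors a Resolver/query_acl entry: an "action" of
# ACCEPT/REJECT/DROP and a "from" source (address, network, any4, or any6).
import ipaddress

def match_source(rule_from, addr):
    """True if addr matches the rule's "from" specification."""
    ip = ipaddress.ip_address(addr)
    if rule_from == "any4":
        return ip.version == 4
    if rule_from == "any6":
        return ip.version == 6
    net = ipaddress.ip_network(rule_from, strict=False)
    return ip.version == net.version and ip in net

def evaluate_acl(acl, addr, default="REJECT"):
    """Scan the list top to bottom; the first matching rule wins."""
    for rule in acl:
        if match_source(rule["from"], addr):
            return rule["action"]
    return default  # assumed fall-through, mirroring the reject-by-default behavior

acl = [
    {"action": "ACCEPT", "from": "127.0.0.1"},
    {"action": "ACCEPT", "from": "::1"},
    {"action": "ACCEPT", "from": "192.168.1.0/24"},  # the example entry added above
]
print(evaluate_acl(acl, "192.168.1.10"))  # first match wins: ACCEPT
print(evaluate_acl(acl, "10.0.0.1"))      # no match: falls through to REJECT
```

Because evaluation stops at the first match, the order of entries matters: a broad `any4` REJECT placed before a narrower ACCEPT would shadow it.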
+
Forwarding @@ -1533,6 +1594,29 @@ then change those defaults with config set Resolver/forward_addresses[0]/address + + From b4007e4b25d21ba3b693674ca19ead7d202b7de0 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Thu, 30 Jun 2011 10:26:10 -0500 Subject: [PATCH 003/175] [bind10-20110705-release] update version to 20110705 --- configure.ac | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/configure.ac b/configure.ac index 348708fde1..dcedf95b83 100644 --- a/configure.ac +++ b/configure.ac @@ -2,7 +2,7 @@ # Process this file with autoconf to produce a configure script. AC_PREREQ([2.59]) -AC_INIT(bind10-devel, 20110519, bind10-dev@isc.org) +AC_INIT(bind10-devel, 20110705, bind10-dev@isc.org) AC_CONFIG_SRCDIR(README) AM_INIT_AUTOMAKE AC_CONFIG_HEADERS([config.h]) From 07708b4325680c4731f0d3dc24bca9da3c962d80 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Thu, 30 Jun 2011 10:37:50 -0500 Subject: [PATCH 004/175] [bind10-20110705-release][master] add a comment to not edit this xml file. As briefly mentioned in jabber. --- tools/system_messages.py | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/tools/system_messages.py b/tools/system_messages.py index 6cf3ce9411..7b0d60cc5a 100644 --- a/tools/system_messages.py +++ b/tools/system_messages.py @@ -58,6 +58,12 @@ SEC_HEADER=""" %version; ]> + From 734cae300ccd13aacec1f32b283d4d21b5de8fb5 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Thu, 30 Jun 2011 11:16:08 -0500 Subject: [PATCH 005/175] [bind10-20110705-release][master] cleanup changelog use a tab before the keyword type. use two tabs before the committer username. --- ChangeLog | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/ChangeLog b/ChangeLog index 4616678cf2..451a1c0a11 100644 --- a/ChangeLog +++ b/ChangeLog @@ -52,7 +52,7 @@ Now builds and runs with Python 3.2 (Trac #710, git dae1d2e24f993e1eef9ab429326652f40a006dfb) -257. [bug] y-aharen +257. 
[bug] y-aharen Fixed a bug an instance of IntervalTimerImpl may be destructed while deadline_timer is holding the handler. This fix addresses occasional failure of IntervalTimerTest.destructIntervalTimer. @@ -72,12 +72,12 @@ b10-xfrout: failed to send notifies over IPv6 correctly. (Trac964, git 3255c92714737bb461fb67012376788530f16e40) -253. [func] jelte +253. [func] jelte Add configuration options for logging through the virtual module Logging. (Trac 736, git 9fa2a95177265905408c51d13c96e752b14a0824) -252. [func] stephen +252. [func] stephen Add syslog as destination for logging. (Trac976, git 31a30f5485859fd3df2839fc309d836e3206546e) @@ -90,36 +90,36 @@ their permissions must be adjusted by hand (if necessary). (Trac870, git 461fc3cb6ebabc9f3fa5213749956467a14ebfd4) -250. [bug] ocean +250. [bug] ocean src/lib/util/encode, in some conditions, the DecodeNormalizer's iterator may reach the end() and when later being dereferenced it will cause crash on some platform. (Trac838, git 83e33ec80c0c6485d8b116b13045b3488071770f) -249. [func] jerry +249. [func] jerry xfrout: add support for TSIG verification. (Trac816, git 3b2040e2af2f8139c1c319a2cbc429035d93f217) -248. [func] stephen +248. [func] stephen Add file and stderr as destinations for logging. (Trac555, git 38b3546867425bd64dbc5920111a843a3330646b) -247. [func] jelte +247. [func] jelte Upstream queries from the resolver now set EDNS0 buffer size. (Trac834, git 48e10c2530fe52c9bde6197db07674a851aa0f5d) -246. [func] stephen +246. [func] stephen Implement logging using log4cplus (http://log4cplus.sourceforge.net) (Trac899, git 31d3f525dc01638aecae460cb4bc2040c9e4df10) -245. [func] vorner +245. [func] vorner Authoritative server can now sign the answers using TSIG (configured in tsig_keys/keys, list of strings like "name::sha1-hmac"). It doesn't use them for ACL yet, only verifies them and signs if the request is signed. (Trac875, git fe5e7003544e4e8f18efa7b466a65f336d8c8e4d) -244. [func] stephen +244. 
[func] stephen In unit tests, allow the choice of whether unhandled exceptions are caught in the unit test program (and details printed) or allowed to propagate to the default exception handler. See the bind10-dev thread From a5cf5c7b3a6ac9be60a8737f0e36a61897d32acd Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Thu, 30 Jun 2011 11:29:34 -0500 Subject: [PATCH 006/175] [bind10-20110705-release][master] use a space and # hash mark before the Trac number in the ChangeLog --- ChangeLog | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/ChangeLog b/ChangeLog index 451a1c0a11..0054c24171 100644 --- a/ChangeLog +++ b/ChangeLog @@ -61,25 +61,25 @@ 256. [bug] jerry src/bin/xfrin: update xfrin to check TSIG before other part of incoming message. - (Trac955, git 261450e93af0b0406178e9ef121f81e721e0855c) + (Trac #955, git 261450e93af0b0406178e9ef121f81e721e0855c) 255. [func] zhang likun src/lib/cache: remove empty code in lib/cache and the corresponding suppression rule in src/cppcheck-suppress.lst. - (Trac639, git 4f714bac4547d0a025afd314c309ca5cb603e212) + (Trac #639, git 4f714bac4547d0a025afd314c309ca5cb603e212) 254. [bug] jinmei b10-xfrout: failed to send notifies over IPv6 correctly. - (Trac964, git 3255c92714737bb461fb67012376788530f16e40) + (Trac #964, git 3255c92714737bb461fb67012376788530f16e40) 253. [func] jelte Add configuration options for logging through the virtual module Logging. - (Trac 736, git 9fa2a95177265905408c51d13c96e752b14a0824) + (Trac #736, git 9fa2a95177265905408c51d13c96e752b14a0824) 252. [func] stephen Add syslog as destination for logging. - (Trac976, git 31a30f5485859fd3df2839fc309d836e3206546e) + (Trac #976, git 31a30f5485859fd3df2839fc309d836e3206546e) 251. [bug]* jinmei Make sure bindctl private files are non readable to anyone except @@ -88,36 +88,36 @@ group will have to be adjusted. 
Also note that this change is only effective for a fresh install; if these files already exist, their permissions must be adjusted by hand (if necessary). - (Trac870, git 461fc3cb6ebabc9f3fa5213749956467a14ebfd4) + (Trac #870, git 461fc3cb6ebabc9f3fa5213749956467a14ebfd4) 250. [bug] ocean src/lib/util/encode, in some conditions, the DecodeNormalizer's iterator may reach the end() and when later being dereferenced it will cause crash on some platform. - (Trac838, git 83e33ec80c0c6485d8b116b13045b3488071770f) + (Trac #838, git 83e33ec80c0c6485d8b116b13045b3488071770f) 249. [func] jerry xfrout: add support for TSIG verification. - (Trac816, git 3b2040e2af2f8139c1c319a2cbc429035d93f217) + (Trac #816, git 3b2040e2af2f8139c1c319a2cbc429035d93f217) 248. [func] stephen Add file and stderr as destinations for logging. - (Trac555, git 38b3546867425bd64dbc5920111a843a3330646b) + (Trac #555, git 38b3546867425bd64dbc5920111a843a3330646b) 247. [func] jelte Upstream queries from the resolver now set EDNS0 buffer size. - (Trac834, git 48e10c2530fe52c9bde6197db07674a851aa0f5d) + (Trac #834, git 48e10c2530fe52c9bde6197db07674a851aa0f5d) 246. [func] stephen Implement logging using log4cplus (http://log4cplus.sourceforge.net) - (Trac899, git 31d3f525dc01638aecae460cb4bc2040c9e4df10) + (Trac #899, git 31d3f525dc01638aecae460cb4bc2040c9e4df10) 245. [func] vorner Authoritative server can now sign the answers using TSIG (configured in tsig_keys/keys, list of strings like "name::sha1-hmac"). It doesn't use them for ACL yet, only verifies them and signs if the request is signed. - (Trac875, git fe5e7003544e4e8f18efa7b466a65f336d8c8e4d) + (Trac #875, git fe5e7003544e4e8f18efa7b466a65f336d8c8e4d) 244. [func] stephen In unit tests, allow the choice of whether unhandled exceptions are @@ -129,7 +129,7 @@ 243. [func]* feng Add optional hmac algorithm SHA224/384/512. 
- (Trac#782, git 77d792c9d7c1a3f95d3e6a8b721ac79002cd7db1) + (Trac #782, git 77d792c9d7c1a3f95d3e6a8b721ac79002cd7db1) bind10-devel-20110519 released on May 19, 2011 @@ -176,7 +176,7 @@ bind10-devel-20110519 released on May 19, 2011 stats module and stats-httpd module, and maybe with other statistical modules in future. "stats.spec" has own configuration and commands of stats module, if it requires. - (Trac#719, git a234b20dc6617392deb8a1e00eb0eed0ff353c0a) + (Trac #719, git a234b20dc6617392deb8a1e00eb0eed0ff353c0a) 236. [func] jelte C++ client side of configuration now uses BIND10 logging system. @@ -219,13 +219,13 @@ bind10-devel-20110519 released on May 19, 2011 instead of '%s,%d', which allows us to cope better with mismatched placeholders and allows reordering of them in case of translation. - (Trac901, git 4903410e45670b30d7283f5d69dc28c2069237d6) + (Trac #901, git 4903410e45670b30d7283f5d69dc28c2069237d6) 230. [bug] naokikambe Removed too repeated verbose messages in two cases of: - when auth sends statistics data to stats - when stats receives statistics data from other modules - (Trac#620, git 0ecb807011196eac01f281d40bc7c9d44565b364) + (Trac #620, git 0ecb807011196eac01f281d40bc7c9d44565b364) 229. [doc] jreed Add manual page for b10-host. From 6c3401b4a9fb79bdee7484e1e3c05758d1b0c0ca Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Thu, 30 Jun 2011 11:31:57 -0500 Subject: [PATCH 007/175] [bind10-20110705-release][master] add a changelog entry for the many trac tickets for log conversions --- ChangeLog | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/ChangeLog b/ChangeLog index 0054c24171..90db8148b1 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,3 +1,11 @@ +266. [func] Multiple developers + Convert various error messages, debugging and other output + to the new logging interface, including for b10-resolver, + the resolver library, the CC library, b10-auth, b10-cfgmgr, + b10-xfrin, and b10-xfrout. 
This includes a lot of new + documentation describing the new log messages. + (Trac #738, #739, #742, #746, #759, #761, #762) + 265. [func]* jinmei b10-resolver: Introduced ACL on incoming queries. By default the resolver accepts queries from ::1 and 127.0.0.1 and rejects all From 85b53414c2c8f70e541447ee204e004693289956 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Mon, 4 Jul 2011 22:54:42 -0500 Subject: [PATCH 008/175] [bind10-20110705-release] regenerate some docs regenerate the guide HTML (catch up on some software dependencies). regenerate messages xml and html --- doc/guide/bind10-guide.html | 56 +- doc/guide/bind10-messages.html | 1027 ++++++++++++----- doc/guide/bind10-messages.xml | 1935 +++++++++++++++++++++++++------- 3 files changed, 2325 insertions(+), 693 deletions(-) diff --git a/doc/guide/bind10-guide.html b/doc/guide/bind10-guide.html index 5754cf001e..94adf4aa92 100644 --- a/doc/guide/bind10-guide.html +++ b/doc/guide/bind10-guide.html @@ -1,24 +1,24 @@ -BIND 10 Guide

BIND 10 Guide

Administrator Reference for BIND 10

This is the reference guide for BIND 10 version - 20110519.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by +BIND 10 Guide

BIND 10 Guide

Administrator Reference for BIND 10

This is the reference guide for BIND 10 version + 20110705.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by Internet Systems Consortium (ISC). It includes DNS libraries and modular components for controlling authoritative and recursive DNS servers.

- This is the reference guide for BIND 10 version 20110519. + This is the reference guide for BIND 10 version 20110705. The most up-to-date version of this document, along with - other documents for BIND 10, can be found at http://bind10.isc.org/docs.


Chapter 1. Introduction

+ other documents for BIND 10, can be found at http://bind10.isc.org/docs.


Chapter 1. Introduction

BIND is the popular implementation of a DNS server, developer interfaces, and DNS tools. BIND 10 is a rewrite of BIND 9. BIND 10 is written in C++ and Python and provides a modular environment for serving and maintaining DNS.

Note

This guide covers the experimental prototype of - BIND 10 version 20110519. + BIND 10 version 20110705.

Note

BIND 10 provides a EDNS0- and DNSSEC-capable authoritative DNS server and a caching recursive name server which also provides forwarding. -

Supported Platforms

+

Supported Platforms

BIND 10 builds have been tested on Debian GNU/Linux 5, Ubuntu 9.10, NetBSD 5, Solaris 10, FreeBSD 7 and 8, and CentOS Linux 5.3. @@ -28,13 +28,15 @@ It is planned for BIND 10 to build, install and run on Windows and standard Unix-type platforms. -

Required Software

+

Required Software

BIND 10 requires Python 3.1. Later versions may work, but Python 3.1 is the minimum version which will work.

BIND 10 uses the Botan crypto library for C++. It requires - at least Botan version 1.8. To build BIND 10, install the - Botan libraries and development include headers. + at least Botan version 1.8. +

+ BIND 10 uses the log4cplus C++ logging library. It requires + at least log4cplus version 1.0.3.

The authoritative server requires SQLite 3.3.9 or newer. The b10-xfrin, b10-xfrout, @@ -136,7 +138,10 @@ and, of course, DNS. These include detailed developer documentation and code examples. -

Chapter 2. Installation

Building Requirements

Note

+

Chapter 2. Installation

Building Requirements

+ In addition to the run-time requirements, building BIND 10 + from source code requires various development include headers. +

Note

Some operating systems have split their distribution packages into a run-time and a development package. You will need to install the development package versions, which include header files and @@ -147,6 +152,11 @@

+ To build BIND 10, also install the Botan (at least version + 1.8) and the log4cplus (at least version 1.0.3) + development include headers. +

+ The Python Library and Python _sqlite3 module are required to enable the Xfrout and Xfrin support.

Note

@@ -156,7 +166,7 @@ Building BIND 10 also requires a C++ compiler and standard development headers, make, and pkg-config. BIND 10 builds have been tested with GCC g++ 3.4.3, 4.1.2, - 4.1.3, 4.2.1, 4.3.2, and 4.4.1. + 4.1.3, 4.2.1, 4.3.2, and 4.4.1; Clang++ 2.8; and Sun C++ 5.10.

Quick start

Note

This quickly covers the standard steps for installing and deploying BIND 10 as an authoritative name server using @@ -192,14 +202,14 @@ the Git code revision control system or as a downloadable tar file. It may also be available in pre-compiled ready-to-use packages from operating system vendors. -

Download Tar File

+

Download Tar File

Downloading a release tar file is the recommended method to obtain the source code.

The BIND 10 releases are available as tar file downloads from ftp://ftp.isc.org/isc/bind10/. Periodic development snapshots may also be available. -

Retrieve from Git

+

Retrieve from Git

Downloading this "bleeding edge" code is recommended only for developers or advanced users. Using development code in a production environment is not recommended. @@ -233,7 +243,7 @@ autoheader, automake, and related commands. -

Configure before the build

+

Configure before the build

BIND 10 uses the GNU Build System to discover build environment details. To generate the makefiles using the defaults, simply run: @@ -264,16 +274,16 @@

If the configure fails, it may be due to missing or old dependencies. -

Build

+

Build

After the configure step is complete, to build the executables from the C++ code and prepare the Python scripts, run:

$ make

-

Install

+

Install

To install the BIND 10 executables, support files, and documentation, run:

$ make install

-

Note

The install step may require superuser privileges.

Install Hierarchy

+

Note

The install step may require superuser privileges.

Install Hierarchy

The following is the layout of the complete BIND 10 installation:

  • bin/ — @@ -490,12 +500,12 @@ shutdown the details and relays (over a b10-msgq command channel) the configuration on to the specified module.

    -

Chapter 8. Authoritative Server

+

Chapter 8. Authoritative Server

The b10-auth is the authoritative DNS server. It supports EDNS0 and DNSSEC. It supports IPv6. Normally it is started by the bind10 master process. -

Server Configurations

+

Server Configurations

b10-auth is configured via the b10-cfgmgr configuration manager. The module name is Auth. @@ -515,7 +525,7 @@ This may be a temporary setting until then.

shutdown
Stop the authoritative DNS server.

-

Data Source Backends

Note

+

Data Source Backends

Note

For the development prototype release, b10-auth supports a SQLite3 data source backend and in-memory data source backend. @@ -529,7 +539,7 @@ This may be a temporary setting until then. The default is /usr/local/var/.) This data file location may be changed by defining the database_file configuration. -

Loading Master Zones Files

+

Loading Master Zones Files

RFC 1035 style DNS master zone files may be imported into a BIND 10 data source by using the b10-loadzone utility. @@ -607,7 +617,7 @@ This may be a temporary setting until then.

Note

Access control (such as allowing notifies) is not yet provided. The primary/secondary service is not yet complete. -

Chapter 12. Recursive Name Server

Table of Contents

Forwarding

+

Chapter 12. Recursive Name Server

Table of Contents

Forwarding

The b10-resolver process is started by bind10. @@ -636,7 +646,7 @@ This may be a temporary setting until then. > config set Resolver/listen_on [{ "address": "127.0.0.1", "port": 53 }] > config commit
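The listen_on value shown above is a list of maps, each carrying an "address" string and a "port" number. A quick sanity check of such a list can be sketched in Python; this is an illustration only, and the helper name is ours, not part of BIND 10.

```python
# Sanity-check a Resolver/listen_on-style value: a list of maps, each with
# an "address" string and a "port" number (illustration only, NOT BIND 10 code).
import ipaddress

def validate_listen_on(entries):
    """Raise ValueError on a malformed entry; return the list unchanged."""
    for entry in entries:
        ipaddress.ip_address(entry["address"])  # raises ValueError if not an IP
        if not 1 <= entry["port"] <= 65535:
            raise ValueError("bad port: %r" % (entry["port"],))
    return entries

listen_on = [{"address": "127.0.0.1", "port": 53}]
print(validate_listen_on(listen_on))  # [{'address': '127.0.0.1', 'port': 53}]
```

Catching a malformed address or out-of-range port before committing saves a round trip through bindctl's own error reporting.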

-

Forwarding

+

Forwarding

To enable forwarding, the upstream address and port must be configured to forward queries to, such as: diff --git a/doc/guide/bind10-messages.html b/doc/guide/bind10-messages.html index b075e96eb3..ecebcd825c 100644 --- a/doc/guide/bind10-messages.html +++ b/doc/guide/bind10-messages.html @@ -1,10 +1,10 @@ -BIND 10 Messages Manual

BIND 10 Messages Manual

This is the messages manual for BIND 10 version - 20110519.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by +BIND 10 Messages Manual

BIND 10 Messages Manual

This is the messages manual for BIND 10 version + 20110705.

Abstract

BIND 10 is a Domain Name System (DNS) suite managed by Internet Systems Consortium (ISC). It includes DNS libraries and modular components for controlling authoritative and recursive DNS servers.

- This is the messages manual for BIND 10 version 20110519. + This is the messages manual for BIND 10 version 20110705. The most up-to-date version of this document, along with other documents for BIND 10, can be found at http://bind10.isc.org/docs. @@ -26,38 +26,337 @@ For information on configuring and using BIND 10 logging, refer to the BIND 10 Guide.

Chapter 2. BIND 10 Messages

-

ASIODNS_FETCHCOMP upstream fetch to %1(%2) has now completed

-A debug message, this records the the upstream fetch (a query made by the +

ASIODNS_FETCH_COMPLETED upstream fetch to %1(%2) has now completed

+A debug message, this records that the upstream fetch (a query made by the resolver on behalf of its client) to the specified address has completed. -

ASIODNS_FETCHSTOP upstream fetch to %1(%2) has been stopped

+

ASIODNS_FETCH_STOPPED upstream fetch to %1(%2) has been stopped

An external component has requested the halting of an upstream fetch. This is an allowed operation, and the message should only appear if debug is enabled. -

ASIODNS_OPENSOCK error %1 opening %2 socket to %3(%4)

+

ASIODNS_OPEN_SOCKET error %1 opening %2 socket to %3(%4)

The asynchronous I/O code encountered an error when trying to open a socket of the specified protocol in order to send a message to the target address. -The the number of the system error that cause the problem is given in the +The number of the system error that cause the problem is given in the message. -

ASIODNS_RECVSOCK error %1 reading %2 data from %3(%4)

-The asynchronous I/O code encountered an error when trying read data from -the specified address on the given protocol. The the number of the system +

ASIODNS_READ_DATA error %1 reading %2 data from %3(%4)

+The asynchronous I/O code encountered an error when trying to read data from +the specified address on the given protocol. The number of the system error that cause the problem is given in the message. -

ASIODNS_RECVTMO receive timeout while waiting for data from %1(%2)

+

ASIODNS_READ_TIMEOUT receive timeout while waiting for data from %1(%2)

An upstream fetch from the specified address timed out. This may happen for any number of reasons and is most probably a problem at the remote server or a problem on the network. The message will only appear if debug is enabled. -

ASIODNS_SENDSOCK error %1 sending data using %2 to %3(%4)

+

ASIODNS_SEND_DATA error %1 sending data using %2 to %3(%4)

The asynchronous I/O code encountered an error when trying to send data to the specified address on the given protocol. The number of the system error that caused the problem is given in the message.

ASIODNS_UNKORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3)

-This message should not appear and indicates an internal error if it does. -Please enter a bug report. -

ASIODNS_UNKRESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3)

-The termination method of the resolver's upstream fetch class was called with -an unknown result code (which is given in the message). This message should -not appear and may indicate an internal error. Please enter a bug report. +

ASIODNS_UNKNOWN_ORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3)

+An internal consistency check on the origin of a message from the +asynchronous I/O module failed. This may indicate an internal error; +please submit a bug report. +

ASIODNS_UNKNOWN_RESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3)

+An internal error indicating that the termination method of the resolver's +upstream fetch class was called with an unknown result code (which is +given in the message). Please submit a bug report. +

AUTH_AXFR_ERROR error handling AXFR request: %1

+This is a debug message produced by the authoritative server when it +has encountered an error processing an AXFR request. The message gives +the reason for the error, and the server will return a SERVFAIL code to +the sender. +

AUTH_AXFR_UDP AXFR query received over UDP

+This is a debug message output when the authoritative server has received +an AXFR query over UDP. Use of UDP for AXFRs is not permitted by the +protocol, so the server will return a FORMERR error to the sender. +

AUTH_COMMAND_FAILED execution of command channel instruction '%1' failed: %2

+Execution of the specified command by the authoritative server failed. The +message contains the reason for the failure. +

AUTH_CONFIG_CHANNEL_CREATED configuration session channel created

This is a debug message indicating that the authoritative server has created the channel to the configuration manager. It is issued during server startup as an indication that the initialization is proceeding normally.

AUTH_CONFIG_CHANNEL_ESTABLISHED configuration session channel established

+This is a debug message indicating that authoritative server +has established communication the configuration manager over the +previously-created channel. It is issued during server startup is an +indication that the initialization is proceeding normally. +

AUTH_CONFIG_CHANNEL_STARTED configuration session channel started

+This is a debug message, issued when the authoritative server has +posted a request to be notified when new configuration information is +available. It is issued during server startup is an indication that +the initialization is proceeding normally. +

AUTH_CONFIG_LOAD_FAIL load of configuration failed: %1

+An attempt to configure the server with information from the configuration +database during the startup sequence has failed. (The reason for +the failure is given in the message.) The server will continue its +initialization although it may not be configured in the desired way. +

AUTH_CONFIG_UPDATE_FAIL update of configuration failed: %1

An attempt to update the configuration of the server with information from the configuration database has failed; the reason is given in the message.

AUTH_DATA_SOURCE data source database file: %1

This is a debug message produced by the authoritative server when it accesses a database data source, listing the file that is being accessed.

AUTH_DNS_SERVICES_CREATED DNS services created

This is a debug message indicating that the component that will handle incoming queries for the authoritative server (DNSServices) has been successfully created. It is issued during server startup as an indication that the initialization is proceeding normally.

AUTH_HEADER_PARSE_FAIL unable to parse header in received DNS packet: %1

+This is a debug message, generated by the authoritative server when an +attempt to parse the header of a received DNS packet has failed. (The +reason for the failure is given in the message.) The server will drop the +packet. +

AUTH_LOAD_TSIG loading TSIG keys

This is a debug message indicating that the authoritative server has requested the keyring holding TSIG keys from the configuration database. It is issued during server startup as an indication that the initialization is proceeding normally.

AUTH_LOAD_ZONE loaded zone %1/%2

+This debug message is issued during the processing of the 'loadzone' command +when the authoritative server has successfully loaded the named zone of the +named class. +

AUTH_MEM_DATASRC_DISABLED memory data source is disabled for class %1

+This is a debug message reporting that the authoritative server has +discovered that the memory data source is disabled for the given class. +

AUTH_MEM_DATASRC_ENABLED memory data source is enabled for class %1

+This is a debug message reporting that the authoritative server has +discovered that the memory data source is enabled for the given class. +

AUTH_NOTIFY_QUESTIONS invalid number of questions (%1) in incoming NOTIFY

+This debug message is logged by the authoritative server when it receives +a NOTIFY packet that contains zero or more than one question. (A valid +NOTIFY packet contains one question.) The server will return a FORMERR +error to the sender. +

AUTH_NOTIFY_RRTYPE invalid question RR type (%1) in incoming NOTIFY

+This debug message is logged by the authoritative server when it receives +a NOTIFY packet that an RR type of something other than SOA in the +question section. (The RR type received is included in the message.) The +server will return a FORMERR error to the sender. +

AUTH_NO_STATS_SESSION session interface for statistics is not available

+The authoritative server had no session with the statistics module at the +time it attempted to send it data: the attempt has been abandoned. This +could be an error in configuration. +

AUTH_NO_XFRIN received NOTIFY but XFRIN session is not running

+This is a debug message produced by the authoritative server when it receives +a NOTIFY packet but the XFRIN process is not running. The packet will be +dropped and nothing returned to the sender. +

AUTH_PACKET_PARSE_ERROR unable to parse received DNS packet: %1

+This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to something other +than a protocol error. The reason for the failure is given in the message; +the server will return a SERVFAIL error code to the sender. +

AUTH_PACKET_PROTOCOL_ERROR DNS packet protocol error: %1. Returning %2

+This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to a protocol error. +The reason for the failure is given in the message, as is the error code +that will be returned to the sender. +

AUTH_PACKET_RECEIVED message received:\n%1

+This is a debug message output by the authoritative server when it +receives a valid DNS packet. +

+Note: This message includes the packet received, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. +

AUTH_PROCESS_FAIL message processing failure: %1

+This message is generated by the authoritative server when it has +encountered an internal error whilst processing a received packet: +the cause of the error is included in the message. +

+The server will return a SERVFAIL error code to the sender of the packet. +However, this message indicates a potential error in the server. +Please open a bug ticket for this issue. +

AUTH_RECEIVED_COMMAND command '%1' received

+This is a debug message issued when the authoritative server has received +a command on the command channel. +

AUTH_RECEIVED_SENDSTATS command 'sendstats' received

+This is a debug message issued when the authoritative server has received +a command from the statistics module to send it data. The 'sendstats' +command is handled differently to other commands, which is why the debug +message associated with it has its own code. +

AUTH_RESPONSE_RECEIVED received response message, ignoring

+This is a debug message output if the authoritative server receives a
+DNS packet with the QR bit set, i.e. a DNS response. The server ignores
+the packet as it only responds to question packets.
+

AUTH_SEND_ERROR_RESPONSE sending an error response (%1 bytes):\n%2

+This is a debug message recording that the authoritative server is sending +an error response to the originator of the query. A previous message will +have recorded details of the failure. +

+Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. +

AUTH_SEND_NORMAL_RESPONSE sending a response (%1 bytes):\n%2

+This is a debug message recording that the authoritative server is sending +a response to the originator of a query. +

+Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. +

AUTH_SERVER_CREATED server created

+An informational message indicating that the authoritative server process has +been created and is initializing. The AUTH_SERVER_STARTED message will be +output when initialization has successfully completed and the server starts +accepting queries. +

AUTH_SERVER_FAILED server failed: %1

+The authoritative server has encountered a fatal error and is terminating. The +reason for the failure is included in the message. +

AUTH_SERVER_STARTED server started

+Initialization of the authoritative server has completed successfully +and it is entering the main loop, waiting for queries to arrive. +

AUTH_SQLITE3 nothing to do for loading sqlite3

+This is a debug message indicating that the authoritative server has +found that the data source it is loading is an SQLite3 data source, +so no further validation is needed. +

AUTH_STATS_CHANNEL_CREATED STATS session channel created

+This is a debug message indicating that the authoritative server has
+created a channel to the statistics process. It is issued during server
+startup and is an indication that the initialization is proceeding
+normally.
+

AUTH_STATS_CHANNEL_ESTABLISHED STATS session channel established

+This is a debug message indicating that the authoritative server
+has established communication over the previously created statistics
+channel. It is issued during server startup and is an indication that
+the initialization is proceeding normally.
+

AUTH_STATS_COMMS communication error in sending statistics data: %1

+An error was encountered when the authoritative server tried to send data
+to the statistics daemon. The message includes additional information
+describing the reason for the failure.
+

AUTH_STATS_TIMEOUT timeout while sending statistics data: %1

+The authoritative server sent data to the statistics daemon but received +no acknowledgement within the specified time. The message includes +additional information describing the reason for the failure. +

AUTH_STATS_TIMER_DISABLED statistics timer has been disabled

+This is a debug message indicating that the statistics timer has been +disabled in the authoritative server and no statistics information is +being produced. +

AUTH_STATS_TIMER_SET statistics timer set to %1 second(s)

+This is a debug message indicating that the statistics timer has been +enabled and that the authoritative server will produce statistics data +at the specified interval. +

AUTH_UNSUPPORTED_OPCODE unsupported opcode: %1

+This is a debug message, produced when a received DNS packet being +processed by the authoritative server has been found to contain an +unsupported opcode. (The opcode is included in the message.) The server +will return an error code of NOTIMPL to the sender. +

AUTH_XFRIN_CHANNEL_CREATED XFRIN session channel created

+This is a debug message indicating that the authoritative server has
+created a channel to the XFRIN (Transfer-in) process. It is issued
+during server startup and is an indication that the initialization is
+proceeding normally.
+

AUTH_XFRIN_CHANNEL_ESTABLISHED XFRIN session channel established

+This is a debug message indicating that the authoritative server has
+established communication over the previously-created channel to the
+XFRIN (Transfer-in) process. It is issued during server startup and is
+an indication that the initialization is proceeding normally.
+

AUTH_ZONEMGR_COMMS error communicating with zone manager: %1

+This is a debug message output during the processing of a NOTIFY request. +An error (listed in the message) has been encountered whilst communicating +with the zone manager. The NOTIFY request will not be honored. +

AUTH_ZONEMGR_ERROR received error response from zone manager: %1

+This is a debug message output during the processing of a NOTIFY +request. The zone manager component has been informed of the request, +but has returned an error response (which is included in the message). The +NOTIFY request will not be honored. +

CC_ASYNC_READ_FAILED asynchronous read failed

+This marks a low-level error: the library tried to read data from the
+message queue daemon asynchronously, but the ASIO library returned an
+error.
+

CC_CONN_ERROR error connecting to message queue (%1)

+It is impossible to reach the message queue daemon for the reason given.
+It is unlikely that the program reporting this error will be able to
+continue running, as communication with the rest of BIND 10 is vital
+for its components.
+

CC_DISCONNECT disconnecting from message queue daemon

+The library is disconnecting from the message queue daemon. This debug message +indicates that the program is trying to shut down gracefully. +

CC_ESTABLISH trying to establish connection with message queue daemon at %1

+This debug message indicates that the command channel library is about to +connect to the message queue daemon, which should be listening on the UNIX-domain +socket listed in the output. +

CC_ESTABLISHED successfully connected to message queue daemon

+This debug message indicates that the connection was successfully made;
+it should follow CC_ESTABLISH.
+

CC_GROUP_RECEIVE trying to receive a message

+Debug message, noting that a message is expected to come over the command +channel. +

CC_GROUP_RECEIVED message arrived ('%1', '%2')

+Debug message, noting that we successfully received a message (its
+envelope and payload listed). This follows CC_GROUP_RECEIVE, but might
+happen some time later, depending on whether we waited for it or just
+polled.
+

CC_GROUP_SEND sending message '%1' to group '%2'

+Debug message, we're about to send a message over the command channel. +

CC_INVALID_LENGTHS invalid length parameters (%1, %2)

+This happens when garbage comes over the command channel or some kind of
+confusion happens in the program. The data received from the socket makes
+no sense if interpreted as message lengths. The first value is the total
+length of the message, the second the length of the header. The header
+and its length field (2 bytes) are counted in the total length.
+
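The consistency rule between the two lengths can be illustrated with a short check. This is a hypothetical sketch of the validation, under the assumption that the whole framed message has already been read off the socket; it is not the actual msgq code:

```python
import struct

def lengths_valid(total_length: int, header_length: int) -> bool:
    """Check the two length values received from the socket.

    The header and its 2-byte length field are counted in the total
    length, so the header must fit inside the total, and neither value
    may be zero or negative.
    """
    if total_length <= 0 or header_length <= 0:
        return False
    return header_length + 2 <= total_length

def split_message(raw: bytes):
    """Split one framed message into (header, payload), or return None
    when the advertised lengths are inconsistent (cf. CC_INVALID_LENGTHS).
    """
    if len(raw) < 2:
        return None
    (header_length,) = struct.unpack_from("!H", raw, 0)
    if not lengths_valid(len(raw), header_length):
        return None
    header = raw[2:2 + header_length]
    payload = raw[2 + header_length:]
    return header, payload

# A 3-byte header "abc" followed by the payload "xyz".
assert split_message(struct.pack("!H", 3) + b"abcxyz") == (b"abc", b"xyz")
# Header length larger than the message itself: rejected.
assert split_message(struct.pack("!H", 9) + b"abc") is None
```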

CC_LENGTH_NOT_READY length not ready

+There should be data representing the length of a message on the socket,
+but it is not there.
+

CC_NO_MESSAGE no message ready to be received yet

+The program polled for incoming messages, but there was no message waiting. +This is a debug message which may happen only after CC_GROUP_RECEIVE. +

CC_NO_MSGQ unable to connect to message queue (%1)

+It isn't possible to connect to the message queue daemon, for the reason
+listed. It is unlikely that any program will be able to continue without
+the communication.
+

CC_READ_ERROR error reading data from command channel (%1)

+A low level error happened when the library tried to read data from the +command channel socket. The reason is listed. +

CC_READ_EXCEPTION error reading data from command channel (%1)

+We received an exception while trying to read data from the command +channel socket. The reason is listed. +

CC_REPLY replying to message from '%1' with '%2'

+Debug message, noting that we are sending a response to the original
+message with the given envelope.
+

CC_SET_TIMEOUT setting timeout to %1ms

+Debug message. A timeout for which the program is willing to wait for a reply +is being set. +

CC_START_READ starting asynchronous read

+Debug message. From now on, when a message (or command) comes, it'll wake
+the program and the library will automatically pass it over to the
+correct place.
+

CC_SUBSCRIBE subscribing to communication group %1

+Debug message. The program wants to receive messages addressed to this group. +

CC_TIMEOUT timeout reading data from command channel

+The program waited too long for data from the command channel (usually
+when it sent a query to a different program and it didn't answer for
+whatever reason).
+

CC_UNSUBSCRIBE unsubscribing from communication group %1

+Debug message. The program no longer wants to receive messages addressed to +this group. +

CC_WRITE_ERROR error writing data to command channel (%1)

+A low level error happened when the library tried to write data to the command +channel socket. +

CC_ZERO_LENGTH invalid message length (0)

+The library received a message length being zero, which makes no sense, since +all messages must contain at least the envelope. +

CFGMGR_AUTOMATIC_CONFIG_DATABASE_UPDATE Updating configuration database from version %1 to %2

+An older version of the configuration database has been found, from which +there was an automatic upgrade path to the current version. These changes +are now applied, and no action from the administrator is necessary. +

CFGMGR_CC_SESSION_ERROR Error connecting to command channel: %1

+The configuration manager daemon was unable to connect to the messaging +system. The most likely cause is that msgq is not running. +

CFGMGR_DATA_READ_ERROR error reading configuration database from disk: %1

+There was a problem reading the persistent configuration data as stored
+on disk. The file may be corrupted, or it may be of a version for which
+there is no automatic upgrade path. The file needs to be repaired or
+removed. The configuration manager daemon will now shut down.
+

CFGMGR_IOERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1

+There was an IO error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the directory where +the file is stored does not exist, or is not writable. The updated +configuration is not stored. +

CFGMGR_OSERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1

+There was an OS error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the system does not have +write access to the configuration database file. The updated +configuration is not stored. +

CFGMGR_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the cfgmgr daemon. The +daemon will now shut down.

CONFIG_CCSESSION_MSG error in CC session message: %1

There was a problem with an incoming message on the command and control channel. The message does not appear to be a valid command, and is @@ -65,33 +364,36 @@ missing a required element or contains an unknown data format. This most likely means that another BIND10 module is sending a bad message. The message itself is ignored by this module.

CONFIG_CCSESSION_MSG_INTERNAL error handling CC session message: %1

-There was an internal problem handling an incoming message on the -command and control channel. An unexpected exception was thrown. This -most likely points to an internal inconsistency in the module code. The -exception message is appended to the log error, and the module will -continue to run, but will not send back an answer. -

CONFIG_FOPEN_ERR error opening %1: %2

-There was an error opening the given file. -

CONFIG_JSON_PARSE JSON parse error in %1: %2

-There was a parse error in the JSON file. The given file does not appear -to be in valid JSON format. Please verify that the filename is correct -and that the contents are valid JSON. -

CONFIG_MANAGER_CONFIG error getting configuration from cfgmgr: %1

+There was an internal problem handling an incoming message on the command +and control channel. An unexpected exception was thrown, details of +which are appended to the message. The module will continue to run, +but will not send back an answer. +

+This most likely indicates a programming error. Please raise a bug
+report.
+

CONFIG_GET_FAIL error getting configuration from cfgmgr: %1

The configuration manager returned an error when this module requested the configuration. The full error message answer from the configuration manager is appended to the log error. The most likely cause is that the module is of a different (command specification) version than the running configuration manager. -

CONFIG_MANAGER_MOD_SPEC module specification not accepted by cfgmgr: %1

-The module specification file for this module was rejected by the -configuration manager. The full error message answer from the -configuration manager is appended to the log error. The most likely -cause is that the module is of a different (specification file) version -than the running configuration manager. -

CONFIG_MODULE_SPEC module specification error in %1: %2

-The given file does not appear to be a valid specification file. Please -verify that the filename is correct and that its contents are a valid -BIND10 module specification. +

CONFIG_JSON_PARSE JSON parse error in %1: %2

+There was an error parsing the JSON file. The given file does not appear +to be in valid JSON format. Please verify that the filename is correct +and that the contents are valid JSON. +

CONFIG_MOD_SPEC_FORMAT module specification error in %1: %2

+The given file does not appear to be a valid specification file: details +are included in the message. Please verify that the filename is correct +and that its contents are a valid BIND10 module specification. +

CONFIG_MOD_SPEC_REJECT module specification rejected by cfgmgr: %1

+The specification file for this module was rejected by the configuration +manager. The full error message answer from the configuration manager is +appended to the log error. The most likely cause is that the module is of +a different (specification file) version than the running configuration +manager. +

CONFIG_OPEN_FAIL error opening %1: %2

+There was an error opening the given file. The reason for the failure +is included in the message.

DATASRC_CACHE_CREATE creating the hotspot cache

Debug information that the hotspot cache was created at startup.

DATASRC_CACHE_DESTROY destroying the hotspot cache

@@ -146,7 +448,7 @@ Debug information. The requested domain is an alias to a different domain, returning the CNAME instead.

DATASRC_MEM_CNAME_COEXIST can't add data to CNAME in domain '%1'

This is the same problem as in MEM_CNAME_TO_NONEMPTY, but it happened the -other way around -- adding some outher data to CNAME. +other way around -- adding some other data to CNAME.

DATASRC_MEM_CNAME_TO_NONEMPTY can't add CNAME to domain with other data in '%1'

Someone or something tried to add a CNAME into a domain that already contains some other data. But the protocol forbids coexistence of CNAME with anything @@ -164,7 +466,7 @@ encountered on the way. This may lead to redirection to a different domain and stop the search.

DATASRC_MEM_DNAME_FOUND DNAME found at '%1'

Debug information. A DNAME was found instead of the requested information. -

DATASRC_MEM_DNAME_NS dNAME and NS can't coexist in non-apex domain '%1'

+

DATASRC_MEM_DNAME_NS DNAME and NS can't coexist in non-apex domain '%1'

It was requested for DNAME and NS records to be put into the same domain which is not the apex (the top of the zone). This is forbidden by RFC 2672, section 3. This indicates a problem with provided data. @@ -222,12 +524,12 @@ destroyed. Debug information. A domain above wildcard was reached, but there's something below the requested domain. Therefore the wildcard doesn't apply here. This behaviour is specified by RFC 1034, section 4.3.3 -

DATASRC_MEM_WILDCARD_DNAME dNAME record in wildcard domain '%1'

+

DATASRC_MEM_WILDCARD_DNAME DNAME record in wildcard domain '%1'

The software refuses to load DNAME records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should behave and BIND 9 refuses that as well. Please describe your intention using different tools. -

DATASRC_MEM_WILDCARD_NS nS record in wildcard domain '%1'

+

DATASRC_MEM_WILDCARD_NS NS record in wildcard domain '%1'

The software refuses to load NS records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should behave and BIND 9 refuses that as well. Please describe your intention using @@ -269,7 +571,7 @@ response message.

DATASRC_QUERY_DELEGATION looking for delegation on the path to '%1'

Debug information. The software is trying to identify delegation points on the way down to the given domain. -

DATASRC_QUERY_EMPTY_CNAME cNAME at '%1' is empty

+

DATASRC_QUERY_EMPTY_CNAME CNAME at '%1' is empty

There was an CNAME and it was being followed. But it contains no records, so there's nowhere to go. There will be no answer. This indicates a problem with supplied data. @@ -363,7 +665,7 @@ DNAMEs will be synthesized.

DATASRC_QUERY_TASK_FAIL task failed with %1

The query subtask failed. The reason should have been reported by the subtask already. The code is 1 for error, 2 for not implemented. -

DATASRC_QUERY_TOO_MANY_CNAMES cNAME chain limit exceeded at '%1'

+

DATASRC_QUERY_TOO_MANY_CNAMES CNAME chain limit exceeded at '%1'

A CNAME led to another CNAME and it led to another, and so on. After 16 CNAMEs, the software gave up. Long CNAME chains are discouraged, and this might possibly be a loop as well. Note that some of the CNAMEs might have @@ -385,15 +687,15 @@ While processing a wildcard, a referral was met. But it wasn't possible to get enough information for it. The code is 1 for error, 2 for not implemented.

DATASRC_SQLITE_CLOSE closing SQLite database

Debug information. The SQLite data source is closing the database file. -

DATASRC_SQLITE_CREATE sQLite data source created

+

DATASRC_SQLITE_CREATE SQLite data source created

Debug information. An instance of SQLite data source is being created. -

DATASRC_SQLITE_DESTROY sQLite data source destroyed

+

DATASRC_SQLITE_DESTROY SQLite data source destroyed

Debug information. An instance of SQLite data source is being destroyed.

DATASRC_SQLITE_ENCLOSURE looking for zone containing '%1'

-Debug information. The SQLite data source is trying to identify, which zone +Debug information. The SQLite data source is trying to identify which zone should hold this domain.

DATASRC_SQLITE_ENCLOSURE_NOTFOUND no zone contains it

-Debug information. The last SQLITE_ENCLOSURE query was unsuccessful, there's +Debug information. The last SQLITE_ENCLOSURE query was unsuccessful; there's no such zone in our data.

DATASRC_SQLITE_FIND looking for RRset '%1/%2'

Debug information. The SQLite data source is looking up a resource record @@ -417,7 +719,7 @@ and type in the database. Debug information. The SQLite data source is identifying if this domain is a referral and where it goes.

DATASRC_SQLITE_FINDREF_BAD_CLASS class mismatch looking for referral ('%1' and '%2')

-The SQLite data source was trying to identify, if there's a referral. But +The SQLite data source was trying to identify if there's a referral. But it contains different class than the query was for.

DATASRC_SQLITE_FIND_BAD_CLASS class mismatch looking for an RRset ('%1' and '%2')

The SQLite data source was looking up an RRset, but the data source contains @@ -452,142 +754,173 @@ data source.

DATASRC_UNEXPECTED_QUERY_STATE unexpected query state

This indicates a programming error. An internal task of unknown type was generated. -

LOGIMPL_ABOVEDBGMAX debug level of %1 is too high and will be set to the maximum of %2

-A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is above the maximum allowed value and has -been reduced to that value. -

LOGIMPL_BADDEBUG debug string is '%1': must be of the form DEBUGn

-The string indicating the extended logging level (used by the underlying -logger implementation code) is not of the stated form. In particular, -it starts DEBUG but does not end with an integer. -

LOGIMPL_BELOWDBGMIN debug level of %1 is too low and will be set to the minimum of %2

-A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is below the minimum allowed value and has -been increased to that value. -

MSG_BADDESTINATION unrecognized log destination: %1

+

LOGIMPL_ABOVE_MAX_DEBUG debug level of %1 is too high and will be set to the maximum of %2

+A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is above the maximum allowed value and has +been reduced to that value. The appearance of this message may indicate +a programming error - please submit a bug report. +

LOGIMPL_BAD_DEBUG_STRING debug string '%1' has invalid format

+A message from the interface to the underlying logger implementation +reporting that an internally-created string used to set the debug level +is not of the correct format (it should be of the form DEBUGn, where n +is an integer, e.g. DEBUG22). The appearance of this message indicates +a programming error - please submit a bug report. +

LOGIMPL_BELOW_MIN_DEBUG debug level of %1 is too low and will be set to the minimum of %2

+A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is below the minimum allowed value and has +been increased to that value. The appearance of this message may indicate +a programming error - please submit a bug report. +

LOG_BAD_DESTINATION unrecognized log destination: %1

A logger destination value was given that was not recognized. The destination should be one of "console", "file", or "syslog". -

MSG_BADSEVERITY unrecognized log severity: %1

+

LOG_BAD_SEVERITY unrecognized log severity: %1

A logger severity value was given that was not recognized. The severity should be one of "DEBUG", "INFO", "WARN", "ERROR", or "FATAL". -

MSG_BADSTREAM bad log console output stream: %1

-A log console output stream was given that was not recognized. The -output stream should be one of "stdout", or "stderr" -

MSG_DUPLNS line %1: duplicate $NAMESPACE directive found

-When reading a message file, more than one $NAMESPACE directive was found. In -this version of the code, such a condition is regarded as an error and the -read will be abandoned. -

MSG_DUPMSGID duplicate message ID (%1) in compiled code

-Indicative of a programming error, when it started up, BIND10 detected that -the given message ID had been registered by one or more modules. (All message -IDs should be unique throughout BIND10.) This has no impact on the operation -of the server other that erroneous messages may be logged. (When BIND10 loads -the message IDs (and their associated text), if a duplicate ID is found it is -discarded. However, when the module that supplied the duplicate ID logs that -particular message, the text supplied by the module that added the original -ID will be output - something that may bear no relation to the condition being -logged. -

MSG_IDNOTFND could not replace message text for '%1': no such message

-During start-up a local message file was read. A line with the listed -message identification was found in the file, but the identification is not -one contained in the compiled-in message dictionary. Either the message -identification has been mis-spelled in the file, or the local file was used -for an earlier version of the software and the message with that -identification has been removed. +

LOG_BAD_STREAM bad log console output stream: %1

+A log console output stream was given that was not recognized. The output
+stream should be either "stdout" or "stderr".
+

LOG_DUPLICATE_MESSAGE_ID duplicate message ID (%1) in compiled code

+During start-up, BIND10 detected that the given message identification had +been defined multiple times in the BIND10 code.

-This message may appear a number of times in the file, once for every such -unknown message identification. -

MSG_INVMSGID line %1: invalid message identification '%2'

-The concatenation of the prefix and the message identification is used as -a symbol in the C++ module; as such it may only contain -

MSG_NOMSGID line %1: message definition line found without a message ID

-Message definition lines are lines starting with a "%". The rest of the line -should comprise the message ID and text describing the message. This error -indicates the message compiler found a line in the message file comprising -just the "%" and nothing else. -

MSG_NOMSGTXT line %1: line found containing a message ID ('%2') and no text

-Message definition lines are lines starting with a "%". The rest of the line -should comprise the message ID and text describing the message. This error -is generated when a line is found in the message file that contains the -leading "%" and the message identification but no text. -

MSG_NSEXTRARG line %1: $NAMESPACE directive has too many arguments

-The $NAMESPACE directive takes a single argument, a namespace in which all the -generated symbol names are placed. This error is generated when the -compiler finds a $NAMESPACE directive with more than one argument. -

MSG_NSINVARG line %1: $NAMESPACE directive has an invalid argument ('%2')

-The $NAMESPACE argument should be a valid C++ namespace. The reader does a -cursory check on its validity, checking that the characters in the namespace -are correct. The error is generated when the reader finds an invalid -character. (Valid are alphanumeric characters, underscores and colons.) -

MSG_NSNOARG line %1: no arguments were given to the $NAMESPACE directive

-The $NAMESPACE directive takes a single argument, a namespace in which all the -generated symbol names are placed. This error is generated when the -compiler finds a $NAMESPACE directive with no arguments. -

MSG_OPENIN unable to open message file %1 for input: %2

-The program was not able to open the specified input message file for the -reason given. -

MSG_OPENOUT unable to open %1 for output: %2

-The program was not able to open the specified output file for the reason -given. -

MSG_PRFEXTRARG line %1: $PREFIX directive has too many arguments

-The $PREFIX directive takes a single argument, a prefix to be added to the -symbol names when a C++ .h file is created. This error is generated when the -compiler finds a $PREFIX directive with more than one argument. -

MSG_PRFINVARG line %1: $PREFIX directive has an invalid argument ('%2')

-The $PREFIX argument is used in a symbol name in a C++ header file. As such, -it must adhere to restrictions on C++ symbol names (e.g. may only contain -alphanumeric characters or underscores, and may nor start with a digit). -A $PREFIX directive was found with an argument (given in the message) that -violates those restictions. -

MSG_RDLOCMES reading local message file %1

-This is an informational message output by BIND10 when it starts to read a -local message file. (A local message file may replace the text of one of more -messages; the ID of the message will not be changed though.) -

MSG_READERR error reading from message file %1: %2

+This has no ill effects other than the possibility that an erroneous
+message may be logged. However, as it is indicative of a programming
+error, please log a bug report.
+

LOG_DUPLICATE_NAMESPACE line %1: duplicate $NAMESPACE directive found

+When reading a message file, more than one $NAMESPACE directive was found. +Such a condition is regarded as an error and the read will be abandoned. +

LOG_INPUT_OPEN_FAIL unable to open message file %1 for input: %2

+The program was not able to open the specified input message file for +the reason given. +

LOG_INVALID_MESSAGE_ID line %1: invalid message identification '%2'

+An invalid message identification (ID) has been found during the read of +a message file. Message IDs should comprise only alphanumeric characters +and the underscore, and should not start with a digit. +
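The rule quoted above (alphanumeric characters and the underscore, no leading digit) amounts to a one-line check; a sketch, with a hypothetical function name:

```python
import re

# Valid: letters, digits and underscores, not starting with a digit.
_MESSAGE_ID_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def valid_message_id(ident: str) -> bool:
    """Return True if ident is a well-formed message ID."""
    return _MESSAGE_ID_RE.match(ident) is not None

assert valid_message_id("LOG_INVALID_MESSAGE_ID")
assert not valid_message_id("1BADID")   # starts with a digit
assert not valid_message_id("BAD-ID")   # '-' is not permitted
```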

LOG_NAMESPACE_EXTRA_ARGS line %1: $NAMESPACE directive has too many arguments

+The $NAMESPACE directive in a message file takes a single argument, a +namespace in which all the generated symbol names are placed. This error +is generated when the compiler finds a $NAMESPACE directive with more +than one argument. +

LOG_NAMESPACE_INVALID_ARG line %1: $NAMESPACE directive has an invalid argument ('%2')

+The $NAMESPACE argument in a message file should be a valid C++ namespace. +This message is output if the simple check on the syntax of the string +carried out by the reader fails. +

LOG_NAMESPACE_NO_ARGS line %1: no arguments were given to the $NAMESPACE directive

+The $NAMESPACE directive in a message file takes a single argument, +a C++ namespace in which all the generated symbol names are placed. +This error is generated when the compiler finds a $NAMESPACE directive +with no arguments. +

LOG_NO_MESSAGE_ID line %1: message definition line found without a message ID

+Within a message file, messages are defined by lines starting with a "%".
+The rest of the line should comprise the message ID and text describing
+the message. This error indicates the message compiler found a line in
+the message file comprising just the "%" and nothing else.
+

LOG_NO_MESSAGE_TEXT line %1: line found containing a message ID ('%2') and no text

+Within a message file, messages are defined by lines starting with a "%".
+The rest of the line should comprise the message ID and text describing
+the message. This error indicates the message compiler found a line
+in the message file comprising just the "%" and message identification,
+but no text.
+
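The two malformed cases of a "%" definition line (no ID at all, and an ID with no text) can be sketched as a tiny classifier; the error tags mirror the log message IDs, but the parser itself is a hypothetical illustration, not the BIND 10 message compiler:

```python
def parse_definition_line(line: str):
    """Classify a '%'-prefixed message definition line.

    Returns ("ok", msg_id, text) for a well-formed line, or an error
    tag matching the log messages: "NO_MESSAGE_ID" when only the "%"
    is present, "NO_MESSAGE_TEXT" when the ID has no text after it.
    """
    body = line.lstrip()[1:].strip()   # drop the leading "%"
    if not body:
        return ("NO_MESSAGE_ID", None, None)
    parts = body.split(None, 1)
    if len(parts) == 1:
        return ("NO_MESSAGE_TEXT", parts[0], None)
    msg_id, text = parts
    return ("ok", msg_id, text)

assert parse_definition_line("%")[0] == "NO_MESSAGE_ID"
assert parse_definition_line("% ONLY_ID")[0] == "NO_MESSAGE_TEXT"
assert parse_definition_line("% MY_ID some text") == ("ok", "MY_ID", "some text")
```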

LOG_NO_SUCH_MESSAGE could not replace message text for '%1': no such message

+During start-up a local message file was read. A line with the listed +message identification was found in the file, but the identification is +not one contained in the compiled-in message dictionary. This message +may appear a number of times in the file, once for every such unknown +message identification. +

+There may be several reasons why this message may appear: +

+- The message ID has been mis-spelled in the local message file. +

+- The program outputting the message may not use that particular message +(e.g. it originates in a module not used by the program.) +

+- The local file was written for an earlier version of the BIND10 software +and the later version no longer generates that message. +

+Whatever the reason, there is no impact on the operation of BIND10. +
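The replacement behaviour described above (known IDs get new text, unknown IDs are reported but have no other effect) can be sketched as follows. The dictionary and function names are assumptions for illustration, not BIND 10's logging library, and LOG_EXAMPLE_ID is a hypothetical ID:

```python
# Hedged sketch of applying a local message file over the compiled-in
# message dictionary, as described above. Names are illustrative only.
compiled_in = {"LOG_EXAMPLE_ID": "original text"}  # hypothetical ID

def apply_local_overrides(dictionary, overrides):
    """Replace message text for known IDs; collect unknown IDs."""
    unknown = []
    for msg_id, text in overrides.items():
        if msg_id in dictionary:
            dictionary[msg_id] = text   # text replaced, ID unchanged
        else:
            unknown.append(msg_id)      # reported, but no other impact
    return unknown
```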

LOG_OPEN_OUTPUT_FAIL unable to open %1 for output: %2

+Originating within the logging code, the program was not able to open +the specified output file for the reason given. +

LOG_PREFIX_EXTRA_ARGS line %1: $PREFIX directive has too many arguments

+Within a message file, the $PREFIX directive takes a single argument, +a prefix to be added to the symbol names when a C++ file is created. +This error is generated when the compiler finds a $PREFIX directive with +more than one argument. +

+Note: the $PREFIX directive is deprecated and will be removed in a future +version of BIND10. +

LOG_PREFIX_INVALID_ARG line %1: $PREFIX directive has an invalid argument ('%2')

+Within a message file, the $PREFIX directive takes a single argument,
+a prefix to be added to the symbol names when a C++ file is created.
+As such, it must adhere to restrictions on C++ symbol names (e.g. may
+only contain alphanumeric characters or underscores, and may not start
+with a digit). A $PREFIX directive was found with an argument (given
+in the message) that violates those restrictions.
+

+Note: the $PREFIX directive is deprecated and will be removed in a future +version of BIND10. +

LOG_READING_LOCAL_FILE reading local message file %1

+This is an informational message output by BIND10 when it starts to read
+a local message file. (A local message file may replace the text of
+one or more messages; the ID of the message will not be changed though.)
+

LOG_READ_ERROR error reading from message file %1: %2

The specified error was encountered reading from the named message file. -

MSG_UNRECDIR line %1: unrecognised directive '%2'

-A line starting with a dollar symbol was found, but the first word on the line -(shown in the message) was not a recognised message compiler directive. -

MSG_WRITERR error writing to %1: %2

-The specified error was encountered by the message compiler when writing to -the named output file. -

NSAS_INVRESPSTR queried for %1 but got invalid response

-This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for a RR for the -specified nameserver but received an invalid response. Either the success -function was called without a DNS message or the message was invalid on some -way. (In the latter case, the error should have been picked up elsewhere in -the processing logic, hence the raising of the error here.) -

NSAS_INVRESPTC queried for %1 RR of type/class %2/%3, received response %4/%5

-This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for the given RR -type and class, but instead received an answer with the given type and class. -

NSAS_LOOKUPCANCEL lookup for zone %1 has been cancelled

-A debug message, this is output when a NSAS (nameserver address store - -part of the resolver) lookup for a zone has been cancelled. -

NSAS_LOOKUPZONE searching NSAS for nameservers for zone %1

-A debug message, this is output when a call is made to the nameserver address -store (part of the resolver) to obtain the nameservers for the specified zone. -

NSAS_NSADDR asking resolver to obtain A and AAAA records for %1

-A debug message, the NSAS (nameserver address store - part of the resolver) is -making a callback into the resolver to retrieve the address records for the -specified nameserver. -

NSAS_NSLKUPFAIL failed to lookup any %1 for %2

-A debug message, the NSAS (nameserver address store - part of the resolver) -has been unable to retrieve the specified resource record for the specified -nameserver. This is not necessarily a problem - the nameserver may be -unreachable, in which case the NSAS will try other nameservers in the zone. -

NSAS_NSLKUPSUCC found address %1 for %2

-A debug message, the NSAS (nameserver address store - part of the resolver) -has retrieved the given address for the specified nameserver through an -external query. -

NSAS_SETRTT reporting RTT for %1 as %2; new value is now %3

+

LOG_UNRECOGNISED_DIRECTIVE line %1: unrecognised directive '%2'

+Within a message file, a line starting with a dollar symbol was found +(indicating the presence of a directive) but the first word on the line +(shown in the message) was not recognised. +

LOG_WRITE_ERROR error writing to %1: %2

+The specified error was encountered by the message compiler when writing +to the named output file. +

NSAS_FIND_NS_ADDRESS asking resolver to obtain A and AAAA records for %1

+A debug message issued when the NSAS (nameserver address store - part +of the resolver) is making a callback into the resolver to retrieve the +address records for the specified nameserver. +

NSAS_FOUND_ADDRESS found address %1 for %2

+A debug message issued when the NSAS (nameserver address store - part +of the resolver) has retrieved the given address for the specified +nameserver through an external query. +

NSAS_INVALID_RESPONSE queried for %1 but got invalid response

+The NSAS (nameserver address store - part of the resolver) made a query
+for an RR for the specified nameserver but received an invalid response.
+Either the success function was called without a DNS message or the
+message was invalid in some way. (In the latter case, the error should
+have been picked up elsewhere in the processing logic, hence the raising
+of the error here.)
+

+This message indicates an internal error in the NSAS. Please raise a +bug report. +

NSAS_LOOKUP_CANCEL lookup for zone %1 has been canceled

+A debug message issued when an NSAS (nameserver address store - part of +the resolver) lookup for a zone has been canceled. +

NSAS_NS_LOOKUP_FAIL failed to lookup any %1 for %2

+A debug message issued when the NSAS (nameserver address store - part of +the resolver) has been unable to retrieve the specified resource record +for the specified nameserver. This is not necessarily a problem - the +nameserver may be unreachable, in which case the NSAS will try other +nameservers in the zone. +

NSAS_SEARCH_ZONE_NS searching NSAS for nameservers for zone %1

+A debug message output when a call is made to the NSAS (nameserver +address store - part of the resolver) to obtain the nameservers for +the specified zone. +

NSAS_UPDATE_RTT update RTT for %1: was %2 ms, is now %3 ms

A NSAS (nameserver address store - part of the resolver) debug message -reporting the round-trip time (RTT) for a query made to the specified -nameserver. The RTT has been updated using the value given and the new RTT is -displayed. (The RTT is subject to a calculation that damps out sudden -changes. As a result, the new RTT is not necessarily equal to the RTT -reported.) +reporting the update of a round-trip time (RTT) for a query made to the +specified nameserver. The RTT has been updated using the value given +and the new RTT is displayed. (The RTT is subject to a calculation that +damps out sudden changes. As a result, the new RTT used by the NSAS in +future decisions of which nameserver to use is not necessarily equal to +the RTT reported.) +
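The damping described above (the stored RTT moves only part of the way toward each new sample) can be sketched as an exponentially weighted moving average. The weight value and function name below are assumptions for illustration, not BIND 10's actual constants or code:

```python
# Hedged sketch of a damped RTT update, as described above: the stored
# RTT moves only partway toward each new sample, so the value used in
# future nameserver-selection decisions is not the raw measurement.
# The weight of 0.7 is an assumed illustration, not BIND 10's constant.

def update_rtt(old_rtt_ms, sample_ms, weight=0.7):
    """Blend a new RTT sample into the stored value (EWMA)."""
    return old_rtt_ms * weight + sample_ms * (1.0 - weight)

rtt = 200.0
rtt = update_rtt(rtt, 50.0)  # a sudden fast response moves the RTT only partway
```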

NSAS_WRONG_ANSWER queried for %1 RR of type/class %2/%3, received response %4/%5

+A NSAS (nameserver address store - part of the resolver) made a query for
+a resource record of a particular type and class, but instead received
+an answer of a different type and class (both are given in the message).
+

+This message indicates an internal error in the NSAS. Please raise a +bug report.

RESLIB_ANSWER answer received in response to query for <%1>

A debug message recording that an answer has been received to an upstream query for the specified question. Previous debug messages will have indicated @@ -599,95 +932,95 @@ the server to which the question was sent.

RESLIB_DEEPEST did not find <%1> in cache, deepest delegation found is %2

A debug message, a cache lookup did not find the specified <name, class, type> tuple in the cache; instead, the deepest delegation found is indicated. -

RESLIB_FOLLOWCNAME following CNAME chain to <%1>

+

RESLIB_FOLLOW_CNAME following CNAME chain to <%1>

A debug message, a CNAME response was received and another query is being issued for the <name, class, type> tuple. -

RESLIB_LONGCHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded

+

RESLIB_LONG_CHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded

A debug message recording that a CNAME response has been received to an
upstream query for the specified question (Previous debug messages will have
indicated the server to which the question was sent). However, receipt of this
CNAME has meant that the resolver has exceeded the CNAME chain limit (a CNAME
chain is where one CNAME points to another) and so an error is being returned.
-
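The chain limit described above can be sketched as a bounded loop over CNAME targets. The limit value, lookup table, and function name are assumptions for illustration, not BIND 10's implementation:

```python
# Hedged sketch of the CNAME chain limit described above; the limit
# value and the in-memory lookup table are illustrative only.
MAX_CNAME_CHAIN = 16  # assumed limit, not BIND 10's actual value

def follow_cname(name, cname_map, limit=MAX_CNAME_CHAIN):
    """Follow CNAME records to a terminal name; error if chain too long."""
    chain = 0
    while name in cname_map:
        chain += 1
        if chain > limit:
            raise RuntimeError("CNAME chain length exceeded")
        name = cname_map[name]
    return name
```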

RESLIB_NONSRRSET no NS RRSet in referral response received to query for <%1>

+

RESLIB_NO_NS_RRSET no NS RRSet in referral response received to query for <%1>

A debug message, this indicates that a response was received for the specified -query and was categorised as a referral. However, the received message did +query and was categorized as a referral. However, the received message did not contain any NS RRsets. This may indicate a programming error in the response classification code. -

RESLIB_NSASLOOK looking up nameserver for zone %1 in the NSAS

+

RESLIB_NSAS_LOOKUP looking up nameserver for zone %1 in the NSAS

A debug message, the RunningQuery object is querying the NSAS for the nameservers for the specified zone. -

RESLIB_NXDOMRR NXDOMAIN/NXRRSET received in response to query for <%1>

+

RESLIB_NXDOM_NXRR NXDOMAIN/NXRRSET received in response to query for <%1>

A debug message recording that either a NXDOMAIN or an NXRRSET response has been received to an upstream query for the specified question. Previous debug messages will have indicated the server to which the question was sent.

RESLIB_PROTOCOL protocol error in answer for %1: %3

A debug message indicating that a protocol error was received. As there are no retries left, an error will be reported. -

RESLIB_PROTOCOLRTRY protocol error in answer for %1: %2 (retries left: %3)

+

RESLIB_PROTOCOL_RETRY protocol error in answer for %1: %2 (retries left: %3)

A debug message indicating that a protocol error was received and that the resolver is repeating the query to the same nameserver. After this repeated query, there will be the indicated number of retries left. -

RESLIB_RCODERR RCODE indicates error in response to query for <%1>

+

RESLIB_RCODE_ERR RCODE indicates error in response to query for <%1>

A debug message, the response to the specified query indicated an error that is not covered by a specific code path. A SERVFAIL will be returned. -

RESLIB_REFERRAL referral received in response to query for <%1>

-A debug message recording that a referral response has been received to an -upstream query for the specified question. Previous debug messages will -have indicated the server to which the question was sent. -

RESLIB_REFERZONE referred to zone %1

-A debug message indicating that the last referral message was to the specified -zone. -

RESLIB_RESCAFND found <%1> in the cache (resolve() instance %2)

+

RESLIB_RECQ_CACHE_FIND found <%1> in the cache (resolve() instance %2)

This is a debug message and indicates that a RecursiveQuery object found the
specified <name, class, type> tuple in the cache. The instance number at
the end of the message indicates which of the two resolve() methods has
been called.
-

RESLIB_RESCANOTFND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2)

+

RESLIB_RECQ_CACHE_NO_FIND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2)

This is a debug message and indicates that the cache lookup made by the
RecursiveQuery::resolve() method did not find an answer, so a new
RunningQuery object has been created to resolve the question. The instance
number at the end of the message indicates which of the two resolve()
methods has been called.
+

RESLIB_REFERRAL referral received in response to query for <%1>

+A debug message recording that a referral response has been received to an +upstream query for the specified question. Previous debug messages will +have indicated the server to which the question was sent. +

RESLIB_REFER_ZONE referred to zone %1

+A debug message indicating that the last referral message was to the specified +zone.

RESLIB_RESOLVE asked to resolve <%1> (resolve() instance %2)

A debug message, the RecursiveQuery::resolve method has been called to resolve the specified <name, class, type> tuple. The first action will be to lookup the specified tuple in the cache. The instance number at the end of the message indicates which of the two resolve() methods has been called. -

RESLIB_RRSETFND found single RRset in the cache when querying for <%1> (resolve() instance %2)

+

RESLIB_RRSET_FOUND found single RRset in the cache when querying for <%1> (resolve() instance %2)

A debug message, indicating that when RecursiveQuery::resolve queried the cache, a single RRset was found which was put in the answer. The instance number at the end of the message indicates which of the two resolve() methods has been called.

RESLIB_RTT round-trip time of last query calculated as %1 ms

A debug message giving the round-trip time of the last query and response. -

RESLIB_RUNCAFND found <%1> in the cache

+

RESLIB_RUNQ_CACHE_FIND found <%1> in the cache

This is a debug message and indicates that a RunningQuery object found the specified <name, class, type> tuple in the cache. -

RESLIB_RUNCALOOK looking up up <%1> in the cache

+

RESLIB_RUNQ_CACHE_LOOKUP looking up <%1> in the cache

This is a debug message and indicates that a RunningQuery object has made a call to its doLookup() method to look up the specified <name, class, type> tuple, the first action of which will be to examine the cache. -

RESLIB_RUNQUFAIL failure callback - nameservers are unreachable

+

RESLIB_RUNQ_FAIL failure callback - nameservers are unreachable

A debug message indicating that a RunningQuery's failure callback has been called because all nameservers for the zone in question are unreachable. -

RESLIB_RUNQUSUCC success callback - sending query to %1

+

RESLIB_RUNQ_SUCCESS success callback - sending query to %1

A debug message indicating that a RunningQuery's success callback has been called because a nameserver has been found, and that a query is being sent to the specified nameserver. -

RESLIB_TESTSERV setting test server to %1(%2)

+

RESLIB_TEST_SERVER setting test server to %1(%2)

This is an internal debugging message and is only generated in unit tests. It indicates that all upstream queries from the resolver are being routed to the specified server, regardless of the address of the nameserver to which the query would normally be routed. As it should never be seen in normal operation, it is a warning message instead of a debug message. -

RESLIB_TESTUPSTR sending upstream query for <%1> to test server at %2

+

RESLIB_TEST_UPSTREAM sending upstream query for <%1> to test server at %2

This is a debug message and should only be seen in unit tests. A query for the specified <name, class, type> tuple is being sent to a test nameserver whose address is given in the message.

RESLIB_TIMEOUT query <%1> to %2 timed out

A debug message indicating that the specified query has timed out and as there are no retries left, an error will be reported. -

RESLIB_TIMEOUTRTRY query <%1> to %2 timed out, re-trying (retries left: %3)

+

RESLIB_TIMEOUT_RETRY query <%1> to %2 timed out, re-trying (retries left: %3)

A debug message indicating that the specified query has timed out and that the resolver is repeating the query to the same nameserver. After this repeated query, there will be the indicated number of retries left. @@ -699,118 +1032,134 @@ gives no cause for concern.

RESLIB_UPSTREAM sending upstream query for <%1> to %2

A debug message indicating that a query for the specified <name, class, type> tuple is being sent to a nameserver whose address is given in the message. -

RESOLVER_AXFRTCP AXFR request received over TCP

+

RESOLVER_AXFR_TCP AXFR request received over TCP

A debug message, the resolver received an AXFR request over TCP. The server
cannot process it and will return an error message to the sender with the
RCODE set to NOTIMP.
-

RESOLVER_AXFRUDP AXFR request received over UDP

+

RESOLVER_AXFR_UDP AXFR request received over UDP

A debug message, the resolver received an AXFR request over UDP. The server
cannot process it (and in any case, an AXFR request should be sent over TCP)
and will return an error message to the sender with the RCODE set to FORMERR.
-

RESOLVER_CLTMOSMALL client timeout of %1 is too small

+

RESOLVER_CLIENT_TIME_SMALL client timeout of %1 is too small

An error indicating that the configuration value specified for the query timeout is too small. -

RESOLVER_CONFIGCHAN configuration channel created

+

RESOLVER_CONFIG_CHANNEL configuration channel created

A debug message, output when the resolver has successfully established a connection to the configuration channel. -

RESOLVER_CONFIGERR error in configuration: %1

+

RESOLVER_CONFIG_ERROR error in configuration: %1

An error was detected in a configuration update received by the resolver. This may be in the format of the configuration message (in which case this is a programming error) or it may be in the data supplied (in which case it is a user error). The reason for the error, given as a parameter in the message, will give more details. -

RESOLVER_CONFIGLOAD configuration loaded

+

RESOLVER_CONFIG_LOADED configuration loaded

A debug message, output when the resolver configuration has been successfully loaded. -

RESOLVER_CONFIGUPD configuration updated: %1

+

RESOLVER_CONFIG_UPDATED configuration updated: %1

A debug message, the configuration has been updated with the specified information.

RESOLVER_CREATED main resolver object created

A debug message, output when the Resolver() object has been created. -

RESOLVER_DNSMSGRCVD DNS message received: %1

+

RESOLVER_DNS_MESSAGE_RECEIVED DNS message received: %1

A debug message, this always precedes some other logging message and is the formatted contents of the DNS packet that the other message refers to. -

RESOLVER_DNSMSGSENT DNS message of %1 bytes sent: %2

+

RESOLVER_DNS_MESSAGE_SENT DNS message of %1 bytes sent: %2

A debug message, this contains details of the response sent back to the querying system.

RESOLVER_FAILED resolver failed, reason: %1

This is an error message output when an unhandled exception is caught by the resolver. All it can do is to shut down. -

RESOLVER_FWDADDR setting forward address %1(%2)

+

RESOLVER_FORWARD_ADDRESS setting forward address %1(%2)

This message may appear multiple times during startup, and it lists the forward addresses used by the resolver when running in forwarding mode. -

RESOLVER_FWDQUERY processing forward query

+

RESOLVER_FORWARD_QUERY processing forward query

The received query has passed all checks and is being forwarded to upstream servers. -

RESOLVER_HDRERR message received, exception when processing header: %1

+

RESOLVER_HEADER_ERROR message received, exception when processing header: %1

A debug message noting that an exception occurred during the processing of a received packet. The packet has been dropped.

RESOLVER_IXFR IXFR request received

The resolver received an IXFR request. The server cannot process it and will
return an error message to the sender with the RCODE set to NOTIMP.
-

RESOLVER_LKTMOSMALL lookup timeout of %1 is too small

+

RESOLVER_LOOKUP_TIME_SMALL lookup timeout of %1 is too small

An error indicating that the configuration value specified for the lookup timeout is too small. -

RESOLVER_NFYNOTAUTH NOTIFY arrived but server is not authoritative

-The resolver received a NOTIFY message. As the server is not authoritative it -cannot process it, so it returns an error message to the sender with the RCODE -set to NOTAUTH. -

RESOLVER_NORMQUERY processing normal query

-The received query has passed all checks and is being processed by the resolver. -

RESOLVER_NOROOTADDR no root addresses available

-A warning message during startup, indicates that no root addresses have been -set. This may be because the resolver will get them from a priming query. -

RESOLVER_NOTIN non-IN class request received, returning REFUSED message

+

RESOLVER_MESSAGE_ERROR error parsing received message: %1 - returning %2

+A debug message noting that the resolver received a message and the +parsing of the body of the message failed due to some error (although +the parsing of the header succeeded). The message parameters give a +textual description of the problem and the RCODE returned. +

RESOLVER_NEGATIVE_RETRIES negative number of retries (%1) specified in the configuration

+An error message indicating that the resolver configuration has specified a +negative retry count. Only zero or positive values are valid. +

RESOLVER_NON_IN_PACKET non-IN class request received, returning REFUSED message

A debug message, the resolver has received a DNS packet that was not IN class. The resolver cannot handle such packets, so is returning a REFUSED response to the sender. -

RESOLVER_NOTONEQUES query contained %1 questions, exactly one question was expected

+

RESOLVER_NORMAL_QUERY processing normal query

+The received query has passed all checks and is being processed by the resolver. +

RESOLVER_NOTIFY_RECEIVED NOTIFY arrived but server is not authoritative

+The resolver received a NOTIFY message. As the server is not authoritative it +cannot process it, so it returns an error message to the sender with the RCODE +set to NOTAUTH. +

RESOLVER_NOT_ONE_QUESTION query contained %1 questions, exactly one question was expected

A debug message, the resolver received a query that contained the number of
entries in the question section detailed in the message. This is a malformed
message, as a DNS query must contain only one question. The resolver will
return a message to the sender with the RCODE set to FORMERR.
-

RESOLVER_OPCODEUNS opcode %1 not supported by the resolver

-A debug message, the resolver received a message with an unsupported opcode -(it can only process QUERY opcodes). It will return a message to the sender -with the RCODE set to NOTIMP. -

RESOLVER_PARSEERR error parsing received message: %1 - returning %2

+

RESOLVER_NO_ROOT_ADDRESS no root addresses available

+A warning message during startup, indicates that no root addresses have been +set. This may be because the resolver will get them from a priming query. +

RESOLVER_PARSE_ERROR error parsing received message: %1 - returning %2

A debug message noting that the resolver received a message and the parsing of the body of the message failed due to some non-protocol related reason (although the parsing of the header succeeded). The message parameters give a textual description of the problem and the RCODE returned. -

RESOLVER_PRINTMSG print message command, aeguments are: %1

+

RESOLVER_PRINT_COMMAND print message command, arguments are: %1

This message is logged when a "print_message" command is received over the command channel. -

RESOLVER_PROTERR protocol error parsing received message: %1 - returning %2

+

RESOLVER_PROTOCOL_ERROR protocol error parsing received message: %1 - returning %2

A debug message noting that the resolver received a message and the parsing of the body of the message failed due to some protocol error (although the parsing of the header succeeded). The message parameters give a textual description of the problem and the RCODE returned. -

RESOLVER_QUSETUP query setup

+

RESOLVER_QUERY_ACCEPTED query accepted: '%1/%2/%3' from %4

+A debug message that indicates an incoming query is accepted by the
+query ACL. The log message shows the query in the form of
+<query name>/<query type>/<query class>, and the client that sent the
+query in the form of <source IP address>#<source port>.
+

RESOLVER_QUERY_DROPPED query dropped: '%1/%2/%3' from %4

+An informational message that indicates an incoming query is dropped
+by the query ACL. Unlike the RESOLVER_QUERY_REJECTED
+case, the server does not return any response. The log message
+shows the query in the form of <query name>/<query type>/<query
+class>, and the client that sent the query in the form of <source
+IP address>#<source port>.
+

RESOLVER_QUERY_REJECTED query rejected: '%1/%2/%3' from %4

+An informational message that indicates an incoming query is rejected
+by the query ACL. This results in a response with an RCODE of
+REFUSED. The log message shows the query in the form of <query
+name>/<query type>/<query class>, and the client that sent the
+query in the form of <source IP address>#<source port>.
+
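The three ACL outcomes described above differ in what the client observes. This is a hedged sketch; the action names and return values are illustrative, not BIND 10's resolver API:

```python
# Hedged sketch of the three query-ACL outcomes described above
# (ACCEPT, REJECT, DROP). Names and return values are illustrative.

def acl_outcome(action):
    """Map an ACL action to what the querying client observes."""
    if action == "ACCEPT":
        return "query is resolved normally"
    if action == "REJECT":
        return "response with RCODE REFUSED"  # client still gets an answer
    if action == "DROP":
        return None                           # no response at all
    raise ValueError("unknown ACL action: %s" % action)
```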

RESOLVER_QUERY_SETUP query setup

A debug message noting that the resolver is creating a RecursiveQuery object. -

RESOLVER_QUSHUT query shutdown

+

RESOLVER_QUERY_SHUTDOWN query shutdown

A debug message noting that the resolver is destroying a RecursiveQuery object. -

RESOLVER_QUTMOSMALL query timeout of %1 is too small

+

RESOLVER_QUERY_TIME_SMALL query timeout of %1 is too small

An error indicating that the configuration value specified for the query timeout is too small. -

RESOLVER_RECURSIVE running in recursive mode

-This is an informational message that appears at startup noting that the -resolver is running in recursive mode. -

RESOLVER_RECVMSG resolver has received a DNS message

+

RESOLVER_RECEIVED_MESSAGE resolver has received a DNS message

A debug message indicating that the resolver has received a message. Depending on the debug settings, subsequent log output will indicate the nature of the message. -

RESOLVER_RETRYNEG negative number of retries (%1) specified in the configuration

-An error message indicating that the resolver configuration has specified a -negative retry count. Only zero or positive values are valid. -

RESOLVER_ROOTADDR setting root address %1(%2)

-This message may appear multiple times during startup; it lists the root -addresses used by the resolver. -

RESOLVER_SERVICE service object created

+

RESOLVER_RECURSIVE running in recursive mode

+This is an informational message that appears at startup noting that the +resolver is running in recursive mode. +

RESOLVER_SERVICE_CREATED service object created

A debug message, output when the main service object (which handles the received queries) is created. -

RESOLVER_SETPARAM query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4

-A debug message, lists the parameters associated with the message. These are: +

RESOLVER_SET_PARAMS query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4

+A debug message, lists the parameters being set for the resolver. These are:
query timeout: the timeout (in ms) used for queries originated by the resolver
to upstream servers. Client timeout: the interval allowed to resolve a client
query: after this time, the resolver sends back a SERVFAIL to the client
@@ -819,14 +1168,20 @@
resolver gives up trying to resolve a query. Retry count: the number of times
the resolver will retry a query to an upstream server if it gets a timeout.

The client and lookup timeouts require a bit more explanation. The -resolution of the clent query might require a large number of queries to +resolution of the client query might require a large number of queries to upstream nameservers. Even if none of these queries timeout, the total time taken to perform all the queries may exceed the client timeout. When this happens, a SERVFAIL is returned to the client, but the resolver continues with the resolution process. Data received is added to the cache. However, -there comes a time - the lookup timeout - when even the resolve gives up. +there comes a time - the lookup timeout - when even the resolver gives up. At this point it will wait for pending upstream queries to complete or timeout and drop the query. +
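The ordering described above (client timeout fires first and sends a SERVFAIL while resolution continues; the lookup timeout later abandons the query) can be sketched as follows. The helper function and its event strings are assumptions for illustration, not BIND 10 code:

```python
# Illustrative timeline of the client and lookup timeouts described
# above. Names and event strings are assumptions, not BIND 10 code.

def timeout_events(elapsed_ms, client_timeout_ms, lookup_timeout_ms):
    """Return which timeout events have occurred after elapsed_ms."""
    events = []
    if elapsed_ms >= client_timeout_ms:
        # SERVFAIL goes to the client, but resolution continues
        events.append("SERVFAIL sent to client; resolution continues")
    if elapsed_ms >= lookup_timeout_ms:
        # the resolver gives up entirely
        events.append("resolution abandoned")
    return events
```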

RESOLVER_SET_QUERY_ACL query ACL is configured

+A debug message that appears when a new query ACL is configured for the +resolver. +

RESOLVER_SET_ROOT_ADDRESS setting root address %1(%2)

+This message may appear multiple times during startup; it lists the root +addresses used by the resolver.

RESOLVER_SHUTDOWN resolver shutdown complete

This information message is output when the resolver has shut down.

RESOLVER_STARTED resolver started

@@ -834,8 +1189,166 @@ This informational message is output by the resolver when all initialization has been completed and it is entering its main loop.

RESOLVER_STARTING starting resolver with command line '%1'

An informational message, this is output when the resolver starts up. -

RESOLVER_UNEXRESP received unexpected response, ignoring

+

RESOLVER_UNEXPECTED_RESPONSE received unexpected response, ignoring

A debug message noting that the server has received a response instead of a query and is ignoring it. +

RESOLVER_UNSUPPORTED_OPCODE opcode %1 not supported by the resolver

+A debug message, the resolver received a message with an unsupported opcode +(it can only process QUERY opcodes). It will return a message to the sender +with the RCODE set to NOTIMP. +

XFRIN_AXFR_DATABASE_FAILURE AXFR transfer of zone %1 failed: %2

+The AXFR transfer for the given zone has failed due to a database problem. +The error is shown in the log message. +

XFRIN_AXFR_INTERNAL_FAILURE AXFR transfer of zone %1 failed: %2

+The AXFR transfer for the given zone has failed due to an internal +problem in the bind10 python wrapper library. +The error is shown in the log message. +

XFRIN_AXFR_TRANSFER_FAILURE AXFR transfer of zone %1 failed: %2

+The AXFR transfer for the given zone has failed due to a protocol error. +The error is shown in the log message. +

XFRIN_AXFR_TRANSFER_STARTED AXFR transfer of zone %1 started

+A connection to the master server has been made, the serial value in +the SOA record has been checked, and a zone transfer has been started. +

XFRIN_AXFR_TRANSFER_SUCCESS AXFR transfer of zone %1 succeeded

+The AXFR transfer of the given zone was successfully completed. +

XFRIN_BAD_MASTER_ADDR_FORMAT bad format for master address: %1

+The given master address is not a valid IP address. +

XFRIN_BAD_MASTER_PORT_FORMAT bad format for master port: %1

+The master port as read from the configuration is not a valid port number. +

XFRIN_BAD_TSIG_KEY_STRING bad TSIG key string: %1

+The TSIG key string as read from the configuration does not represent +a valid TSIG key. +

XFRIN_BAD_ZONE_CLASS Invalid zone class: %1

+The zone class as read from the configuration is not a valid DNS class. +

XFRIN_CC_SESSION_ERROR error reading from cc channel: %1

+There was a problem reading from the command and control channel. The
+most likely cause is that the msgq daemon is not running.
+

XFRIN_COMMAND_ERROR error while executing command '%1': %2

+There was an error while the given command was being processed. The +error is given in the log message. +

XFRIN_CONNECT_MASTER error connecting to master at %1: %2

+There was an error opening a connection to the master. The error is +shown in the log message. +

XFRIN_IMPORT_DNS error importing python DNS module: %1

+There was an error importing the python DNS module pydnspp. The most +likely cause is a PYTHONPATH problem. +
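A guarded import of the kind that produces this message can be sketched as below; pydnspp is the real module name from the message, but the fallback handling shown is illustrative rather than the actual xfrin startup code:

```python
import sys

def import_dns_module():
    """Try to import BIND 10's python DNS wrapper, printing a
    PYTHONPATH-style hint on failure. Returns the module or None."""
    try:
        import pydnspp
        return pydnspp
    except ImportError as exc:
        sys.stderr.write("error importing python DNS module: %s\n" % exc)
        sys.stderr.write("check that PYTHONPATH includes the BIND 10 "
                         "python libraries\n")
        return None
```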

XFRIN_MSGQ_SEND_ERROR error while contacting %1 and %2

+There was a problem sending a message to the xfrout module or the +zone manager. This most likely means that the msgq daemon has quit or +was killed. +

XFRIN_MSGQ_SEND_ERROR_ZONE_MANAGER error while contacting %1

+There was a problem sending a message to the zone manager. This most +likely means that the msgq daemon has quit or was killed. +

XFRIN_RETRANSFER_UNKNOWN_ZONE got notification to retransfer unknown zone %1

+There was an internal command to retransfer the given zone, but the +zone is not known to the system. This may indicate that the configuration +for xfrin is incomplete, or there was a typographical error in the +zone name in the configuration. +

XFRIN_STARTING starting xfrin with command line '%1'

+An informational message, this is output when the xfrin daemon starts up. +

XFRIN_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the xfrin daemon. The +daemon will now shut down. +

XFRIN_UNKNOWN_ERROR unknown error: %1

+An uncaught exception was raised while running the xfrin daemon. The +exception message is printed in the log message. +

XFROUT_AXFR_TRANSFER_DONE transfer of %1/%2 complete

+The transfer of the given zone has been completed successfully, or was +aborted due to a shutdown event. +

XFROUT_AXFR_TRANSFER_ERROR error transferring zone %1/%2: %3

+An uncaught exception was encountered while sending the response to +an AXFR query. The error message of the exception is included in the +log message, but this error most likely points to incomplete exception +handling in the code. +

XFROUT_AXFR_TRANSFER_FAILED transfer of %1/%2 failed, rcode: %3

+A transfer out for the given zone failed. An error response is sent +to the client. The given rcode is the rcode that is set in the error +response. This is either NOTAUTH (we are not authoritative for the +zone), SERVFAIL (our internal database is missing the SOA record for +the zone), or REFUSED (the limit of simultaneous outgoing AXFR +transfers, as specified by the configuration value +Xfrout/max_transfers_out, has been reached). +

XFROUT_AXFR_TRANSFER_STARTED transfer of zone %1/%2 has started

+A transfer out of the given zone has started. +

XFROUT_BAD_TSIG_KEY_STRING bad TSIG key string: %1

+The TSIG key string as read from the configuration does not represent +a valid TSIG key. +

XFROUT_CC_SESSION_ERROR error reading from cc channel: %1

+There was a problem reading from the command and control channel. The +most likely cause is that the msgq daemon is not running. +

XFROUT_CC_SESSION_TIMEOUT_ERROR timeout waiting for cc response

+There was a problem reading a response from another module over the +command and control channel. The most likely cause is that the +configuration manager b10-cfgmgr is not running. +

XFROUT_FETCH_REQUEST_ERROR socket error while fetching a request from the auth daemon

+There was a socket error while contacting the b10-auth daemon to +fetch a transfer request. The auth daemon may have shut down. +

XFROUT_HANDLE_QUERY_ERROR error while handling query: %1

+There was a general error handling an xfrout query. The error is shown +in the message. In principle this error should not appear, and it points +to an oversight in catching exceptions in the right place. However, to +ensure the daemon keeps running, this error is caught and reported. +

XFROUT_IMPORT error importing python module: %1

+There was an error importing a python module. One of the modules needed +by xfrout could not be found. This suggests that either some libraries +are missing on the system, or the PYTHONPATH variable is not correct. +The specific place where this library needs to be depends on your +system and your specific installation. +

XFROUT_NEW_CONFIG Update xfrout configuration

+New configuration settings have been sent from the configuration +manager. The xfrout daemon will now apply them. +

XFROUT_NEW_CONFIG_DONE Update xfrout configuration done

+The xfrout daemon is now done reading the new configuration settings +received from the configuration manager. +

XFROUT_NOTIFY_COMMAND received command to send notifies for %1/%2

+The xfrout daemon received a command on the command channel that +NOTIFY packets should be sent for the given zone. +

XFROUT_PARSE_QUERY_ERROR error parsing query: %1

+There was a parse error while reading an incoming query. The parse +error is shown in the log message. A remote client sent a packet we +do not understand or support. The xfrout request will be ignored. +In general, this should only occur for unexpected problems like +memory allocation failures, as the query should already have been +parsed by the b10-auth daemon, before it was passed here. +

XFROUT_PROCESS_REQUEST_ERROR error processing transfer request: %2

+There was an error processing a transfer request. The error is included +in the log message, but at this point no specific information other +than that could be given. This points to incomplete exception handling +in the code. +

XFROUT_RECEIVED_SHUTDOWN_COMMAND shutdown command received

+The xfrout daemon received a shutdown command from the command channel +and will now shut down. +

XFROUT_RECEIVE_FILE_DESCRIPTOR_ERROR error receiving the file descriptor for an XFR connection

+There was an error receiving the file descriptor for the transfer +request. Normally, the request is received by b10-auth, and passed on +to the xfrout daemon, so it can answer directly. However, there was a +problem receiving this file descriptor. The request will be ignored. +

XFROUT_REMOVE_OLD_UNIX_SOCKET_FILE_ERROR error removing unix socket file %1: %2

+The unix socket file xfrout needs for contact with the auth daemon +already exists, and needs to be removed first, but there is a problem +removing it. It is likely that we do not have permission to remove +this file. The specific error is shown in the log message. The xfrout +daemon will shut down. +

XFROUT_REMOVE_UNIX_SOCKET_FILE_ERROR error clearing unix socket file %1: %2

+When shutting down, the xfrout daemon tried to clear the unix socket +file used for communication with the auth daemon. It failed to remove +the file. The reason for the failure is given in the error message. +

XFROUT_SOCKET_SELECT_ERROR error while calling select() on request socket: %1

+There was an error while calling select() on the socket that informs +the xfrout daemon that a new xfrout request has arrived. This should +only be the result of a rare local error, such as a memory allocation +failure, and shouldn't happen under normal conditions. The error is +included in the log message. +
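The select() pattern described here can be sketched as follows; wait_for_request is a hypothetical helper, not the actual xfrout code:

```python
import select
import socket

def wait_for_request(sock, timeout=1.0):
    """Return True if sock becomes readable before the timeout.
    An OSError raised by select() here would correspond to the
    XFROUT_SOCKET_SELECT_ERROR case."""
    readable, _, _ = select.select([sock], [], [], timeout)
    return bool(readable)

# Tiny demonstration with a local socket pair standing in for the
# request socket:
a, b = socket.socketpair()
b.send(b"request")
ready = wait_for_request(a, timeout=0.5)
a.close()
b.close()
```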

XFROUT_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down

+There was a keyboard interrupt signal to stop the xfrout daemon. The +daemon will now shut down. +

XFROUT_STOPPING the xfrout daemon is shutting down

+The current transfer is aborted, as the xfrout daemon is shutting down. +

XFROUT_UNIX_SOCKET_FILE_IN_USE another xfrout process seems to be using the unix socket file %1

+While starting up, the xfrout daemon tried to clear the unix domain +socket needed for contacting the b10-auth daemon to pass requests +on, but the file is in use. The most likely cause is that another +xfrout daemon process is still running. This xfrout daemon (the one +printing this message) will not start.
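The startup check implied by this message can be sketched like so. claim_socket_file is a hypothetical helper that probes an existing socket file and removes it only when nothing is listening; the real xfrout logic may differ:

```python
import os
import socket

def claim_socket_file(path):
    """Return True if path is free to use (removing a stale file if
    needed), or False if another process is still listening on it."""
    if not os.path.exists(path):
        return True
    probe = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        probe.connect(path)
    except OSError:
        # Nothing answered: the file is stale and can be removed.
        os.unlink(path)
        return True
    else:
        # A live process accepted the connection; refuse to start.
        probe.close()
        return False
```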

diff --git a/doc/guide/bind10-messages.xml b/doc/guide/bind10-messages.xml index eaa8bb99a1..d146a9ca56 100644 --- a/doc/guide/bind10-messages.xml +++ b/doc/guide/bind10-messages.xml @@ -5,6 +5,12 @@ %version; ]> + @@ -62,16 +68,16 @@ - -ASIODNS_FETCHCOMP upstream fetch to %1(%2) has now completed + +ASIODNS_FETCH_COMPLETED upstream fetch to %1(%2) has now completed -A debug message, this records the the upstream fetch (a query made by the +A debug message, this records that the upstream fetch (a query made by the resolver on behalf of its client) to the specified address has completed. - -ASIODNS_FETCHSTOP upstream fetch to %1(%2) has been stopped + +ASIODNS_FETCH_STOPPED upstream fetch to %1(%2) has been stopped An external component has requested the halting of an upstream fetch. This is an allowed operation, and the message should only appear if debug is @@ -79,27 +85,27 @@ enabled. - -ASIODNS_OPENSOCK error %1 opening %2 socket to %3(%4) + +ASIODNS_OPEN_SOCKET error %1 opening %2 socket to %3(%4) The asynchronous I/O code encountered an error when trying to open a socket of the specified protocol in order to send a message to the target address. -The the number of the system error that cause the problem is given in the +The number of the system error that cause the problem is given in the message. - -ASIODNS_RECVSOCK error %1 reading %2 data from %3(%4) + +ASIODNS_READ_DATA error %1 reading %2 data from %3(%4) -The asynchronous I/O code encountered an error when trying read data from -the specified address on the given protocol. The the number of the system +The asynchronous I/O code encountered an error when trying to read data from +the specified address on the given protocol. The number of the system error that cause the problem is given in the message. - -ASIODNS_RECVTMO receive timeout while waiting for data from %1(%2) + +ASIODNS_READ_TIMEOUT receive timeout while waiting for data from %1(%2) An upstream fetch from the specified address timed out. 
This may happen for any number of reasons and is most probably a problem at the remote server @@ -108,8 +114,8 @@ enabled. - -ASIODNS_SENDSOCK error %1 sending data using %2 to %3(%4) + +ASIODNS_SEND_DATA error %1 sending data using %2 to %3(%4) The asynchronous I/O code encountered an error when trying send data to the specified address on the given protocol. The the number of the system @@ -117,20 +123,674 @@ error that cause the problem is given in the message. - -ASIODNS_UNKORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3) + +ASIODNS_UNKNOWN_ORIGIN unknown origin for ASIO error code %1 (protocol: %2, address %3) -This message should not appear and indicates an internal error if it does. -Please enter a bug report. +An internal consistency check on the origin of a message from the +asynchronous I/O module failed. This may indicate an internal error; +please submit a bug report. - -ASIODNS_UNKRESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3) + +ASIODNS_UNKNOWN_RESULT unknown result (%1) when IOFetch::stop() was executed for I/O to %2(%3) -The termination method of the resolver's upstream fetch class was called with -an unknown result code (which is given in the message). This message should -not appear and may indicate an internal error. Please enter a bug report. +An internal error indicating that the termination method of the resolver's +upstream fetch class was called with an unknown result code (which is +given in the message). Please submit a bug report. + + + + +AUTH_AXFR_ERROR error handling AXFR request: %1 + +This is a debug message produced by the authoritative server when it +has encountered an error processing an AXFR request. The message gives +the reason for the error, and the server will return a SERVFAIL code to +the sender. + + + + +AUTH_AXFR_UDP AXFR query received over UDP + +This is a debug message output when the authoritative server has received +an AXFR query over UDP. 
Use of UDP for AXFRs is not permitted by the +protocol, so the server will return a FORMERR error to the sender. + + + + +AUTH_COMMAND_FAILED execution of command channel instruction '%1' failed: %2 + +Execution of the specified command by the authoritative server failed. The +message contains the reason for the failure. + + + + +AUTH_CONFIG_CHANNEL_CREATED configuration session channel created + +This is a debug message indicating that the authoritative server has created +the channel to the configuration manager. It is issued during server +startup and is an indication that the initialization is proceeding normally. + + + + +AUTH_CONFIG_CHANNEL_ESTABLISHED configuration session channel established + +This is a debug message indicating that the authoritative server +has established communication with the configuration manager over the +previously-created channel. It is issued during server startup and is an +indication that the initialization is proceeding normally. + + + + +AUTH_CONFIG_CHANNEL_STARTED configuration session channel started + +This is a debug message, issued when the authoritative server has +posted a request to be notified when new configuration information is +available. It is issued during server startup and is an indication that +the initialization is proceeding normally. + + + + +AUTH_CONFIG_LOAD_FAIL load of configuration failed: %1 + +An attempt to configure the server with information from the configuration +database during the startup sequence has failed. (The reason for +the failure is given in the message.) The server will continue its +initialization although it may not be configured in the desired way. + + + + +AUTH_CONFIG_UPDATE_FAIL update of configuration failed: %1 + +An attempt to update the configuration of the server with information +from the configuration database has failed, the reason being given in +the message.
+ + + + +AUTH_DATA_SOURCE data source database file: %1 + +This is a debug message produced by the authoritative server when it accesses a +database data source, listing the file that is being accessed. + + + + +AUTH_DNS_SERVICES_CREATED DNS services created + +This is a debug message indicating that the component that will handle +incoming queries for the authoritative server (DNSServices) has been +successfully created. It is issued during server startup and is an indication +that the initialization is proceeding normally. + + + + +AUTH_HEADER_PARSE_FAIL unable to parse header in received DNS packet: %1 + +This is a debug message, generated by the authoritative server when an +attempt to parse the header of a received DNS packet has failed. (The +reason for the failure is given in the message.) The server will drop the +packet. + + + + +AUTH_LOAD_TSIG loading TSIG keys + +This is a debug message indicating that the authoritative server +has requested the keyring holding TSIG keys from the configuration +database. It is issued during server startup and is an indication that the +initialization is proceeding normally. + + + + +AUTH_LOAD_ZONE loaded zone %1/%2 + +This debug message is issued during the processing of the 'loadzone' command +when the authoritative server has successfully loaded the named zone of the +named class. + + + + +AUTH_MEM_DATASRC_DISABLED memory data source is disabled for class %1 + +This is a debug message reporting that the authoritative server has +discovered that the memory data source is disabled for the given class. + + + + +AUTH_MEM_DATASRC_ENABLED memory data source is enabled for class %1 + +This is a debug message reporting that the authoritative server has +discovered that the memory data source is enabled for the given class.
+ + + + +AUTH_NOTIFY_QUESTIONS invalid number of questions (%1) in incoming NOTIFY + +This debug message is logged by the authoritative server when it receives +a NOTIFY packet that contains zero or more than one question. (A valid +NOTIFY packet contains one question.) The server will return a FORMERR +error to the sender. + + + + +AUTH_NOTIFY_RRTYPE invalid question RR type (%1) in incoming NOTIFY + +This debug message is logged by the authoritative server when it receives +a NOTIFY packet that has an RR type other than SOA in the +question section. (The RR type received is included in the message.) The +server will return a FORMERR error to the sender. + + + + +AUTH_NO_STATS_SESSION session interface for statistics is not available + +The authoritative server had no session with the statistics module at the +time it attempted to send it data: the attempt has been abandoned. This +could be an error in configuration. + + + + +AUTH_NO_XFRIN received NOTIFY but XFRIN session is not running + +This is a debug message produced by the authoritative server when it receives +a NOTIFY packet but the XFRIN process is not running. The packet will be +dropped and nothing returned to the sender. + + + + +AUTH_PACKET_PARSE_ERROR unable to parse received DNS packet: %1 + +This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to something other +than a protocol error. The reason for the failure is given in the message; +the server will return a SERVFAIL error code to the sender. + + + + +AUTH_PACKET_PROTOCOL_ERROR DNS packet protocol error: %1. Returning %2 + +This is a debug message, generated by the authoritative server when an +attempt to parse a received DNS packet has failed due to a protocol error. +The reason for the failure is given in the message, as is the error code +that will be returned to the sender.
+ + + + +AUTH_PACKET_RECEIVED message received:\n%1 + +This is a debug message output by the authoritative server when it +receives a valid DNS packet. + +Note: This message includes the packet received, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. + + + + +AUTH_PROCESS_FAIL message processing failure: %1 + +This message is generated by the authoritative server when it has +encountered an internal error whilst processing a received packet: +the cause of the error is included in the message. + +The server will return a SERVFAIL error code to the sender of the packet. +However, this message indicates a potential error in the server. +Please open a bug ticket for this issue. + + + + +AUTH_RECEIVED_COMMAND command '%1' received + +This is a debug message issued when the authoritative server has received +a command on the command channel. + + + + +AUTH_RECEIVED_SENDSTATS command 'sendstats' received + +This is a debug message issued when the authoritative server has received +a command from the statistics module to send it data. The 'sendstats' +command is handled differently to other commands, which is why the debug +message associated with it has its own code. + + + + +AUTH_RESPONSE_RECEIVED received response message, ignoring + +This is a debug message, this is output if the authoritative server +receives a DNS packet with the QR bit set, i.e. a DNS response. The +server ignores the packet as it only responds to question packets. + + + + +AUTH_SEND_ERROR_RESPONSE sending an error response (%1 bytes):\n%2 + +This is a debug message recording that the authoritative server is sending +an error response to the originator of the query. A previous message will +have recorded details of the failure. 
+ +Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. + + + + +AUTH_SEND_NORMAL_RESPONSE sending a response (%1 bytes):\n%2 + +This is a debug message recording that the authoritative server is sending +a response to the originator of a query. + +Note: This message includes the packet sent, rendered in the form of +multiple lines of text. For this reason, it is suggested that this log message +not be routed to the syslog file, where the multiple lines could confuse +programs that expect a format of one message per line. + + + + +AUTH_SERVER_CREATED server created + +An informational message indicating that the authoritative server process has +been created and is initializing. The AUTH_SERVER_STARTED message will be +output when initialization has successfully completed and the server starts +accepting queries. + + + + +AUTH_SERVER_FAILED server failed: %1 + +The authoritative server has encountered a fatal error and is terminating. The +reason for the failure is included in the message. + + + + +AUTH_SERVER_STARTED server started + +Initialization of the authoritative server has completed successfully +and it is entering the main loop, waiting for queries to arrive. + + + + +AUTH_SQLITE3 nothing to do for loading sqlite3 + +This is a debug message indicating that the authoritative server has +found that the data source it is loading is an SQLite3 data source, +so no further validation is needed. + + + + +AUTH_STATS_CHANNEL_CREATED STATS session channel created + +This is a debug message indicating that the authoritative server has +created a channel to the statistics process. It is issued during server +startup and is an indication that the initialization is proceeding normally.
+ + + + +AUTH_STATS_CHANNEL_ESTABLISHED STATS session channel established + +This is a debug message indicating that the authoritative server +has established communication over the previously-created statistics +channel. It is issued during server startup and is an indication that the +initialization is proceeding normally. + + + + +AUTH_STATS_COMMS communication error in sending statistics data: %1 + +An error was encountered when the authoritative server tried to send data +to the statistics daemon. The message includes additional information +describing the reason for the failure. + + + + +AUTH_STATS_TIMEOUT timeout while sending statistics data: %1 + +The authoritative server sent data to the statistics daemon but received +no acknowledgement within the specified time. The message includes +additional information describing the reason for the failure. + + + + +AUTH_STATS_TIMER_DISABLED statistics timer has been disabled + +This is a debug message indicating that the statistics timer has been +disabled in the authoritative server and no statistics information is +being produced. + + + + +AUTH_STATS_TIMER_SET statistics timer set to %1 second(s) + +This is a debug message indicating that the statistics timer has been +enabled and that the authoritative server will produce statistics data +at the specified interval. + + + + +AUTH_UNSUPPORTED_OPCODE unsupported opcode: %1 + +This is a debug message, produced when a received DNS packet being +processed by the authoritative server has been found to contain an +unsupported opcode. (The opcode is included in the message.) The server +will return an error code of NOTIMPL to the sender. + + + + +AUTH_XFRIN_CHANNEL_CREATED XFRIN session channel created + +This is a debug message indicating that the authoritative server has +created a channel to the XFRIN (Transfer-in) process. It is issued +during server startup and is an indication that the initialization is +proceeding normally.
+ + + + +AUTH_XFRIN_CHANNEL_ESTABLISHED XFRIN session channel established + +This is a debug message indicating that the authoritative server has +established communication over the previously-created channel to the +XFRIN (Transfer-in) process. It is issued during server startup and is an +indication that the initialization is proceeding normally. + + + + +AUTH_ZONEMGR_COMMS error communicating with zone manager: %1 + +This is a debug message output during the processing of a NOTIFY request. +An error (listed in the message) has been encountered whilst communicating +with the zone manager. The NOTIFY request will not be honored. + + + + +AUTH_ZONEMGR_ERROR received error response from zone manager: %1 + +This is a debug message output during the processing of a NOTIFY +request. The zone manager component has been informed of the request, +but has returned an error response (which is included in the message). The +NOTIFY request will not be honored. + + + + +CC_ASYNC_READ_FAILED asynchronous read failed + +This marks a low-level error: the library tried to read data from the message +queue daemon asynchronously, but the ASIO library returned an error. + + + + +CC_CONN_ERROR error connecting to message queue (%1) + +It is impossible to reach the message queue daemon for the reason given. It +is unlikely that the affected program can usefully continue running, as +communication with the rest of BIND 10 is vital for its components. + + + + +CC_DISCONNECT disconnecting from message queue daemon + +The library is disconnecting from the message queue daemon. This debug message +indicates that the program is trying to shut down gracefully. + + + + +CC_ESTABLISH trying to establish connection with message queue daemon at %1 + +This debug message indicates that the command channel library is about to +connect to the message queue daemon, which should be listening on the UNIX-domain +socket listed in the output.
+ + + + +CC_ESTABLISHED successfully connected to message queue daemon + +This debug message indicates that the connection was successfully made; this +should follow CC_ESTABLISH. + + + + +CC_GROUP_RECEIVE trying to receive a message + +Debug message, noting that a message is expected to come over the command +channel. + + + + +CC_GROUP_RECEIVED message arrived ('%1', '%2') + +Debug message, noting that we successfully received a message (its envelope and +payload listed). This follows CC_GROUP_RECEIVE, but might happen some time +later, depending on whether we waited for it or just polled. + + + + +CC_GROUP_SEND sending message '%1' to group '%2' + +Debug message, we're about to send a message over the command channel. + + + + +CC_INVALID_LENGTHS invalid length parameters (%1, %2) + +This happens when garbage comes over the command channel or some kind of +confusion happens in the program. The data received from the socket makes no +sense if interpreted as the lengths of a message. The first value is the total +length of the message, the second the length of the header. The header and its +length field (2 bytes) are counted in the total length. + + + + +CC_LENGTH_NOT_READY length not ready + +There should be data representing the length of a message on the socket, but it +is not there. + + + + +CC_NO_MESSAGE no message ready to be received yet + +The program polled for incoming messages, but there was no message waiting. +This is a debug message which may happen only after CC_GROUP_RECEIVE. + + + + +CC_NO_MSGQ unable to connect to message queue (%1) + +It isn't possible to connect to the message queue daemon, for the reason listed. +It is unlikely any program will be able to continue without this communication. + + + + +CC_READ_ERROR error reading data from command channel (%1) + +A low-level error happened when the library tried to read data from the +command channel socket. The reason is listed.
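The length layout described for CC_INVALID_LENGTHS and CC_ZERO_LENGTH can be sketched as a small parser. The 4-byte total-length and 2-byte header-length field widths are assumptions made for illustration; consult the msgq wire format for the authoritative definition:

```python
import struct

def split_message(wire):
    """Split raw command-channel data into (header, payload).

    The total length covers the 2-byte header-length field, the header,
    and the payload; ValueError marks the situations reported by
    CC_ZERO_LENGTH and CC_INVALID_LENGTHS.
    """
    if len(wire) < 6:
        raise ValueError("length fields not ready")
    total_len, header_len = struct.unpack(">IH", wire[:6])
    if total_len == 0:
        raise ValueError("invalid message length (0)")
    if header_len > total_len - 2:
        raise ValueError("invalid length parameters (%d, %d)"
                         % (total_len, header_len))
    body = wire[4:4 + total_len]
    header = body[2:2 + header_len]
    payload = body[2 + header_len:]
    return header, payload
```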
+ + + + +CC_READ_EXCEPTION error reading data from command channel (%1) + +We received an exception while trying to read data from the command +channel socket. The reason is listed. + + + + +CC_REPLY replying to message from '%1' with '%2' + +Debug message, noting we're sending a response to the original message +with the given envelope. + + + + +CC_SET_TIMEOUT setting timeout to %1ms + +Debug message. A timeout for which the program is willing to wait for a reply +is being set. + + + + +CC_START_READ starting asynchronous read + +Debug message. From now on, when a message (or command) comes, it'll wake the +program and the library will automatically pass it over to the correct place. + + + + +CC_SUBSCRIBE subscribing to communication group %1 + +Debug message. The program wants to receive messages addressed to this group. + + + + +CC_TIMEOUT timeout reading data from command channel + +The program waited too long for data from the command channel (usually when it +sent a query to a different program and it didn't answer for whatever reason). + + + + +CC_UNSUBSCRIBE unsubscribing from communication group %1 + +Debug message. The program no longer wants to receive messages addressed to +this group. + + + + +CC_WRITE_ERROR error writing data to command channel (%1) + +A low-level error happened when the library tried to write data to the command +channel socket. + + + + +CC_ZERO_LENGTH invalid message length (0) + +The library received a message length of zero, which makes no sense, since +all messages must contain at least the envelope. + + + + +CFGMGR_AUTOMATIC_CONFIG_DATABASE_UPDATE Updating configuration database from version %1 to %2 + +An older version of the configuration database has been found, from which +there was an automatic upgrade path to the current version. These changes +are now applied, and no action from the administrator is necessary.
+ + + + +CFGMGR_CC_SESSION_ERROR Error connecting to command channel: %1 + +The configuration manager daemon was unable to connect to the messaging +system. The most likely cause is that msgq is not running. + + + + +CFGMGR_DATA_READ_ERROR error reading configuration database from disk: %1 + +There was a problem reading the persistent configuration data as stored +on disk. The file may be corrupted, or it is of a version from where +there is no automatic upgrade path. The file needs to be repaired or +removed. The configuration manager daemon will now shut down. + + + + +CFGMGR_IOERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1 + +There was an IO error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the directory where +the file is stored does not exist, or is not writable. The updated +configuration is not stored. + + + + +CFGMGR_OSERROR_WHILE_WRITING_CONFIGURATION Unable to write configuration file; configuration not stored: %1 + +There was an OS error from the system while the configuration manager +was trying to write the configuration database to disk. The specific +error is given. The most likely cause is that the system does not have +write access to the configuration database file. The updated +configuration is not stored. + + + + +CFGMGR_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the cfgmgr daemon. The +daemon will now shut down. @@ -148,32 +808,18 @@ The message itself is ignored by this module. CONFIG_CCSESSION_MSG_INTERNAL error handling CC session message: %1 -There was an internal problem handling an incoming message on the -command and control channel. An unexpected exception was thrown. This -most likely points to an internal inconsistency in the module code. 
The -exception message is appended to the log error, and the module will -continue to run, but will not send back an answer. +There was an internal problem handling an incoming message on the command +and control channel. An unexpected exception was thrown, details of +which are appended to the message. The module will continue to run, +but will not send back an answer. + +The most likely cause of this error is a programming error. Please raise +a bug report. - -CONFIG_FOPEN_ERR error opening %1: %2 - -There was an error opening the given file. - - - - -CONFIG_JSON_PARSE JSON parse error in %1: %2 - -There was a parse error in the JSON file. The given file does not appear -to be in valid JSON format. Please verify that the filename is correct -and that the contents are valid JSON. - - - - -CONFIG_MANAGER_CONFIG error getting configuration from cfgmgr: %1 + +CONFIG_GET_FAIL error getting configuration from cfgmgr: %1 The configuration manager returned an error when this module requested the configuration. The full error message answer from the configuration @@ -183,23 +829,40 @@ running configuration manager. - -CONFIG_MANAGER_MOD_SPEC module specification not accepted by cfgmgr: %1 + +CONFIG_JSON_PARSE JSON parse error in %1: %2 -The module specification file for this module was rejected by the -configuration manager. The full error message answer from the -configuration manager is appended to the log error. The most likely -cause is that the module is of a different (specification file) version -than the running configuration manager. +There was an error parsing the JSON file. The given file does not appear +to be in valid JSON format. Please verify that the filename is correct +and that the contents are valid JSON. - -CONFIG_MODULE_SPEC module specification error in %1: %2 + +CONFIG_MOD_SPEC_FORMAT module specification error in %1: %2 -The given file does not appear to be a valid specification file. 
Please -verify that the filename is correct and that its contents are a valid -BIND10 module specification. +The given file does not appear to be a valid specification file: details +are included in the message. Please verify that the filename is correct +and that its contents are a valid BIND10 module specification. + + + + +CONFIG_MOD_SPEC_REJECT module specification rejected by cfgmgr: %1 + +The specification file for this module was rejected by the configuration +manager. The full error message answer from the configuration manager is +appended to the log error. The most likely cause is that the module is of +a different (specification file) version than the running configuration +manager. + + + + +CONFIG_OPEN_FAIL error opening %1: %2 + +There was an error opening the given file. The reason for the failure +is included in the message. @@ -349,7 +1012,7 @@ returning the CNAME instead. DATASRC_MEM_CNAME_COEXIST can't add data to CNAME in domain '%1' This is the same problem as in MEM_CNAME_TO_NONEMPTY, but it happened the -other way around -- adding some outher data to CNAME. +other way around -- adding some other data to CNAME. @@ -401,7 +1064,7 @@ Debug information. A DNAME was found instead of the requested information. -DATASRC_MEM_DNAME_NS dNAME and NS can't coexist in non-apex domain '%1' +DATASRC_MEM_DNAME_NS DNAME and NS can't coexist in non-apex domain '%1' It was requested for DNAME and NS records to be put into the same domain which is not the apex (the top of the zone). This is forbidden by RFC @@ -544,7 +1207,7 @@ behaviour is specified by RFC 1034, section 4.3.3 -DATASRC_MEM_WILDCARD_DNAME dNAME record in wildcard domain '%1' +DATASRC_MEM_WILDCARD_DNAME DNAME record in wildcard domain '%1' The software refuses to load DNAME records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should @@ -554,7 +1217,7 @@ different tools. 
-DATASRC_MEM_WILDCARD_NS nS record in wildcard domain '%1' +DATASRC_MEM_WILDCARD_NS NS record in wildcard domain '%1' The software refuses to load NS records into a wildcard domain. It isn't explicitly forbidden, but the protocol is ambiguous about how this should @@ -666,7 +1329,7 @@ way down to the given domain. -DATASRC_QUERY_EMPTY_CNAME cNAME at '%1' is empty +DATASRC_QUERY_EMPTY_CNAME CNAME at '%1' is empty There was an CNAME and it was being followed. But it contains no records, so there's nowhere to go. There will be no answer. This indicates a problem @@ -905,7 +1568,7 @@ already. The code is 1 for error, 2 for not implemented. -DATASRC_QUERY_TOO_MANY_CNAMES cNAME chain limit exceeded at '%1' +DATASRC_QUERY_TOO_MANY_CNAMES CNAME chain limit exceeded at '%1' A CNAME led to another CNAME and it led to another, and so on. After 16 CNAMEs, the software gave up. Long CNAME chains are discouraged, and this @@ -962,14 +1625,14 @@ Debug information. The SQLite data source is closing the database file. -DATASRC_SQLITE_CREATE sQLite data source created +DATASRC_SQLITE_CREATE SQLite data source created Debug information. An instance of SQLite data source is being created. -DATASRC_SQLITE_DESTROY sQLite data source destroyed +DATASRC_SQLITE_DESTROY SQLite data source destroyed Debug information. An instance of SQLite data source is being destroyed. @@ -978,7 +1641,7 @@ Debug information. An instance of SQLite data source is being destroyed. DATASRC_SQLITE_ENCLOSURE looking for zone containing '%1' -Debug information. The SQLite data source is trying to identify, which zone +Debug information. The SQLite data source is trying to identify which zone should hold this domain. @@ -986,7 +1649,7 @@ should hold this domain. DATASRC_SQLITE_ENCLOSURE_NOTFOUND no zone contains it -Debug information. The last SQLITE_ENCLOSURE query was unsuccessful, there's +Debug information. The last SQLITE_ENCLOSURE query was unsuccessful; there's no such zone in our data. 
@@ -1050,7 +1713,7 @@ a referral and where it goes. DATASRC_SQLITE_FINDREF_BAD_CLASS class mismatch looking for referral ('%1' and '%2') -The SQLite data source was trying to identify, if there's a referral. But +The SQLite data source was trying to identify if there's a referral. But it contains different class than the query was for. @@ -1143,294 +1806,325 @@ generated. - -LOGIMPL_ABOVEDBGMAX debug level of %1 is too high and will be set to the maximum of %2 + +LOGIMPL_ABOVE_MAX_DEBUG debug level of %1 is too high and will be set to the maximum of %2 -A message from the underlying logger implementation code, the debug level -(as set by the string DEBGUGn) is above the maximum allowed value and has -been reduced to that value. +A message from the interface to the underlying logger implementation reporting +that the debug level (as set by an internally-created string DEBUGn, where n +is an integer, e.g. DEBUG22) is above the maximum allowed value and has +been reduced to that value. The appearance of this message may indicate +a programming error - please submit a bug report. - -LOGIMPL_BADDEBUG debug string is '%1': must be of the form DEBUGn + +LOGIMPL_BAD_DEBUG_STRING debug string '%1' has invalid format -The string indicating the extended logging level (used by the underlying -logger implementation code) is not of the stated form. In particular, -it starts DEBUG but does not end with an integer. +A message from the interface to the underlying logger implementation +reporting that an internally-created string used to set the debug level +is not of the correct format (it should be of the form DEBUGn, where n +is an integer, e.g. DEBUG22). The appearance of this message indicates +a programming error - please submit a bug report. 
-
-LOGIMPL_BELOWDBGMIN debug level of %1 is too low and will be set to the minimum of %2
+
+LOGIMPL_BELOW_MIN_DEBUG debug level of %1 is too low and will be set to the minimum of %2

-A message from the underlying logger implementation code, the debug level
-(as set by the string DEBGUGn) is below the minimum allowed value and has
-been increased to that value.
+A message from the interface to the underlying logger implementation reporting
+that the debug level (as set by an internally-created string DEBUGn, where n
+is an integer, e.g. DEBUG22) is below the minimum allowed value and has
+been increased to that value. The appearance of this message may indicate
+a programming error - please submit a bug report.




-
-MSG_BADDESTINATION unrecognized log destination: %1
+
+LOG_BAD_DESTINATION unrecognized log destination: %1

A logger destination value was given that was not recognized. The
destination should be one of "console", "file", or "syslog".




-
-MSG_BADSEVERITY unrecognized log severity: %1
+
+LOG_BAD_SEVERITY unrecognized log severity: %1

A logger severity value was given that was not recognized. The severity
should be one of "DEBUG", "INFO", "WARN", "ERROR", or "FATAL".




-
-MSG_BADSTREAM bad log console output stream: %1
+
+LOG_BAD_STREAM bad log console output stream: %1

-A log console output stream was given that was not recognized. The
-output stream should be one of "stdout", or "stderr"
+A log console output stream was given that was not recognized. The output
+stream should be one of "stdout" or "stderr".




-
-MSG_DUPLNS line %1: duplicate $NAMESPACE directive found
+
+LOG_DUPLICATE_MESSAGE_ID duplicate message ID (%1) in compiled code

-When reading a message file, more than one $NAMESPACE directive was found. In
-this version of the code, such a condition is regarded as an error and the
-read will be abandoned.
+During start-up, BIND10 detected that the given message identification had
+been defined multiple times in the BIND10 code.
+
+This has no ill-effects other than the possibility that an erroneous
+message may be logged. However, as it is indicative of a programming
+error, please submit a bug report.




-
-MSG_DUPMSGID duplicate message ID (%1) in compiled code
+
+LOG_DUPLICATE_NAMESPACE line %1: duplicate $NAMESPACE directive found

-Indicative of a programming error, when it started up, BIND10 detected that
-the given message ID had been registered by one or more modules. (All message
-IDs should be unique throughout BIND10.) This has no impact on the operation
-of the server other that erroneous messages may be logged. (When BIND10 loads
-the message IDs (and their associated text), if a duplicate ID is found it is
-discarded. However, when the module that supplied the duplicate ID logs that
-particular message, the text supplied by the module that added the original
-ID will be output - something that may bear no relation to the condition being
-logged.
+When reading a message file, more than one $NAMESPACE directive was found.
+Such a condition is regarded as an error and the read will be abandoned.




-
-MSG_IDNOTFND could not replace message text for '%1': no such message
+
+LOG_INPUT_OPEN_FAIL unable to open message file %1 for input: %2
+
+The program was not able to open the specified input message file for
+the reason given.
+
+
+
+
+LOG_INVALID_MESSAGE_ID line %1: invalid message identification '%2'
+
+An invalid message identification (ID) has been found during the read of
+a message file. Message IDs should comprise only alphanumeric characters
+and the underscore, and should not start with a digit.
+
+
+
+
+LOG_NAMESPACE_EXTRA_ARGS line %1: $NAMESPACE directive has too many arguments
+
+The $NAMESPACE directive in a message file takes a single argument, a
+namespace in which all the generated symbol names are placed. This error
+is generated when the compiler finds a $NAMESPACE directive with more
+than one argument.
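The message-file conventions referred to by these entries (a $NAMESPACE directive taking a single argument, and each message defined by a line starting with "%" that carries the ID and its text) can be illustrated with a short sketch of a message source file. The namespace, prefix, and message below are hypothetical examples for illustration, not identifiers from the BIND10 tree:

```
$NAMESPACE isc::example
$PREFIX EXAMPLE_

% EXAMPLE_STARTED server started on port %1
An informational message reporting that the (hypothetical) server has
completed its start-up and is now listening on the given port.
```

The lines following the "%" definition, up to the next directive or definition, form the explanatory text shown in documents such as this one.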
+
+
+
+
+LOG_NAMESPACE_INVALID_ARG line %1: $NAMESPACE directive has an invalid argument ('%2')
+
+The $NAMESPACE argument in a message file should be a valid C++ namespace.
+This message is output if the simple check on the syntax of the string
+carried out by the reader fails.
+
+
+
+
+LOG_NAMESPACE_NO_ARGS line %1: no arguments were given to the $NAMESPACE directive
+
+The $NAMESPACE directive in a message file takes a single argument,
+a C++ namespace in which all the generated symbol names are placed.
+This error is generated when the compiler finds a $NAMESPACE directive
+with no arguments.
+
+
+
+
+LOG_NO_MESSAGE_ID line %1: message definition line found without a message ID
+
+Within a message file, messages are defined by lines starting with a "%".
+The rest of the line should comprise the message ID and text describing
+the message. This error indicates the message compiler found a line in
+the message file comprising just the "%" and nothing else.
+
+
+
+
+LOG_NO_MESSAGE_TEXT line %1: line found containing a message ID ('%2') and no text
+
+Within a message file, messages are defined by lines starting with a "%".
+The rest of the line should comprise the message ID and text describing
+the message. This error indicates the message compiler found a line
+in the message file comprising just the "%" and message identification,
+but no text.
+
+
+
+
+LOG_NO_SUCH_MESSAGE could not replace message text for '%1': no such message

During start-up a local message file was read. A line with the listed
-message identification was found in the file, but the identification is not
-one contained in the compiled-in message dictionary. Either the message
-identification has been mis-spelled in the file, or the local file was used
-for an earlier version of the software and the message with that
-identification has been removed.
+message identification was found in the file, but the identification is
This message +may appear a number of times in the file, once for every such unknown +message identification. -This message may appear a number of times in the file, once for every such -unknown message identification. +There may be several reasons why this message may appear: + +- The message ID has been mis-spelled in the local message file. + +- The program outputting the message may not use that particular message +(e.g. it originates in a module not used by the program.) + +- The local file was written for an earlier version of the BIND10 software +and the later version no longer generates that message. + +Whatever the reason, there is no impact on the operation of BIND10. - -MSG_INVMSGID line %1: invalid message identification '%2' + +LOG_OPEN_OUTPUT_FAIL unable to open %1 for output: %2 -The concatenation of the prefix and the message identification is used as -a symbol in the C++ module; as such it may only contain +Originating within the logging code, the program was not able to open +the specified output file for the reason given. - -MSG_NOMSGID line %1: message definition line found without a message ID + +LOG_PREFIX_EXTRA_ARGS line %1: $PREFIX directive has too many arguments -Message definition lines are lines starting with a "%". The rest of the line -should comprise the message ID and text describing the message. This error -indicates the message compiler found a line in the message file comprising -just the "%" and nothing else. +Within a message file, the $PREFIX directive takes a single argument, +a prefix to be added to the symbol names when a C++ file is created. +This error is generated when the compiler finds a $PREFIX directive with +more than one argument. + +Note: the $PREFIX directive is deprecated and will be removed in a future +version of BIND10. 
-
-MSG_NOMSGTXT line %1: line found containing a message ID ('%2') and no text
+
+LOG_PREFIX_INVALID_ARG line %1: $PREFIX directive has an invalid argument ('%2')

-Message definition lines are lines starting with a "%". The rest of the line
-should comprise the message ID and text describing the message. This error
-is generated when a line is found in the message file that contains the
-leading "%" and the message identification but no text.
+Within a message file, the $PREFIX directive takes a single argument,
+a prefix to be added to the symbol names when a C++ file is created.
+As such, it must adhere to restrictions on C++ symbol names (e.g. may
+only contain alphanumeric characters or underscores, and may not start
+with a digit). A $PREFIX directive was found with an argument (given
+in the message) that violates those restrictions.
+
+Note: the $PREFIX directive is deprecated and will be removed in a future
+version of BIND10.




-
-MSG_NSEXTRARG line %1: $NAMESPACE directive has too many arguments
+
+LOG_READING_LOCAL_FILE reading local message file %1

-The $NAMESPACE directive takes a single argument, a namespace in which all the
-generated symbol names are placed. This error is generated when the
-compiler finds a $NAMESPACE directive with more than one argument.
+This is an informational message output by BIND10 when it starts to read
+a local message file. (A local message file may replace the text of
+one or more messages; the ID of the message will not be changed though.)




-
-MSG_NSINVARG line %1: $NAMESPACE directive has an invalid argument ('%2')
-
-The $NAMESPACE argument should be a valid C++ namespace. The reader does a
-cursory check on its validity, checking that the characters in the namespace
-are correct. The error is generated when the reader finds an invalid
-character. (Valid are alphanumeric characters, underscores and colons.)
- - - - -MSG_NSNOARG line %1: no arguments were given to the $NAMESPACE directive - -The $NAMESPACE directive takes a single argument, a namespace in which all the -generated symbol names are placed. This error is generated when the -compiler finds a $NAMESPACE directive with no arguments. - - - - -MSG_OPENIN unable to open message file %1 for input: %2 - -The program was not able to open the specified input message file for the -reason given. - - - - -MSG_OPENOUT unable to open %1 for output: %2 - -The program was not able to open the specified output file for the reason -given. - - - - -MSG_PRFEXTRARG line %1: $PREFIX directive has too many arguments - -The $PREFIX directive takes a single argument, a prefix to be added to the -symbol names when a C++ .h file is created. This error is generated when the -compiler finds a $PREFIX directive with more than one argument. - - - - -MSG_PRFINVARG line %1: $PREFIX directive has an invalid argument ('%2') - -The $PREFIX argument is used in a symbol name in a C++ header file. As such, -it must adhere to restrictions on C++ symbol names (e.g. may only contain -alphanumeric characters or underscores, and may nor start with a digit). -A $PREFIX directive was found with an argument (given in the message) that -violates those restictions. - - - - -MSG_RDLOCMES reading local message file %1 - -This is an informational message output by BIND10 when it starts to read a -local message file. (A local message file may replace the text of one of more -messages; the ID of the message will not be changed though.) - - - - -MSG_READERR error reading from message file %1: %2 + +LOG_READ_ERROR error reading from message file %1: %2 The specified error was encountered reading from the named message file. 
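The local message file mechanism described by LOG_READING_LOCAL_FILE and LOG_NO_SUCH_MESSAGE above pairs a compiled-in message ID with replacement text. Assuming it uses the same "%"-prefixed line syntax as compiled message files (an assumption for illustration; the replacement wording below is hypothetical), such a file might look like:

```
% RESOLVER_NORMAL_QUERY query passed all checks; resolving it now
% RESOLVER_NOTIFY_RECEIVED ignoring NOTIFY: this server is not authoritative
```

Only the text changes; the IDs must match entries in the compiled-in dictionary, otherwise LOG_NO_SUCH_MESSAGE is reported for each unknown ID.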
- -MSG_UNRECDIR line %1: unrecognised directive '%2' + +LOG_UNRECOGNISED_DIRECTIVE line %1: unrecognised directive '%2' -A line starting with a dollar symbol was found, but the first word on the line -(shown in the message) was not a recognised message compiler directive. +Within a message file, a line starting with a dollar symbol was found +(indicating the presence of a directive) but the first word on the line +(shown in the message) was not recognised. - -MSG_WRITERR error writing to %1: %2 + +LOG_WRITE_ERROR error writing to %1: %2 -The specified error was encountered by the message compiler when writing to -the named output file. +The specified error was encountered by the message compiler when writing +to the named output file. - -NSAS_INVRESPSTR queried for %1 but got invalid response + +NSAS_FIND_NS_ADDRESS asking resolver to obtain A and AAAA records for %1 -This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for a RR for the -specified nameserver but received an invalid response. Either the success -function was called without a DNS message or the message was invalid on some -way. (In the latter case, the error should have been picked up elsewhere in -the processing logic, hence the raising of the error here.) +A debug message issued when the NSAS (nameserver address store - part +of the resolver) is making a callback into the resolver to retrieve the +address records for the specified nameserver. - -NSAS_INVRESPTC queried for %1 RR of type/class %2/%3, received response %4/%5 + +NSAS_FOUND_ADDRESS found address %1 for %2 -This message indicates an internal error in the nameserver address store -component (NSAS) of the resolver. The NSAS made a query for the given RR -type and class, but instead received an answer with the given type and class. 
+A debug message issued when the NSAS (nameserver address store - part
+of the resolver) has retrieved the given address for the specified
+nameserver through an external query.




-
-NSAS_LOOKUPCANCEL lookup for zone %1 has been cancelled
+
+NSAS_INVALID_RESPONSE queried for %1 but got invalid response

-A debug message, this is output when a NSAS (nameserver address store -
-part of the resolver) lookup for a zone has been cancelled.
+The NSAS (nameserver address store - part of the resolver) made a query
+for an RR for the specified nameserver but received an invalid response.
+Either the success function was called without a DNS message or the
+message was invalid in some way. (In the latter case, the error should
+have been picked up elsewhere in the processing logic, hence the raising
+of the error here.)
+
+This message indicates an internal error in the NSAS. Please raise a
+bug report.




-
-NSAS_LOOKUPZONE searching NSAS for nameservers for zone %1
+
+NSAS_LOOKUP_CANCEL lookup for zone %1 has been canceled

-A debug message, this is output when a call is made to the nameserver address
-store (part of the resolver) to obtain the nameservers for the specified zone.
+A debug message issued when an NSAS (nameserver address store - part of
+the resolver) lookup for a zone has been canceled.




-
-NSAS_NSADDR asking resolver to obtain A and AAAA records for %1
+
+NSAS_NS_LOOKUP_FAIL failed to lookup any %1 for %2

-A debug message, the NSAS (nameserver address store - part of the resolver) is
-making a callback into the resolver to retrieve the address records for the
-specified nameserver.
+A debug message issued when the NSAS (nameserver address store - part of
+the resolver) has been unable to retrieve the specified resource record
+for the specified nameserver. This is not necessarily a problem - the
+nameserver may be unreachable, in which case the NSAS will try other
+nameservers in the zone.
-
-NSAS_NSLKUPFAIL failed to lookup any %1 for %2
+
+NSAS_SEARCH_ZONE_NS searching NSAS for nameservers for zone %1

-A debug message, the NSAS (nameserver address store - part of the resolver)
-has been unable to retrieve the specified resource record for the specified
-nameserver. This is not necessarily a problem - the nameserver may be
-unreachable, in which case the NSAS will try other nameservers in the zone.
+A debug message output when a call is made to the NSAS (nameserver
+address store - part of the resolver) to obtain the nameservers for
+the specified zone.




-
-NSAS_NSLKUPSUCC found address %1 for %2
-
-A debug message, the NSAS (nameserver address store - part of the resolver)
-has retrieved the given address for the specified nameserver through an
-external query.
-
-
-
-
-NSAS_SETRTT reporting RTT for %1 as %2; new value is now %3
+
+NSAS_UPDATE_RTT update RTT for %1: was %2 ms, is now %3 ms

A NSAS (nameserver address store - part of the resolver) debug message
-reporting the round-trip time (RTT) for a query made to the specified
-nameserver. The RTT has been updated using the value given and the new RTT is
-displayed. (The RTT is subject to a calculation that damps out sudden
-changes. As a result, the new RTT is not necessarily equal to the RTT
-reported.)
+reporting the update of a round-trip time (RTT) for a query made to the
+specified nameserver. The RTT has been updated using the value given
+and the new RTT is displayed. (The RTT is subject to a calculation that
+damps out sudden changes. As a result, the new RTT used by the NSAS in
+future decisions of which nameserver to use is not necessarily equal to
+the RTT reported.)
+
+
+
+
+NSAS_WRONG_ANSWER queried for %1 RR of type/class %2/%3, received response %4/%5
+
+A NSAS (nameserver address store - part of the resolver) made a query for
+a resource record of a particular type and class, but instead received
+an answer with a different type and class.
+ +This message indicates an internal error in the NSAS. Please raise a +bug report. @@ -1460,16 +2154,16 @@ type> tuple in the cache; instead, the deepest delegation found is indicated. - -RESLIB_FOLLOWCNAME following CNAME chain to <%1> + +RESLIB_FOLLOW_CNAME following CNAME chain to <%1> A debug message, a CNAME response was received and another query is being issued for the <name, class, type> tuple. - -RESLIB_LONGCHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded + +RESLIB_LONG_CHAIN CNAME received in response to query for <%1>: CNAME chain length exceeded A debug message recording that a CNAME response has been received to an upstream query for the specified question (Previous debug messages will have indicated @@ -1479,26 +2173,26 @@ is where on CNAME points to another) and so an error is being returned. - -RESLIB_NONSRRSET no NS RRSet in referral response received to query for <%1> + +RESLIB_NO_NS_RRSET no NS RRSet in referral response received to query for <%1> A debug message, this indicates that a response was received for the specified -query and was categorised as a referral. However, the received message did +query and was categorized as a referral. However, the received message did not contain any NS RRsets. This may indicate a programming error in the response classification code. - -RESLIB_NSASLOOK looking up nameserver for zone %1 in the NSAS + +RESLIB_NSAS_LOOKUP looking up nameserver for zone %1 in the NSAS A debug message, the RunningQuery object is querying the NSAS for the nameservers for the specified zone. - -RESLIB_NXDOMRR NXDOMAIN/NXRRSET received in response to query for <%1> + +RESLIB_NXDOM_NXRR NXDOMAIN/NXRRSET received in response to query for <%1> A debug message recording that either a NXDOMAIN or an NXRRSET response has been received to an upstream query for the specified question. Previous debug @@ -1514,8 +2208,8 @@ are no retries left, an error will be reported. 
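The CNAME-following behaviour behind RESLIB_FOLLOW_CNAME and RESLIB_LONG_CHAIN above can be sketched in a few lines. This is an illustrative model only, not the resolver's actual code; the limit of 16 is the figure quoted for DATASRC_QUERY_TOO_MANY_CNAMES earlier, and the `cname_map` dictionary stands in for real upstream queries:

```python
# Illustrative model of CNAME chain following with a length limit.
# `cname_map` maps an owner name to its CNAME target; any name absent
# from the map is treated as the final (non-CNAME) answer.

MAX_CNAME_CHAIN = 16  # assumed limit, matching DATASRC_QUERY_TOO_MANY_CNAMES

def follow_cname_chain(name: str, cname_map: dict) -> str:
    """Follow CNAMEs from `name`; raise if the chain is too long."""
    hops = 0
    while name in cname_map:
        hops += 1
        if hops > MAX_CNAME_CHAIN:
            # corresponds to RESLIB_LONG_CHAIN: give up and return an error
            raise RuntimeError("CNAME chain length exceeded")
        # corresponds to RESLIB_FOLLOW_CNAME: issue another query
        name = cname_map[name]
    return name

print(follow_cname_chain("www.example.", {"www.example.": "host.example."}))
```

Long chains (and especially loops, as in the self-referencing case) are what the limit protects against; real deployments are discouraged from using chained CNAMEs at all.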
-
-RESLIB_PROTOCOLRTRY protocol error in answer for %1: %2 (retries left: %3)
+
+RESLIB_PROTOCOL_RETRY protocol error in answer for %1: %2 (retries left: %3)

A debug message indicating that a protocol error was received and that
the resolver is repeating the query to the same nameserver. After this
@@ -1523,14 +2217,35 @@ repeated query, there will be the indicated number of retries left.



-
-RESLIB_RCODERR RCODE indicates error in response to query for <%1>
+
+RESLIB_RCODE_ERR RCODE indicates error in response to query for <%1>

A debug message, the response to the specified query indicated an error
that is not covered by a specific code path. A SERVFAIL will be returned.



+
+RESLIB_RECQ_CACHE_FIND found <%1> in the cache (resolve() instance %2)
+
+This is a debug message and indicates that a RecursiveQuery object found
+the specified <name, class, type> tuple in the cache. The instance number
+at the end of the message indicates which of the two resolve() methods has
+been called.
+
+
+
+
+RESLIB_RECQ_CACHE_NO_FIND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2)
+
+This is a debug message and indicates that the look in the cache made by the
+RecursiveQuery::resolve() method did not find an answer, so a new RunningQuery
+object has been created to resolve the question. The instance number at
+the end of the message indicates which of the two resolve() methods has
+been called.
+
+
+

RESLIB_REFERRAL referral received in response to query for <%1>

A debug message recording that a referral response has been received to an
upstream query for the specified question. Previous debug messages will
have indicated the server to which the question was sent.

@@ -1540,35 +2255,14 @@
-
-RESLIB_REFERZONE referred to zone %1
+
+RESLIB_REFER_ZONE referred to zone %1

A debug message indicating that the last referral message was to the specified
zone.



-
-RESLIB_RESCAFND found <%1> in the cache (resolve() instance %2)
-
-This is a debug message and indicates that a RecursiveQuery object found the
-the specified <name, class, type> tuple in the cache.
The instance number -at the end of the message indicates which of the two resolve() methods has -been called. - - - - -RESLIB_RESCANOTFND did not find <%1> in the cache, starting RunningQuery (resolve() instance %2) - -This is a debug message and indicates that the look in the cache made by the -RecursiveQuery::resolve() method did not find an answer, so a new RunningQuery -object has been created to resolve the question. The instance number at -the end of the message indicates which of the two resolve() methods has -been called. - - - RESLIB_RESOLVE asked to resolve <%1> (resolve() instance %2) @@ -1579,8 +2273,8 @@ message indicates which of the two resolve() methods has been called. - -RESLIB_RRSETFND found single RRset in the cache when querying for <%1> (resolve() instance %2) + +RESLIB_RRSET_FOUND found single RRset in the cache when querying for <%1> (resolve() instance %2) A debug message, indicating that when RecursiveQuery::resolve queried the cache, a single RRset was found which was put in the answer. The instance @@ -1596,16 +2290,16 @@ A debug message giving the round-trip time of the last query and response. - -RESLIB_RUNCAFND found <%1> in the cache + +RESLIB_RUNQ_CACHE_FIND found <%1> in the cache This is a debug message and indicates that a RunningQuery object found the specified <name, class, type> tuple in the cache. - -RESLIB_RUNCALOOK looking up up <%1> in the cache + +RESLIB_RUNQ_CACHE_LOOKUP looking up up <%1> in the cache This is a debug message and indicates that a RunningQuery object has made a call to its doLookup() method to look up the specified <name, class, type> @@ -1613,16 +2307,16 @@ tuple, the first action of which will be to examine the cache. - -RESLIB_RUNQUFAIL failure callback - nameservers are unreachable + +RESLIB_RUNQ_FAIL failure callback - nameservers are unreachable A debug message indicating that a RunningQuery's failure callback has been called because all nameservers for the zone in question are unreachable. 
- -RESLIB_RUNQUSUCC success callback - sending query to %1 + +RESLIB_RUNQ_SUCCESS success callback - sending query to %1 A debug message indicating that a RunningQuery's success callback has been called because a nameserver has been found, and that a query is being sent @@ -1630,8 +2324,8 @@ to the specified nameserver. - -RESLIB_TESTSERV setting test server to %1(%2) + +RESLIB_TEST_SERVER setting test server to %1(%2) This is an internal debugging message and is only generated in unit tests. It indicates that all upstream queries from the resolver are being routed to @@ -1641,8 +2335,8 @@ operation, it is a warning message instead of a debug message. - -RESLIB_TESTUPSTR sending upstream query for <%1> to test server at %2 + +RESLIB_TEST_UPSTREAM sending upstream query for <%1> to test server at %2 This is a debug message and should only be seen in unit tests. A query for the specified <name, class, type> tuple is being sent to a test nameserver @@ -1658,8 +2352,8 @@ there are no retries left, an error will be reported. - -RESLIB_TIMEOUTRTRY query <%1> to %2 timed out, re-trying (retries left: %3) + +RESLIB_TIMEOUT_RETRY query <%1> to %2 timed out, re-trying (retries left: %3) A debug message indicating that the specified query has timed out and that the resolver is repeating the query to the same nameserver. After this @@ -1685,8 +2379,8 @@ tuple is being sent to a nameserver whose address is given in the message. - -RESOLVER_AXFRTCP AXFR request received over TCP + +RESOLVER_AXFR_TCP AXFR request received over TCP A debug message, the resolver received a NOTIFY message over TCP. The server cannot process it and will return an error message to the sender with the @@ -1694,8 +2388,8 @@ RCODE set to NOTIMP. - -RESOLVER_AXFRUDP AXFR request received over UDP + +RESOLVER_AXFR_UDP AXFR request received over UDP A debug message, the resolver received a NOTIFY message over UDP. 
The server cannot process it (and in any case, an AXFR request should be sent over TCP) @@ -1703,24 +2397,24 @@ and will return an error message to the sender with the RCODE set to FORMERR. - -RESOLVER_CLTMOSMALL client timeout of %1 is too small + +RESOLVER_CLIENT_TIME_SMALL client timeout of %1 is too small An error indicating that the configuration value specified for the query timeout is too small. - -RESOLVER_CONFIGCHAN configuration channel created + +RESOLVER_CONFIG_CHANNEL configuration channel created A debug message, output when the resolver has successfully established a connection to the configuration channel. - -RESOLVER_CONFIGERR error in configuration: %1 + +RESOLVER_CONFIG_ERROR error in configuration: %1 An error was detected in a configuration update received by the resolver. This may be in the format of the configuration message (in which case this is a @@ -1730,16 +2424,16 @@ will give more details. - -RESOLVER_CONFIGLOAD configuration loaded + +RESOLVER_CONFIG_LOADED configuration loaded A debug message, output when the resolver configuration has been successfully loaded. - -RESOLVER_CONFIGUPD configuration updated: %1 + +RESOLVER_CONFIG_UPDATED configuration updated: %1 A debug message, the configuration has been updated with the specified information. @@ -1753,16 +2447,16 @@ A debug message, output when the Resolver() object has been created. - -RESOLVER_DNSMSGRCVD DNS message received: %1 + +RESOLVER_DNS_MESSAGE_RECEIVED DNS message received: %1 A debug message, this always precedes some other logging message and is the formatted contents of the DNS packet that the other message refers to. - -RESOLVER_DNSMSGSENT DNS message of %1 bytes sent: %2 + +RESOLVER_DNS_MESSAGE_SENT DNS message of %1 bytes sent: %2 A debug message, this contains details of the response sent back to the querying system. @@ -1777,24 +2471,24 @@ resolver. All it can do is to shut down. 
- -RESOLVER_FWDADDR setting forward address %1(%2) + +RESOLVER_FORWARD_ADDRESS setting forward address %1(%2) This message may appear multiple times during startup, and it lists the forward addresses used by the resolver when running in forwarding mode. - -RESOLVER_FWDQUERY processing forward query + +RESOLVER_FORWARD_QUERY processing forward query The received query has passed all checks and is being forwarded to upstream servers. - -RESOLVER_HDRERR message received, exception when processing header: %1 + +RESOLVER_HEADER_ERROR message received, exception when processing header: %1 A debug message noting that an exception occurred during the processing of a received packet. The packet has been dropped. @@ -1809,40 +2503,34 @@ and will return an error message to the sender with the RCODE set to NOTIMP. - -RESOLVER_LKTMOSMALL lookup timeout of %1 is too small + +RESOLVER_LOOKUP_TIME_SMALL lookup timeout of %1 is too small An error indicating that the configuration value specified for the lookup timeout is too small. - -RESOLVER_NFYNOTAUTH NOTIFY arrived but server is not authoritative + +RESOLVER_MESSAGE_ERROR error parsing received message: %1 - returning %2 -The resolver received a NOTIFY message. As the server is not authoritative it -cannot process it, so it returns an error message to the sender with the RCODE -set to NOTAUTH. +A debug message noting that the resolver received a message and the +parsing of the body of the message failed due to some error (although +the parsing of the header succeeded). The message parameters give a +textual description of the problem and the RCODE returned. - -RESOLVER_NORMQUERY processing normal query + +RESOLVER_NEGATIVE_RETRIES negative number of retries (%1) specified in the configuration -The received query has passed all checks and is being processed by the resolver. +An error message indicating that the resolver configuration has specified a +negative retry count. Only zero or positive values are valid. 
- -RESOLVER_NOROOTADDR no root addresses available - -A warning message during startup, indicates that no root addresses have been -set. This may be because the resolver will get them from a priming query. - - - - -RESOLVER_NOTIN non-IN class request received, returning REFUSED message + +RESOLVER_NON_IN_PACKET non-IN class request received, returning REFUSED message A debug message, the resolver has received a DNS packet that was not IN class. The resolver cannot handle such packets, so is returning a REFUSED response to @@ -1850,8 +2538,24 @@ the sender. - -RESOLVER_NOTONEQUES query contained %1 questions, exactly one question was expected + +RESOLVER_NORMAL_QUERY processing normal query + +The received query has passed all checks and is being processed by the resolver. + + + + +RESOLVER_NOTIFY_RECEIVED NOTIFY arrived but server is not authoritative + +The resolver received a NOTIFY message. As the server is not authoritative it +cannot process it, so it returns an error message to the sender with the RCODE +set to NOTAUTH. + + + + +RESOLVER_NOT_ONE_QUESTION query contained %1 questions, exactly one question was expected A debug message, the resolver received a query that contained the number of entries in the question section detailed in the message. This is a malformed @@ -1860,17 +2564,16 @@ return a message to the sender with the RCODE set to FORMERR. - -RESOLVER_OPCODEUNS opcode %1 not supported by the resolver + +RESOLVER_NO_ROOT_ADDRESS no root addresses available -A debug message, the resolver received a message with an unsupported opcode -(it can only process QUERY opcodes). It will return a message to the sender -with the RCODE set to NOTIMP. +A warning message during startup, indicates that no root addresses have been +set. This may be because the resolver will get them from a priming query.
- -RESOLVER_PARSEERR error parsing received message: %1 - returning %2 + +RESOLVER_PARSE_ERROR error parsing received message: %1 - returning %2 A debug message noting that the resolver received a message and the parsing of the body of the message failed due to some non-protocol related reason @@ -1879,16 +2582,16 @@ a textual description of the problem and the RCODE returned. - -RESOLVER_PRINTMSG print message command, aeguments are: %1 + +RESOLVER_PRINT_COMMAND print message command, arguments are: %1 This message is logged when a "print_message" command is received over the command channel. - -RESOLVER_PROTERR protocol error parsing received message: %1 - returning %2 + +RESOLVER_PROTOCOL_ERROR protocol error parsing received message: %1 - returning %2 A debug message noting that the resolver received a message and the parsing of the body of the message failed due to some protocol error (although the @@ -1897,28 +2600,70 @@ description of the problem and the RCODE returned. - -RESOLVER_QUSETUP query setup + +RESOLVER_QUERY_ACCEPTED query accepted: '%1/%2/%3' from %4 + +A debug message that indicates an incoming query is accepted in terms of +the query ACL. The log message shows the query in the form of +<query name>/<query type>/<query class>, and the client that sends the +query in the form of <Source IP address>#<source port>. + + + + +RESOLVER_QUERY_DROPPED query dropped: '%1/%2/%3' from %4 + +An informational message that indicates an incoming query is dropped +in terms of the query ACL. Unlike the RESOLVER_QUERY_REJECTED +case, the server does not return any response. The log message +shows the query in the form of <query name>/<query type>/<query +class>, and the client that sends the query in the form of <Source +IP address>#<source port>. + + + + +RESOLVER_QUERY_REJECTED query rejected: '%1/%2/%3' from %4 + +An informational message that indicates an incoming query is rejected +in terms of the query ACL. 
This results in a response with an RCODE of +REFUSED. The log message shows the query in the form of <query +name>/<query type>/<query class>, and the client that sends the +query in the form of <Source IP address>#<source port>. + + + + +RESOLVER_QUERY_SETUP query setup A debug message noting that the resolver is creating a RecursiveQuery object. - -RESOLVER_QUSHUT query shutdown + +RESOLVER_QUERY_SHUTDOWN query shutdown A debug message noting that the resolver is destroying a RecursiveQuery object. - -RESOLVER_QUTMOSMALL query timeout of %1 is too small + +RESOLVER_QUERY_TIME_SMALL query timeout of %1 is too small An error indicating that the configuration value specified for the query timeout is too small. + +RESOLVER_RECEIVED_MESSAGE resolver has received a DNS message + +A debug message indicating that the resolver has received a message. Depending +on the debug settings, subsequent log output will indicate the nature of the +message. + + + RESOLVER_RECURSIVE running in recursive mode @@ -1927,43 +2672,18 @@ resolver is running in recursive mode. - -RESOLVER_RECVMSG resolver has received a DNS message - -A debug message indicating that the resolver has received a message. Depending -on the debug settings, subsequent log output will indicate the nature of the -message. - - - - -RESOLVER_RETRYNEG negative number of retries (%1) specified in the configuration - -An error message indicating that the resolver configuration has specified a -negative retry count. Only zero or positive values are valid. - - - - -RESOLVER_ROOTADDR setting root address %1(%2) - -This message may appear multiple times during startup; it lists the root -addresses used by the resolver. - - - - -RESOLVER_SERVICE service object created + +RESOLVER_SERVICE_CREATED service object created A debug message, output when the main service object (which handles the received queries) is created. 
- -RESOLVER_SETPARAM query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4 + +RESOLVER_SET_PARAMS query timeout: %1, client timeout: %2, lookup timeout: %3, retry count: %4 -A debug message, lists the parameters associated with the message. These are: +A debug message, lists the parameters being set for the resolver. These are: query timeout: the timeout (in ms) used for queries originated by the resolver to upstream servers. Client timeout: the interval to resolve a query by a client: after this time, the resolver sends back a SERVFAIL to the client @@ -1972,17 +2692,33 @@ resolver gives up trying to resolve a query. Retry count: the number of times the resolver will retry a query to an upstream server if it gets a timeout. The client and lookup timeouts require a bit more explanation. The -resolution of the clent query might require a large number of queries to +resolution of the client query might require a large number of queries to upstream nameservers. Even if none of these queries time out, the total time taken to perform all the queries may exceed the client timeout. When this happens, a SERVFAIL is returned to the client, but the resolver continues with the resolution process. Data received is added to the cache. However, -there comes a time - the lookup timeout - when even the resolve gives up. +there comes a time - the lookup timeout - when even the resolver gives up. At this point it will wait for pending upstream queries to complete or time out and drop the query. + +RESOLVER_SET_QUERY_ACL query ACL is configured + +A debug message that appears when a new query ACL is configured for the +resolver. + + + + +RESOLVER_SET_ROOT_ADDRESS setting root address %1(%2) + +This message may appear multiple times during startup; it lists the root +addresses used by the resolver. + + + RESOLVER_SHUTDOWN resolver shutdown complete @@ -2005,12 +2741,385 @@ An informational message, this is output when the resolver starts up.
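The interplay of the query, client, and lookup timeouts described under RESOLVER_SET_PARAMS above can be sketched as follows. This is a simplified illustrative model in Python, not the actual resolver implementation; the function and state names are made up for this sketch:

```python
# Simplified model of the client/lookup timeout interplay described above.
# Illustration only; this is not the BIND 10 resolver implementation.

def resolution_state(elapsed_ms, client_timeout_ms, lookup_timeout_ms):
    """Describe what has happened to a resolution after elapsed_ms.

    Before the client timeout, the client is still waiting for an answer.
    After it, a SERVFAIL has been sent to the client, but the resolver
    keeps working so the answer can still populate the cache. After the
    lookup timeout, the resolver gives up entirely and drops the query.
    """
    if elapsed_ms >= lookup_timeout_ms:
        return "abandoned"        # resolver gives up, query dropped
    if elapsed_ms >= client_timeout_ms:
        return "servfail-sent"    # client already got SERVFAIL; work continues
    return "in-progress"          # still trying to answer the client

# With, say, a 4-second client timeout and a 30-second lookup timeout:
print(resolution_state(1000, 4000, 30000))   # in-progress
print(resolution_state(5000, 4000, 30000))   # servfail-sent
print(resolution_state(31000, 4000, 30000))  # abandoned
```

Note how "servfail-sent" is not a terminal state for the resolver: the client has given up on the query, but the resolution continues so that the cache benefits.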
- -RESOLVER_UNEXRESP received unexpected response, ignoring + +RESOLVER_UNEXPECTED_RESPONSE received unexpected response, ignoring A debug message noting that the server has received a response instead of a query and is ignoring it. + + + +RESOLVER_UNSUPPORTED_OPCODE opcode %1 not supported by the resolver + +A debug message, the resolver received a message with an unsupported opcode +(it can only process QUERY opcodes). It will return a message to the sender +with the RCODE set to NOTIMP. + + + + +XFRIN_AXFR_DATABASE_FAILURE AXFR transfer of zone %1 failed: %2 + +The AXFR transfer for the given zone has failed due to a database problem. +The error is shown in the log message. + + + + +XFRIN_AXFR_INTERNAL_FAILURE AXFR transfer of zone %1 failed: %2 + +The AXFR transfer for the given zone has failed due to an internal +problem in the bind10 python wrapper library. +The error is shown in the log message. + + + + +XFRIN_AXFR_TRANSFER_FAILURE AXFR transfer of zone %1 failed: %2 + +The AXFR transfer for the given zone has failed due to a protocol error. +The error is shown in the log message. + + + + +XFRIN_AXFR_TRANSFER_STARTED AXFR transfer of zone %1 started + +A connection to the master server has been made, the serial value in +the SOA record has been checked, and a zone transfer has been started. + + + + +XFRIN_AXFR_TRANSFER_SUCCESS AXFR transfer of zone %1 succeeded + +The AXFR transfer of the given zone was successfully completed. + + + + +XFRIN_BAD_MASTER_ADDR_FORMAT bad format for master address: %1 + +The given master address is not a valid IP address. + + + + +XFRIN_BAD_MASTER_PORT_FORMAT bad format for master port: %1 + +The master port as read from the configuration is not a valid port number. + + + + +XFRIN_BAD_TSIG_KEY_STRING bad TSIG key string: %1 + +The TSIG key string as read from the configuration does not represent +a valid TSIG key. 
+ + + +XFRIN_BAD_ZONE_CLASS Invalid zone class: %1 + +The zone class as read from the configuration is not a valid DNS class. + + + + +XFRIN_CC_SESSION_ERROR error reading from cc channel: %1 + +There was a problem reading from the command and control channel. The +most likely cause is that the msgq daemon is not running. + + + + +XFRIN_COMMAND_ERROR error while executing command '%1': %2 + +There was an error while the given command was being processed. The +error is given in the log message. + + + + +XFRIN_CONNECT_MASTER error connecting to master at %1: %2 + +There was an error opening a connection to the master. The error is +shown in the log message. + + + + +XFRIN_IMPORT_DNS error importing python DNS module: %1 + +There was an error importing the python DNS module pydnspp. The most +likely cause is a PYTHONPATH problem. + + + + +XFRIN_MSGQ_SEND_ERROR error while contacting %1 and %2 + +There was a problem sending a message to the xfrout module or the +zone manager. This most likely means that the msgq daemon has quit or +was killed. + + + + +XFRIN_MSGQ_SEND_ERROR_ZONE_MANAGER error while contacting %1 + +There was a problem sending a message to the zone manager. This most +likely means that the msgq daemon has quit or was killed. + + + + +XFRIN_RETRANSFER_UNKNOWN_ZONE got notification to retransfer unknown zone %1 + +There was an internal command to retransfer the given zone, but the +zone is not known to the system. This may indicate that the configuration +for xfrin is incomplete, or there was a typographical error in the +zone name in the configuration. + + + + +XFRIN_STARTING starting xfrin with command line '%1' + +An informational message, this is output when the xfrin daemon starts up. + + + + +XFRIN_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the xfrin daemon. The +daemon will now shut down.
+ + + + +XFRIN_UNKNOWN_ERROR unknown error: %1 + +An uncaught exception was raised while running the xfrin daemon. The +exception message is printed in the log message. + + + + +XFROUT_AXFR_TRANSFER_DONE transfer of %1/%2 complete + +The transfer of the given zone has been completed successfully, or was +aborted due to a shutdown event. + + + + +XFROUT_AXFR_TRANSFER_ERROR error transferring zone %1/%2: %3 + +An uncaught exception was encountered while sending the response to +an AXFR query. The error message of the exception is included in the +log message, but this error most likely points to incomplete exception +handling in the code. + + + + +XFROUT_AXFR_TRANSFER_FAILED transfer of %1/%2 failed, rcode: %3 + +A transfer out for the given zone failed. An error response is sent +to the client. The given rcode is the rcode that is set in the error +response. This is either NOTAUTH (we are not authoritative for the +zone), SERVFAIL (our internal database is missing the SOA record for +the zone), or REFUSED (the limit of simultaneous outgoing AXFR +transfers, as specified by the configuration value +Xfrout/max_transfers_out, has been reached). + + + + +XFROUT_AXFR_TRANSFER_STARTED transfer of zone %1/%2 has started + +A transfer out of the given zone has started. + + + + +XFROUT_BAD_TSIG_KEY_STRING bad TSIG key string: %1 + +The TSIG key string as read from the configuration does not represent +a valid TSIG key. + + + + +XFROUT_CC_SESSION_ERROR error reading from cc channel: %1 + +There was a problem reading from the command and control channel. The +most likely cause is that the msgq daemon is not running. + + + + +XFROUT_CC_SESSION_TIMEOUT_ERROR timeout waiting for cc response + +There was a problem reading a response from another module over the +command and control channel. The most likely cause is that the +configuration manager b10-cfgmgr is not running.
+ + + + +XFROUT_FETCH_REQUEST_ERROR socket error while fetching a request from the auth daemon + +There was a socket error while contacting the b10-auth daemon to +fetch a transfer request. The auth daemon may have shut down. + + + + +XFROUT_HANDLE_QUERY_ERROR error while handling query: %1 + +There was a general error handling an xfrout query. The error is shown +in the message. In principle this error should not appear, and points +to an oversight in catching exceptions in the right place. However, to +ensure the daemon keeps running, this error is caught and reported. + + + + +XFROUT_IMPORT error importing python module: %1 + +There was an error importing a python module. One of the modules needed +by xfrout could not be found. This suggests that either some libraries +are missing on the system, or the PYTHONPATH variable is not correct. +The specific place where this library needs to be depends on your +system and your specific installation. + + + + +XFROUT_NEW_CONFIG Update xfrout configuration + +New configuration settings have been sent from the configuration +manager. The xfrout daemon will now apply them. + + + + +XFROUT_NEW_CONFIG_DONE Update xfrout configuration done + +The xfrout daemon is now done reading the new configuration settings +received from the configuration manager. + + + + +XFROUT_NOTIFY_COMMAND received command to send notifies for %1/%2 + +The xfrout daemon received a command on the command channel that +NOTIFY packets should be sent for the given zone. + + + + +XFROUT_PARSE_QUERY_ERROR error parsing query: %1 + +There was a parse error while reading an incoming query. The parse +error is shown in the log message. A remote client sent a packet we +do not understand or support. The xfrout request will be ignored. +In general, this should only occur for unexpected problems like +memory allocation failures, as the query should already have been +parsed by the b10-auth daemon, before it was passed here.
+ + + + +XFROUT_PROCESS_REQUEST_ERROR error processing transfer request: %2 + +There was an error processing a transfer request. The error is included +in the log message, but at this point no specific information other +than that could be given. This points to incomplete exception handling +in the code. + + + + +XFROUT_RECEIVED_SHUTDOWN_COMMAND shutdown command received + +The xfrout daemon received a shutdown command from the command channel +and will now shut down. + + + + +XFROUT_RECEIVE_FILE_DESCRIPTOR_ERROR error receiving the file descriptor for an XFR connection + +There was an error receiving the file descriptor for the transfer +request. Normally, the request is received by b10-auth, and passed on +to the xfrout daemon, so it can answer directly. However, there was a +problem receiving this file descriptor. The request will be ignored. + + + + +XFROUT_REMOVE_OLD_UNIX_SOCKET_FILE_ERROR error removing unix socket file %1: %2 + +The unix socket file xfrout needs for contact with the auth daemon +already exists, and needs to be removed first, but there is a problem +removing it. It is likely that we do not have permission to remove +this file. The specific error is shown in the log message. The xfrout +daemon will shut down. + + + + +XFROUT_REMOVE_UNIX_SOCKET_FILE_ERROR error clearing unix socket file %1: %2 + +When shutting down, the xfrout daemon tried to clear the unix socket +file used for communication with the auth daemon. It failed to remove +the file. The reason for the failure is given in the error message. + + + + +XFROUT_SOCKET_SELECT_ERROR error while calling select() on request socket: %1 + +There was an error while calling select() on the socket that informs +the xfrout daemon that a new xfrout request has arrived. This should +be a result of a rare local error such as a memory allocation failure and +shouldn't happen under normal conditions. The error is included in the +log message.
+ + + + +XFROUT_STOPPED_BY_KEYBOARD keyboard interrupt, shutting down + +There was a keyboard interrupt signal to stop the xfrout daemon. The +daemon will now shut down. + + + + +XFROUT_STOPPING the xfrout daemon is shutting down + +The current transfer is aborted, as the xfrout daemon is shutting down. + + + + +XFROUT_UNIX_SOCKET_FILE_IN_USE another xfrout process seems to be using the unix socket file %1 + +While starting up, the xfrout daemon tried to clear the unix domain +socket needed for contacting the b10-auth daemon to pass requests +on, but the file is in use. The most likely cause is that another +xfrout daemon process is still running. This xfrout daemon (the one +printing this message) will not start. + From 525d9602da83a5d8ddbfc9ebda282209aa743a70 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 5 Jul 2011 10:58:11 -0500 Subject: [PATCH 009/175] [jreed-docs-2] add some TODO comments some docs need to be written --- src/bin/xfrin/b10-xfrin.xml | 1 + src/bin/xfrout/b10-xfrout.xml | 8 ++++++++ 2 files changed, 9 insertions(+) diff --git a/src/bin/xfrin/b10-xfrin.xml b/src/bin/xfrin/b10-xfrin.xml index ea4c724d23..71fcf931ca 100644 --- a/src/bin/xfrin/b10-xfrin.xml +++ b/src/bin/xfrin/b10-xfrin.xml @@ -103,6 +103,7 @@ in separate zonemgr process. b10-xfrin daemon. The list items are: name (the zone name), + master_addr (the zone master to transfer from), master_port (defaults to 53), and tsig_key (optional TSIG key to use). diff --git a/src/bin/xfrout/b10-xfrout.xml b/src/bin/xfrout/b10-xfrout.xml index ad71fe2bf7..9889b8058e 100644 --- a/src/bin/xfrout/b10-xfrout.xml +++ b/src/bin/xfrout/b10-xfrout.xml @@ -134,6 +134,14 @@ data storage types. 
+ + + The configuration commands are: From e05a3418c9d6b3f70cdb387d1f30d8ba59733f02 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Tue, 26 Jul 2011 15:51:13 +0200 Subject: [PATCH 010/175] [trac801] Creator API --- src/bin/bind10/creatorapi.txt | 80 +++++++++++++++++++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100644 src/bin/bind10/creatorapi.txt diff --git a/src/bin/bind10/creatorapi.txt b/src/bin/bind10/creatorapi.txt new file mode 100644 index 0000000000..a55099e01b --- /dev/null +++ b/src/bin/bind10/creatorapi.txt @@ -0,0 +1,80 @@ +Socket creator API +================== + +This API is between Boss and other modules to allow them to request sockets. +For simplicity, we will use the socket creator for all (even non-privileged) +ports for now, but we should have some function where we can abstract it later. + +Goals +----- +* Be able to request a socket of any combination of IPv4/IPv6 and UDP/TCP bound to a given + port and address (sockets that are not bound to anything can be created + without privileges, therefore are not requested from the socket creator). +* Allow providing the same socket to multiple modules (eg. multiple running + auth servers). +* Allow releasing the sockets (in case all modules using it give it up, + terminate or crash). +* Allow restricting the sharing (don't allow a shared socket between auth + and recursive, as the packets would often get to the wrong application; + show an error instead). +* Get the socket to the application. + +Transport of sockets +-------------------- +It seems we are stuck with the current msgq for a while and there's a chance the +new replacement will not be able to send sockets inbound. So, we need another +channel. + +The boss will create a unix-domain socket and listen on it. When something +requests a socket over the command channel and the socket is created, some kind +of token is returned to the application (which will represent the future +socket).
The application then connects to the unix-domain socket, sends the +token over the connection (so Boss will know which socket to send there, in case +multiple applications ask for sockets simultaneously) and Boss sends the socket +in return. + +Caching of sockets +------------------ +To allow sending the same socket to multiple applications, the Boss process will +hold a cache. Each socket that is created and sent is kept open in Boss and +preserved there as well. A reference count is kept with each of them. + +When another application asks for the same socket, it is simply sent from the +cache instead of being created again by the creator. + +When an application gives the socket up willingly (by sending a message over the +command channel), the reference count can be decreased without problems. But +when the application terminates or crashes, we need to decrease it as well. +There's a problem, since we don't know which command channel connection (eg. +lname) belongs to which PID. Furthermore, the applications don't need to be +started by boss. + +There are two possibilities: +* Let the msgq send messages about disconnected clients (eg. group message to + some name). This one is better if we want to migrate to dbus, since dbus + already has this capability as well as sending the sockets inbound (at least it + seems so on unix) and we could get rid of the unix-domain socket completely. +* Keep the unix-domain connections open forever. Boss can remember which socket + was sent to which connection and when the connection closes (because the + application crashed), it can drop all the references on the sockets. This + seems easier to implement. + +The commands +------------ +* Command to release a socket. This one would have a single parameter, the token + used to get the socket. After this, boss would decrease its reference count + and if it drops to zero, close its own copy of the socket. This should be used + when the module stops using the socket (and after it closes it).
+* Command to request a socket. It would have parameters to specify which socket + (IP address, address family, port) and how to allow sharing. Sharing would be + one of: + - None + - Same kind of application + - Any kind of application + And a kind of application would be provided, to decide if the sharing is + possible (eg. if auth allows sharing only with the same kind and something else + allows sharing with anything, those two cannot share a socket, but two auths can). + + It would return either an error (the socket can't be created or sharing is not + possible) or the token. Then there would be some time for the application to + pick up the requested socket. From c6ef5865b3fd8e5d5fb8c891467b3722fde4d685 Mon Sep 17 00:00:00 2001 From: reed Date: Tue, 26 Jul 2011 17:04:33 -0500 Subject: [PATCH 011/175] trac1011: a TODO to research for logging docs --- doc/guide/bind10-guide.xml | 1 + 1 file changed, 1 insertion(+) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 6a4218207a..f297223296 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1498,6 +1498,7 @@ then change those defaults with config set Resolver/forward_addresses[0]/address 2011-06-15 13:48:22.034 + The date and time at which the message was generated. From ba7bc1e14fcf1a223a9a42ede2e9cd7d290c8b61 Mon Sep 17 00:00:00 2001 From: reed Date: Tue, 26 Jul 2011 17:06:50 -0500 Subject: [PATCH 012/175] trac1011: add Logging configuration docs This is a copy and paste from http://bind10.isc.org/wiki/LoggingConfigurationGuide No formatting or cleanup or XML-ization yet.
--- doc/guide/bind10-guide.xml | 180 +++++++++++++++++++++++++++++++++++++ 1 file changed, 180 insertions(+) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index f297223296..22515c05fb 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1551,6 +1551,186 @@ + +Logging configuration + +The logging system in BIND 10 is configured through the Logging module. All BIND 10 modules will look at the configuration in Logging to see what should be logged and to where. +Loggers + +Within BIND 10, a message is logged through a component called a "logger". Different parts of BIND 10 log messages through different loggers, and each logger can be configured independently of one another. + +In the Logging module, you can specify the configuration for zero or more loggers; any that are not specified will take appropriate default values. + +The three most important elements of a logger configuration are the name (the component that is generating the messages), the severity (what to log), and the output_options (where to log). +name (string) + +Each logger in the system has a name, the name being that of the component using it to log messages. For instance, if you want to configure logging for the resolver module, you add an entry for a logger named 'Resolver'. This configuration will then be used by the loggers in the Resolver module, and all the libraries used by it. + +If you want to specify logging for one specific library within the module, you set the name to 'module.library'. For example, the logger used by the nameserver address store component has the full name of 'Resolver.nsas'. If there is no entry in Logging for a particular library, it will use the configuration given for the module.
+ +To illustrate this, suppose you want the cache library to log messages of severity DEBUG, and the rest of the resolver code to log messages of severity INFO. To achieve this you specify two loggers, one with the name 'Resolver' and severity INFO, and one with the name 'Resolver.cache' with severity DEBUG. As there are no entries for other libraries (e.g. the nsas), they will use the configuration for the module ('Resolver'), so giving the desired behavior. + +One special case is that of a module name of '*', which is interpreted as 'any module'. You can set global logging options by using this, including setting the logging configuration for a library that is used by multiple modules (e.g. '*.config' specifies the configuration library code in whatever module is using it). + +If there are multiple logger specifications in the configuration that might match a particular logger, the specification with the more specific logger name takes precedence. For example, if there are entries for both '*' and 'Resolver', the resolver module - and all libraries it uses - will log messages according to the configuration in the second entry ('Resolver'). All other modules will use the configuration of the first entry ('*'). If there was also a configuration entry for 'Resolver.cache', the cache library within the resolver would use that in preference to the entry for 'Resolver'. + +One final note about the naming. When specifying the module name within a logger, use the name of the module as specified in bindctl, e.g. 'Resolver' for the resolver module, 'Xfrout' for the xfrout module etc. When the message is logged, the message will include the name of the logger generating the message, but with the module name replaced by the name of the process implementing the module (so for example, a message generated by the 'Auth.cache' logger will appear in the output with a logger name of 'b10-auth.cache'). +severity (string) + +This specifies the category of messages logged.
+ +Each message is logged with an associated severity which may be one of the following (in descending order of severity): + + FATAL + ERROR + WARN + INFO + DEBUG + +When the severity of a logger is set to one of these values, it will only log messages of that severity, and the severities above it. The severity may also be set to NONE, in which case all messages from that logger are inhibited. +output_options (list) + +Each logger can have zero or more output_options. These specify where log messages are sent. These are explained in detail below. + +The other options for a logger are: +debuglevel (integer) + +When a logger's severity is set to DEBUG, this value specifies what debug messages should be printed. It ranges from 0 (least verbose) to 99 (most verbose). The general classification of debug message types is + +TODO; there's a ticket to determine these levels, see #1074 + +If severity for the logger is not DEBUG, this value is ignored. +additive (true or false) + +If this is true, the output_options from the parent will be used. For example, if there are two loggers configured: 'Resolver' and 'Resolver.cache', and additive is true in the second, it will write the log messages not only to the destinations specified for 'Resolver.cache', but also to the destinations as specified in the output_options in the logger named 'Resolver'. + +TODO: check this +Output Options + +The main settings for an output option are the 'destination' and a value called 'output', the meaning of which depends on the destination that is set. +destination (string) + +The destination is the type of output. It can be one of: + + * console + * file + * syslog + +output (string) + +Depending on what is set as the output destination, this value is interpreted as follows: + + * destination is 'console' + 'output' must be one of 'stdout' (messages printed to standard output) or 'stderr' (messages printed to standard error).
+ 
+    * destination is 'file' 
+      The value of output is interpreted as a file name; log messages will be appended to this file. 
+ 
+    * destination is 'syslog' 
+      The value of output is interpreted as the syslog facility (e.g. 'local0') that should be used for log messages. 
+ 
+The other options for output_options are: 
+flush (true or false) ¶ 
+ 
+Flush buffers after each log message. Doing this will reduce performance but will ensure that if the program terminates abnormally, all messages up to the point of termination are output. 
+maxsize (integer) ¶ 
+ 
+Only relevant when destination is 'file', this is the maximum file size of output files in bytes. When the maximum size is reached, the file is renamed (a ".1" is appended to the name - if a ".1" file exists, it is renamed ".2" etc.) and a new file opened. 
+ 
+If this is 0, no maximum file size is used. 
+maxver (integer) ¶ 
+ 
+Maximum number of old log files to keep around when rolling the output file. Only relevant when destination is 'file'. 
+Example session ¶ 
+ 
+In this example we want to set the global logging to write to the file /var/log/bind10.log, at severity WARN. We want the authoritative server to log at DEBUG with debuglevel 40, to a different file (/tmp/auth_debug.log). 
+ 
+Start bindctl 
+ 
+["login success "] 
+> config show Logging 
+Logging/loggers	[]	list 
+ 
+By default, no specific loggers are configured, in which case the severity defaults to INFO and the output is written to stderr.
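To make the name-matching and severity rules described above concrete, here is a small Python sketch. The helper names (`matching_spec`, `should_log`) and the dictionary layout are illustrative assumptions only; this is not the BIND 10 implementation or API, just the rules as the text states them: the most specific configured logger name wins, and a logger emits messages at its configured severity and above.

```python
# Sketch of the logger-matching and severity rules described in the text.
# All names here are hypothetical; this is not BIND 10 code.

SEVERITIES = ["DEBUG", "INFO", "WARN", "ERROR", "FATAL"]  # ascending order

def matching_spec(logger_name, specs):
    """Return the configured spec with the most specific matching name.

    specs maps configured logger names (e.g. '*', 'Resolver',
    'Resolver.cache') to their settings.
    """
    # Try the full name first, then the bare module name, then '*'.
    for name in (logger_name, logger_name.split(".")[0], "*"):
        if name in specs:
            return specs[name]
    return {"severity": "INFO"}  # default when nothing is configured

def should_log(message_severity, spec):
    """A logger emits messages of its configured severity and above."""
    if spec["severity"] == "NONE":
        return False
    return SEVERITIES.index(message_severity) >= SEVERITIES.index(spec["severity"])

specs = {"Resolver": {"severity": "INFO"},
         "Resolver.cache": {"severity": "DEBUG"}}

# The cache library logs at DEBUG; the rest of the resolver at INFO.
assert should_log("DEBUG", matching_spec("Resolver.cache", specs))
assert not should_log("DEBUG", matching_spec("Resolver.nsas", specs))
assert should_log("INFO", matching_spec("Resolver.nsas", specs))
```

This mirrors the worked example in the text: 'Resolver.cache' gets its own DEBUG entry, while 'Resolver.nsas' falls back to the module-level 'Resolver' entry.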
+ 
+Let's first add a default logger: 
+ 
+> config add Logging/loggers 
+> config show Logging 
+Logging/loggers/	list	(modified) 
+ 
+The loggers value line changed to indicate that it is no longer an empty list: 
+ 
+> config show Logging/loggers 
+Logging/loggers[0]/name	""	string	(default) 
+Logging/loggers[0]/severity	"INFO"	string	(default) 
+Logging/loggers[0]/debuglevel	0	integer	(default) 
+Logging/loggers[0]/additive	false	boolean	(default) 
+Logging/loggers[0]/output_options	[]	list	(default) 
+ 
+The name is mandatory, so we must set it. We will also change the severity. Let's start with the global logger. 
+ 
+> config set Logging/loggers[0]/name * 
+> config set Logging/loggers[0]/severity WARN 
+> config show Logging/loggers 
+Logging/loggers[0]/name	"*"	string	(modified) 
+Logging/loggers[0]/severity	"WARN"	string	(modified) 
+Logging/loggers[0]/debuglevel	0	integer	(default) 
+Logging/loggers[0]/additive	false	boolean	(default) 
+Logging/loggers[0]/output_options	[]	list	(default) 
+ 
+Of course, we need to specify where we want the log messages to go, so we add an entry for an output option. 
+ 
+> config add Logging/loggers[0]/output_options 
+> config show Logging/loggers[0]/output_options 
+Logging/loggers[0]/output_options[0]/destination	"console"	string	(default) 
+Logging/loggers[0]/output_options[0]/output	"stdout"	string	(default) 
+Logging/loggers[0]/output_options[0]/flush	false	boolean	(default) 
+Logging/loggers[0]/output_options[0]/maxsize	0	integer	(default) 
+Logging/loggers[0]/output_options[0]/maxver	0	integer	(default) 
+ 
+These aren't the values we are looking for.
+ +> config set Logging/loggers[0]/output_options[0]/destination file +> config set Logging/loggers[0]/output_options[0]/output /var/log/bind10.log +> config set Logging/loggers[0]/output_options[0]/maxsize 30000 +> config set Logging/loggers[0]/output_options[0]/maxver 8 + +Which would make the entire configuration for this logger look like: + +> config show all Logging/loggers +Logging/loggers[0]/name "*" string (modified) +Logging/loggers[0]/severity "WARN" string (modified) +Logging/loggers[0]/debuglevel 0 integer (default) +Logging/loggers[0]/additive false boolean (default) +Logging/loggers[0]/output_options[0]/destination "file" string (modified) +Logging/loggers[0]/output_options[0]/output "/var/log/bind10.log" string (modified) +Logging/loggers[0]/output_options[0]/flush false boolean (default) +Logging/loggers[0]/output_options[0]/maxsize 30000 integer (modified) +Logging/loggers[0]/output_options[0]/maxver 8 integer (modified) + +That looks OK, so let's commit it before we add the configuration for the authoritative server's logger. + +> config commit + +Now that we have set it, and checked each value along the way, adding a second entry is quite similar. + +> config add Logging/loggers +> config set Logging/loggers[1]/name Auth +> config set Logging/loggers[1]/severity DEBUG +> config set Logging/loggers[1]/debuglevel 40 +> config add Logging/loggers[1]/output_options +> config set Logging/loggers[1]/output_options[0]/destination file +> config set Logging/loggers[1]/output_options[0]/output /tmp/auth_debug.log +> config commit + +And that's it. Once we have found whatever it was we needed the debug messages for, we can simply remove the second logger to let the authoritative server use the same settings as the rest. + +> config remove Logging/loggers[1] +> config commit + +And every module will now be using the values from the logger named '*'. 
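The session above sets maxsize and maxver for the '*' logger; the rename-on-rollover scheme those options describe (a ".1" is appended, an existing ".1" becomes ".2", and so on, keeping at most maxver old files) can be sketched as follows. The `roll` helper is a hypothetical illustration of that description, not BIND 10 code.

```python
# Illustrative sketch of the maxsize/maxver rotation described in the
# text. Hypothetical helper, not the BIND 10 implementation.
import os

def roll(path, maxver):
    # Shift existing old files up one version: path.(maxver-1) becomes
    # path.maxver (clobbering the oldest), path.1 becomes path.2, etc.
    for ver in range(maxver, 1, -1):
        older = "%s.%d" % (path, ver - 1)
        if os.path.exists(older):
            os.replace(older, "%s.%d" % (path, ver))
    # The current file becomes path.1; the caller then reopens `path`
    # as a fresh, empty log file.
    if os.path.exists(path):
        os.replace(path, path + ".1")
```

With maxver 8, as in the session, this keeps bind10.log.1 through bind10.log.8 before the oldest copy starts being overwritten.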
+ + From 7b0201a4f98ee1b1288ae3b074cd1007707b6b21 Mon Sep 17 00:00:00 2001 From: reed Date: Tue, 26 Jul 2011 17:57:53 -0500 Subject: [PATCH 013/175] trac1011: some XML formatting Add some docbook formatting. This is not complete. --- doc/guide/bind10-guide.xml | 441 +++++++++++++++++++++++++++++++++---- 1 file changed, 397 insertions(+), 44 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 22515c05fb..98070c7450 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1550,34 +1550,110 @@ then change those defaults with config set Resolver/forward_addresses[0]/address + -Logging configuration ¶ +
+ Logging configuration -The logging system in BIND 10 is configured through the Logging module. All BIND 10 modules will look at the configuration in Logging to see what should be logged and to where. -Loggers ¶ + -Within BIND 10, a message is logged through a component called a "logger". Different parts of BIND 10 log messages through different loggers, and each logger can be configured independently of one another. + The logging system in BIND 10 is configured through the + Logging module. All BIND 10 modules will look at the + configuration in Logging to see what should be logged and + to where. + + + + + +
+ Loggers + + + + Within BIND 10, a message is logged through a component + called a "logger". Different parts of BIND 10 log messages + through different loggers, and each logger can be configured + independently of one another. + + + + + + In the Logging module, you can specify the configuration + for zero or more loggers; any that are not specified will + take appropriate default values.. + + + + -In the Logging module, you can specify the configuration for zero or more loggers; any that are not specified will take appropriate default values.. The three most important elements of a logger configuration are the name (the component that is generating the messages), the severity (what to log), and the output_options (where to log). -name (string) ¶ + + + + + + +name (string) Each logger in the system has a name, the name being that of the component using it to log messages. For instance, if you want to configure logging for the resolver module, you add an entry for a logger named 'Resolver'. This configuration will then be used by the loggers in the Resolver module, and all the libraries used by it. + + + + + If you want to specify logging for one specific library within the module, you set the name to 'module.library'. For example, the logger used by the nameserver address store component has the full name of 'Resolver.nsas'. If there is no entry in Logging for a particular library, it will use the configuration given for the module. + + + + + To illustrate this, suppose you want the cache library to log messages of severity DEBUG, and the rest of the resolver code to log messages of severity INFO. To achieve this you specify two loggers, one with the name 'Resolver' and severity INFO, and one with the name 'Resolver.cache' with severity DEBUG. As there are no entries for other libraries (e.g. the nsas), they will use the configuration for the module ('Resolver'), so giving the desired behavior. 
+ + + + + One special case is that of a module name of '*', which is interpreted as 'any module'. You can set global logging options by using this, including setting the logging configuration for a library that is used by multiple modules (e.g. '*.config" specifies the configuration library code in whatever module is using it). + + + + + If there are multiple logger specifications in the configuration that might match a particular logger, the specification with the more specific logger name takes precedence. For example, if there are entries for for both '*' and 'Resolver', the resolver module - and all libraries it uses - will log messages according to the configuration in the second entry ('Resolver'). All other modules will use the configuration of the first entry ('*'). If there was also a configuration entry for 'Resolver.cache', the cache library within the resolver would use that in preference to the entry for 'Resolver'. + + + + + One final note about the naming. When specifying the module name within a logger, use the name of the module as specified in bindctl, e.g. 'Resolver' for the resolver module, 'Xfrout' for the xfrout module etc. When the message is logged, the message will include the name of the logger generating the message, but with the module name replaced by the name of the process implementing the module (so for example, a message generated by the 'Auth.cache' logger will appear in the output with a logger name of 'b10-auth.cache'). -severity (string) ¶ + + + + + + +severity (string) + + + + + This specifies the category of messages logged. + + + + + Each message is logged with an associated severity which may be one of the following (in descending order of severity): FATAL @@ -1586,118 +1662,355 @@ Each message is logged with an associated severity which may be one of the follo INFO DEBUG + + + + + When the severity of a logger is set to one of these values, it will only log messages of that severity, and the severities below it. 
The severity may also be set to NONE, in which case all messages from that logger are inhibited. -output_options (list) ¶ + + + + + + +output_options (list) + + + + + Each logger can have zero or more output_options. These specify where log messages are sent to. These are explained in detail below. + + + + + The other options for a logger are: -debuglevel (integer) ¶ + + + + + + +debuglevel (integer) + + + + + When a logger's severity is set to DEBUG, this value specifies what debug messages should be printed. It ranges from 0 (least verbose) to 99 (most verbose). The general classification of debug message types is + + + + + TODO; there's a ticket to determine these levels, see #1074 + + + + + If severity for the logger is not DEBUG, this value is ignored. -additive (true or false) ¶ + + + + + + +additive (true or false) + + + + + If this is true, the output_options from the parent will be used. For example, if there are two loggers configured; 'Resolver' and 'Resolver.cache', and additive is true in the second, it will write the log messages not only to the destinations specified for 'Resolver.cache', but also to the destinations as specified in the output_options in the logger named Resolver'. + + + + + TODO: check this -Output Options ¶ + + + +
+ +
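The additive flag described above is one the text itself flags with a TODO, so treat the following as a speculative reading rather than the actual implementation: with additive true, a child logger's messages are also written to its parent's output_options. A small Python sketch of that reading, with hypothetical names:

```python
# Speculative sketch of the 'additive' behavior described in the text
# (the guide marks it TODO). Names and layout are hypothetical.

def destinations(logger_name, specs):
    """Collect the output_options a message from logger_name reaches."""
    spec = specs.get(logger_name)
    if spec is None:
        return []
    outputs = list(spec.get("output_options", []))
    # With additive set, also deliver to the parent logger's outputs.
    if spec.get("additive") and "." in logger_name:
        parent = logger_name.rsplit(".", 1)[0]
        outputs += destinations(parent, specs)
    return outputs

specs = {
    "Resolver": {"output_options": ["file:/var/log/resolver.log"]},
    "Resolver.cache": {"output_options": ["stderr"], "additive": True},
}
# 'Resolver.cache' messages go to its own destination and to Resolver's.
assert destinations("Resolver.cache", specs) == [
    "stderr", "file:/var/log/resolver.log"]
```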
+ Output Options + + The main settings for an output option are the 'destination' and a value called 'output', the meaning of which depends on the destination that is set. -destination (string) ¶ + + + + + + +destination (string) + + + + + The destination is the type of output. It can be one of: + + + + + * console * file * syslog -output (string) ¶ + + + + + +output (string) + + + + + Depending on what is set as the output destination, this value is interpreted as follows: + + + + + * destination is 'console' 'output' must be one of 'stdout' (messages printed to standard output) or 'stderr' (messages printed to standard error). + + + + + * destination is 'file' The value of output is interpreted as a file name; log messages will be appended to this file. + + + + + * destination is 'syslog' The value of output is interpreted as the syslog facility (e.g. 'local0') that should be used for log messages. + + + + + The other options for output_options are: -flush (true of false) ¶ + + + + + + +flush (true of false) + + + + + Flush buffers after each log message. Doing this will reduce performance but will ensure that if the program terminates abnormally, all messages up to the point of termination are output. -maxsize (integer) ¶ + + + + + + +maxsize (integer) + + + + + Only relevant when destination is file, this is maximum file size of output files in bytes. When the maximum size is reached, the file is renamed (a ".1" is appended to the name - if a ".1" file exists, it is renamed ".2" etc.) and a new file opened. + + + + + If this is 0, no maximum file size is used. -maxver (integer) ¶ + + + + + + +maxver (integer) + + + + + Maximum number of old log files to keep around when rolling the output file. Only relevant when destination if 'file'. -Example session ¶ + + + +
+ +
+ Example session + + In this example we want to set the global logging to write to the file /var/log/my_bind10.log, at severity WARN. We want the authoritative server to log at DEBUG with debuglevel 40, to a different file (/tmp/debug_messages). + + + + + Start bindctl -["login success "] -> config show Logging + + + + + + ["login success "] +> config show Logging Logging/loggers [] list + + + + + + By default, no specific loggers are configured, in which case the severity defaults to INFO and the output is written to stderr. + + + + + Let's first add a default logger; -> config add Logging/loggers -> config show Logging + + + + + + + > config add Logging/loggers +> config show Logging Logging/loggers/ list (modified) + + + + + + The loggers value line changed to indicate that it is no longer an empty list; -> config show Logging/loggers + + + + + > config show Logging/loggers Logging/loggers[0]/name "" string (default) Logging/loggers[0]/severity "INFO" string (default) Logging/loggers[0]/debuglevel 0 integer (default) Logging/loggers[0]/additive false boolean (default) Logging/loggers[0]/output_options [] list (default) + + + + + + The name is mandatory, so we must set it. We will also change the severity as well. Let's start with the global logger. -> config set Logging/loggers[0]/name * -> config set Logging/loggers[0]/severity WARN -> config show Logging/loggers + + + + + + > config set Logging/loggers[0]/name * +> config set Logging/loggers[0]/severity WARN +> config show Logging/loggers Logging/loggers[0]/name "*" string (modified) Logging/loggers[0]/severity "WARN" string (modified) Logging/loggers[0]/debuglevel 0 integer (default) Logging/loggers[0]/additive false boolean (default) Logging/loggers[0]/output_options [] list (default) + + + + + + Of course, we need to specify where we want the log messages to go, so we add an entry for an output option. 
-> config add Logging/loggers[0]/output_options -> config show Logging/loggers[0]/output_options + + + + + + > config add Logging/loggers[0]/output_options +> config show Logging/loggers[0]/output_options Logging/loggers[0]/output_options[0]/destination "console" string (default) Logging/loggers[0]/output_options[0]/output "stdout" string (default) Logging/loggers[0]/output_options[0]/flush false boolean (default) Logging/loggers[0]/output_options[0]/maxsize 0 integer (default) Logging/loggers[0]/output_options[0]/maxver 0 integer (default) + + + + + + These aren't the values we are looking for. -> config set Logging/loggers[0]/output_options[0]/destination file -> config set Logging/loggers[0]/output_options[0]/output /var/log/bind10.log -> config set Logging/loggers[0]/output_options[0]/maxsize 30000 -> config set Logging/loggers[0]/output_options[0]/maxver 8 + + + + + + > config set Logging/loggers[0]/output_options[0]/destination file +> config set Logging/loggers[0]/output_options[0]/output /var/log/bind10.log +> config set Logging/loggers[0]/output_options[0]/maxsize 30000 +> config set Logging/loggers[0]/output_options[0]/maxver 8 + + + + + Which would make the entire configuration for this logger look like: -> config show all Logging/loggers + + + + + > config show all Logging/loggers Logging/loggers[0]/name "*" string (modified) Logging/loggers[0]/severity "WARN" string (modified) Logging/loggers[0]/debuglevel 0 integer (default) @@ -1707,31 +2020,71 @@ Logging/loggers[0]/output_options[0]/output "/var/log/bind10.log" string (modifi Logging/loggers[0]/output_options[0]/flush false boolean (default) Logging/loggers[0]/output_options[0]/maxsize 30000 integer (modified) Logging/loggers[0]/output_options[0]/maxver 8 integer (modified) + + + + + That looks OK, so let's commit it before we add the configuration for the authoritative server's logger. 
-> config commit + + + + + + > config commit + + + + + Now that we have set it, and checked each value along the way, adding a second entry is quite similar. -> config add Logging/loggers -> config set Logging/loggers[1]/name Auth -> config set Logging/loggers[1]/severity DEBUG -> config set Logging/loggers[1]/debuglevel 40 -> config add Logging/loggers[1]/output_options -> config set Logging/loggers[1]/output_options[0]/destination file -> config set Logging/loggers[1]/output_options[0]/output /tmp/auth_debug.log -> config commit + + + + + + > config add Logging/loggers +> config set Logging/loggers[1]/name Auth +> config set Logging/loggers[1]/severity DEBUG +> config set Logging/loggers[1]/debuglevel 40 +> config add Logging/loggers[1]/output_options +> config set Logging/loggers[1]/output_options[0]/destination file +> config set Logging/loggers[1]/output_options[0]/output /tmp/auth_debug.log +> config commit + + + + + + And that's it. Once we have found whatever it was we needed the debug messages for, we can simply remove the second logger to let the authoritative server use the same settings as the rest. -> config remove Logging/loggers[1] -> config commit + + + + + + > config remove Logging/loggers[1] +> config commit + + + + + And every module will now be using the values from the logger named '*'. + + +
+ +
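As a rough illustration (this is not an official BIND 10 serialization format, just the session's values laid out as a Python data structure), the configuration committed in the example session above corresponds to data like this:

```python
# The logging configuration built up in the bindctl session, expressed
# as plain data. Structure is illustrative, not a BIND 10 format.
logging_config = {
    "loggers": [
        {"name": "*",                 # global logger
         "severity": "WARN",
         "debuglevel": 0,
         "additive": False,
         "output_options": [
             {"destination": "file",
              "output": "/var/log/bind10.log",
              "flush": False,
              "maxsize": 30000,
              "maxver": 8}]},
        {"name": "Auth",              # authoritative server, debug run
         "severity": "DEBUG",
         "debuglevel": 40,
         "additive": False,
         "output_options": [
             {"destination": "file",
              "output": "/tmp/auth_debug.log",
              "flush": False,
              "maxsize": 0,
              "maxver": 0}]},
    ]
}
```

Removing the second list entry, as the session does at the end, leaves every module using the '*' logger's settings again.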
- From aa9497f4d2346e7a18cd07b9bf31dfb5832031bc Mon Sep 17 00:00:00 2001 From: reed Date: Tue, 26 Jul 2011 18:17:54 -0500 Subject: [PATCH 014/175] trac1011: more logging doc formatting --- doc/guide/bind10-guide.xml | 257 ++++++++++++++++++++++--------------- 1 file changed, 151 insertions(+), 106 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 98070c7450..4643bfb70e 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1588,53 +1588,98 @@ then change those defaults with config set Resolver/forward_addresses[0]/address - -The three most important elements of a logger configuration are the name (the component that is generating the messages), the severity (what to log), and the output_options (where to log). + The three most important elements of a logger configuration + are the name (the component that is generating the + messages), the severity (what to log), and the output_options + (where to log). - name (string) -Each logger in the system has a name, the name being that of the component using it to log messages. For instance, if you want to configure logging for the resolver module, you add an entry for a logger named 'Resolver'. This configuration will then be used by the loggers in the Resolver module, and all the libraries used by it. + + + + + Each logger in the system has a name, the name being that + of the component using it to log messages. For instance, + if you want to configure logging for the resolver module, + you add an entry for a logger named 'Resolver'. This + configuration will then be used by the loggers in the + Resolver module, and all the libraries used by it. + + + If you want to specify logging for one specific library + within the module, you set the name to 'module.library'. + For example, the logger used by the nameserver address + store component has the full name of 'Resolver.nsas'. 
If + there is no entry in Logging for a particular library, + it will use the configuration given for the module. -If you want to specify logging for one specific library within the module, you set the name to 'module.library'. For example, the logger used by the nameserver address store component has the full name of 'Resolver.nsas'. If there is no entry in Logging for a particular library, it will use the configuration given for the module. + - -To illustrate this, suppose you want the cache library to log messages of severity DEBUG, and the rest of the resolver code to log messages of severity INFO. To achieve this you specify two loggers, one with the name 'Resolver' and severity INFO, and one with the name 'Resolver.cache' with severity DEBUG. As there are no entries for other libraries (e.g. the nsas), they will use the configuration for the module ('Resolver'), so giving the desired behavior. - + To illustrate this, suppose you want the cache library + to log messages of severity DEBUG, and the rest of the + resolver code to log messages of severity INFO. To achieve + this you specify two loggers, one with the name 'Resolver' + and severity INFO, and one with the name 'Resolver.cache' + with severity DEBUG. As there are no entries for other + libraries (e.g. the nsas), they will use the configuration + for the module ('Resolver'), so giving the desired + behavior. -One special case is that of a module name of '*', which is interpreted as 'any module'. You can set global logging options by using this, including setting the logging configuration for a library that is used by multiple modules (e.g. '*.config" specifies the configuration library code in whatever module is using it). + One special case is that of a module name of '*', which + is interpreted as 'any module'. You can set global logging + options by using this, including setting the logging + configuration for a library that is used by multiple + modules (e.g. 
'*.config" specifies the configuration + library code in whatever module is using it). - -If there are multiple logger specifications in the configuration that might match a particular logger, the specification with the more specific logger name takes precedence. For example, if there are entries for for both '*' and 'Resolver', the resolver module - and all libraries it uses - will log messages according to the configuration in the second entry ('Resolver'). All other modules will use the configuration of the first entry ('*'). If there was also a configuration entry for 'Resolver.cache', the cache library within the resolver would use that in preference to the entry for 'Resolver'. + If there are multiple logger specifications in the + configuration that might match a particular logger, the + specification with the more specific logger name takes + precedence. For example, if there are entries for for + both '*' and 'Resolver', the resolver module - and all + libraries it uses - will log messages according to the + configuration in the second entry ('Resolver'). All other + modules will use the configuration of the first entry + ('*'). If there was also a configuration entry for + 'Resolver.cache', the cache library within the resolver + would use that in preference to the entry for 'Resolver'. - -One final note about the naming. When specifying the module name within a logger, use the name of the module as specified in bindctl, e.g. 'Resolver' for the resolver module, 'Xfrout' for the xfrout module etc. When the message is logged, the message will include the name of the logger generating the message, but with the module name replaced by the name of the process implementing the module (so for example, a message generated by the 'Auth.cache' logger will appear in the output with a logger name of 'b10-auth.cache'). - + One final note about the naming. When specifying the + module name within a logger, use the name of the module + as specified in bindctl, e.g. 
'Resolver' for the resolver + module, 'Xfrout' for the xfrout module etc. When the + message is logged, the message will include the name of + the logger generating the message, but with the module + name replaced by the name of the process implementing + the module (so for example, a message generated by the + 'Auth.cache' logger will appear in the output with a + logger name of 'b10-auth.cache'). @@ -1642,19 +1687,19 @@ One final note about the naming. When specifying the module name within a logger severity (string) + + + + + This specifies the category of messages logged. -This specifies the category of messages logged. - - - - - - -Each message is logged with an associated severity which may be one of the following (in descending order of severity): + Each message is logged with an associated severity which + may be one of the following (in descending order of + severity): FATAL ERROR @@ -1662,13 +1707,17 @@ Each message is logged with an associated severity which may be one of the follo INFO DEBUG - -When the severity of a logger is set to one of these values, it will only log messages of that severity, and the severities below it. The severity may also be set to NONE, in which case all messages from that logger are inhibited. + When the severity of a logger is set to one of these + values, it will only log messages of that severity, and + the severities below it. The severity may also be set to + NONE, in which case all messages from that logger are + inhibited. + @@ -1680,16 +1729,15 @@ output_options (list) - -Each logger can have zero or more output_options. These specify where log messages are sent to. These are explained in detail below. - + Each logger can have zero or more output_options. These + specify where log messages are sent to. These are explained + in detail below. 
-The other options for a logger are: - + The other options for a logger are: @@ -1701,23 +1749,20 @@ debuglevel (integer) + When a logger's severity is set to DEBUG, this value + specifies what debug messages should be printed. It ranges + from 0 (least verbose) to 99 (most verbose). The general + classification of debug message types is -When a logger's severity is set to DEBUG, this value specifies what debug messages should be printed. It ranges from 0 (least verbose) to 99 (most verbose). The general classification of debug message types is - + - - -TODO; there's a ticket to determine these levels, see #1074 - - - + -If severity for the logger is not DEBUG, this value is ignored. - + If severity for the logger is not DEBUG, this value is ignored. @@ -1725,19 +1770,19 @@ If severity for the logger is not DEBUG, this value is ignored. additive (true or false) - -If this is true, the output_options from the parent will be used. For example, if there are two loggers configured; 'Resolver' and 'Resolver.cache', and additive is true in the second, it will write the log messages not only to the destinations specified for 'Resolver.cache', but also to the destinations as specified in the output_options in the logger named Resolver'. + If this is true, the output_options from the parent will + be used. For example, if there are two loggers configured; + 'Resolver' and 'Resolver.cache', and additive is true in + the second, it will write the log messages not only to + the destinations specified for 'Resolver.cache', but also + to the destinations as specified in the output_options + in the logger named Resolver'. - - - - - -TODO: check this + @@ -1748,8 +1793,9 @@ TODO: check this -The main settings for an output option are the 'destination' and a value called 'output', the meaning of which depends on the destination that is set. 
- + The main settings for an output option are the 'destination' + and a value called 'output', the meaning of which depends + on the destination that is set. @@ -1757,13 +1803,11 @@ The main settings for an output option are the 'destination' and a value called destination (string) - -The destination is the type of output. It can be one of: - + The destination is the type of output. It can be one of: @@ -1777,30 +1821,29 @@ The destination is the type of output. It can be one of: - output (string) - -Depending on what is set as the output destination, this value is interpreted as follows: + Depending on what is set as the output destination, this + value is interpreted as follows: - * destination is 'console' - 'output' must be one of 'stdout' (messages printed to standard output) or 'stderr' (messages printed to standard error). + 'output' must be one of 'stdout' (messages printed to standard output) or 'stderr' (messages printed to standard error). * destination is 'file' + The value of output is interpreted as a file name; log messages will be appended to this file. @@ -1809,15 +1852,14 @@ Depending on what is set as the output destination, this value is interpreted as * destination is 'syslog' + The value of output is interpreted as the syslog facility (e.g. 'local0') that should be used for log messages. - -The other options for output_options are: - + The other options for output_options are: @@ -1825,13 +1867,14 @@ The other options for output_options are: flush (true of false) - -Flush buffers after each log message. Doing this will reduce performance but will ensure that if the program terminates abnormally, all messages up to the point of termination are output. - + Flush buffers after each log message. Doing this will + reduce performance but will ensure that if the program + terminates abnormally, all messages up to the point of + termination are output. @@ -1839,20 +1882,21 @@ Flush buffers after each log message. 
Doing this will reduce performance but wil maxsize (integer) + + + + + Only relevant when destination is file, this is maximum + file size of output files in bytes. When the maximum size + is reached, the file is renamed (a ".1" is appended to + the name - if a ".1" file exists, it is renamed ".2" + etc.) and a new file opened. -Only relevant when destination is file, this is maximum file size of output files in bytes. When the maximum size is reached, the file is renamed (a ".1" is appended to the name - if a ".1" file exists, it is renamed ".2" etc.) and a new file opened. - - - - - - -If this is 0, no maximum file size is used. - + If this is 0, no maximum file size is used. @@ -1860,12 +1904,13 @@ If this is 0, no maximum file size is used. maxver (integer) - -Maximum number of old log files to keep around when rolling the output file. Only relevant when destination if 'file'. + Maximum number of old log files to keep around when + rolling the output file. Only relevant when destination + if 'file'. @@ -1876,8 +1921,10 @@ Maximum number of old log files to keep around when rolling the output file. Onl -In this example we want to set the global logging to write to the file /var/log/my_bind10.log, at severity WARN. We want the authoritative server to log at DEBUG with debuglevel 40, to a different file (/tmp/debug_messages). - + In this example we want to set the global logging to + write to the file /var/log/my_bind10.log, at severity + WARN. We want the authoritative server to log at DEBUG + with debuglevel 40, to a different file (/tmp/debug_messages). @@ -1885,7 +1932,6 @@ In this example we want to set the global logging to write to the file /var/log/ Start bindctl - @@ -1895,20 +1941,19 @@ Start bindctl Logging/loggers [] list + + + + + By default, no specific loggers are configured, in which + case the severity defaults to INFO and the output is + written to stderr. 
-By default, no specific loggers are configured, in which case the severity defaults to INFO and the output is written to stderr. - - - - - - -Let's first add a default logger; - + Let's first add a default logger; @@ -1924,8 +1969,8 @@ Logging/loggers/ list (modified) - -The loggers value line changed to indicate that it is no longer an empty list; + The loggers value line changed to indicate that it is no + longer an empty list; @@ -1943,9 +1988,9 @@ Logging/loggers[0]/output_options [] list (default) - -The name is mandatory, so we must set it. We will also change the severity as well. Let's start with the global logger. - + The name is mandatory, so we must set it. We will also + change the severity as well. Let's start with the global + logger. @@ -1965,9 +2010,8 @@ Logging/loggers[0]/output_options [] list (default) - -Of course, we need to specify where we want the log messages to go, so we add an entry for an output option. - + Of course, we need to specify where we want the log + messages to go, so we add an entry for an output option. @@ -1987,8 +2031,7 @@ Logging/loggers[0]/output_options[0]/maxver 0 integer (default) -These aren't the values we are looking for. - + These aren't the values we are looking for. @@ -2004,7 +2047,8 @@ These aren't the values we are looking for. -Which would make the entire configuration for this logger look like: + Which would make the entire configuration for this logger + look like: @@ -2026,8 +2070,8 @@ Logging/loggers[0]/output_options[0]/maxver 8 integer (modified) -That looks OK, so let's commit it before we add the configuration for the authoritative server's logger. - + That looks OK, so let's commit it before we add the + configuration for the authoritative server's logger. @@ -2035,13 +2079,12 @@ That looks OK, so let's commit it before we add the configuration for the author > config commit - -Now that we have set it, and checked each value along the way, adding a second entry is quite similar. 
- + Now that we have set it, and checked each value along + the way, adding a second entry is quite similar. @@ -2061,9 +2104,10 @@ Now that we have set it, and checked each value along the way, adding a second e - -And that's it. Once we have found whatever it was we needed the debug messages for, we can simply remove the second logger to let the authoritative server use the same settings as the rest. - + And that's it. Once we have found whatever it was we + needed the debug messages for, we can simply remove the + second logger to let the authoritative server use the + same settings as the rest. @@ -2077,7 +2121,8 @@ And that's it. Once we have found whatever it was we needed the debug messages f -And every module will now be using the values from the logger named '*'. + And every module will now be using the values from the + logger named '*'. From 87a4f24037965ae88435ebe3f887750c500cbfde Mon Sep 17 00:00:00 2001 From: reed Date: Tue, 26 Jul 2011 18:18:49 -0500 Subject: [PATCH 015/175] trac1011: minor punctuation fix --- doc/guide/bind10-guide.xml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 4643bfb70e..309c35d5fe 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1953,7 +1953,7 @@ Logging/loggers [] list - Let's first add a default logger; + Let's first add a default logger: @@ -1970,7 +1970,7 @@ Logging/loggers/ list (modified) The loggers value line changed to indicate that it is no - longer an empty list; + longer an empty list: From 579fd2bf848e994ed6dcd8d1c3633f2fa62cbd28 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Thu, 28 Jul 2011 13:47:17 +0200 Subject: [PATCH 016/175] [trac1061] Interface of the database connection and client It will look something like this, hopefully. Let's see if it works. 
--- src/lib/datasrc/Makefile.am | 1 + src/lib/datasrc/client.h | 2 ++ src/lib/datasrc/database.cc | 21 +++++++++++++++++++ src/lib/datasrc/database.h | 40 +++++++++++++++++++++++++++++++++++++ 4 files changed, 64 insertions(+) create mode 100644 src/lib/datasrc/database.cc create mode 100644 src/lib/datasrc/database.h diff --git a/src/lib/datasrc/Makefile.am b/src/lib/datasrc/Makefile.am index 261baaeb0b..eecd26a9b6 100644 --- a/src/lib/datasrc/Makefile.am +++ b/src/lib/datasrc/Makefile.am @@ -22,6 +22,7 @@ libdatasrc_la_SOURCES += zone.h libdatasrc_la_SOURCES += result.h libdatasrc_la_SOURCES += logger.h logger.cc libdatasrc_la_SOURCES += client.h +libdatasrc_la_SOURCES += database.h database.cc nodist_libdatasrc_la_SOURCES = datasrc_messages.h datasrc_messages.cc libdatasrc_la_LIBADD = $(top_builddir)/src/lib/exceptions/libexceptions.la diff --git a/src/lib/datasrc/client.h b/src/lib/datasrc/client.h index a830f00c21..9fe6519532 100644 --- a/src/lib/datasrc/client.h +++ b/src/lib/datasrc/client.h @@ -15,6 +15,8 @@ #ifndef __DATA_SOURCE_CLIENT_H #define __DATA_SOURCE_CLIENT_H 1 +#include + #include namespace isc { diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc new file mode 100644 index 0000000000..71014d2cea --- /dev/null +++ b/src/lib/datasrc/database.cc @@ -0,0 +1,21 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +namespace isc { +namespace datasrc { + +} +} diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h new file mode 100644 index 0000000000..b2fe081c26 --- /dev/null +++ b/src/lib/datasrc/database.h @@ -0,0 +1,40 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
+ +#ifndef __DATABASE_DATASRC_H +#define __DATABASE_DATASRC_H + +#include + +namespace isc { +namespace datasrc { + +class DatabaseConnection : boost::noncopyable { +public: + ~ DatabaseConnection() { } + virtual std::pair<bool, int> getZone() const; +}; + +class DatabaseClient : public DataSourceClient { +public: + DatabaseClient(const std::auto_ptr<DatabaseConnection>& connection); + virtual FindResult findZone(const isc::dns::Name& name) const; +private: + const std::auto_ptr<DatabaseConnection> connection_; +}; + +} +} + +#endif From 63f4617b5ab99d75e98e40760ff68bb1615a84e6 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Thu, 28 Jul 2011 15:42:45 +0200 Subject: [PATCH 017/175] [trac1061] Doxygen comments for database classes --- src/lib/datasrc/database.h | 138 ++++++++++++++++++++++++++++++++++++- 1 file changed, 136 insertions(+), 2 deletions(-) diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index b2fe081c26..e949ec3cb7 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -20,17 +20,151 @@ namespace isc { namespace datasrc { +/** + * \brief Abstract connection to database with DNS data + * + * This class defines the interface to databases. Each supported database + * will provide methods for accessing the data stored there in a generic + * manner. The methods are meant to be low-level, without much or any knowledge + * about DNS and should be possible to translate directly to queries. + * + * On the other hand, how the communication with database is done and in what + * schema (in case of relational/SQL database) is up to the concrete classes. + * + * This class is non-copyable, as copying connections to database makes little + * sense and will not be needed. + * + * \todo Is it true this does not need to be copied? For example the zone + * iterator might need its own copy. But a virtual clone() method might + * be better for that than a copy constructor. + * + * \note The same application may create multiple connections to the same + * database.
If the database allows having multiple open queries at one + * connection, the connection class may share it. + */ class DatabaseConnection : boost::noncopyable { public: - ~ DatabaseConnection() { } - virtual std::pair<bool, int> getZone() const; + /** + * \brief Destructor + * + * It is empty, but needs a virtual one, since we will use the derived + * classes in a polymorphic way. + */ + virtual ~ DatabaseConnection() { } + /** + * \brief Retrieve a zone identifier + * + * This method looks up a zone for the given name in the database. It + * should match only the exact zone name (e.g. name is equal to the zone's + * apex), as the DatabaseClient will loop through the labels itself and + * find the most suitable zone. + * + * It is not specified if and what implementation of this method may throw, + * so code should expect anything. + * + * \param name The name of the zone's apex to be looked up. + * \return The first part of the result indicates if a matching zone + * was found. In case it was, the second part is the internal zone ID. + * This one will be passed to methods finding data in the zone. + * It is not required to keep them, in which case whatever might + * be returned - the ID is only passed back to the connection as + * an opaque handle. + */ + virtual std::pair<bool, int> getZone(const isc::dns::Name& name) const; }; +/** + * \brief Concrete data source client oriented at database backends. + * + * This class (together with corresponding versions of ZoneFinder, + * ZoneIterator, etc.) translates high-level data source queries to + * low-level calls on DatabaseConnection. It calls multiple queries + * if necessary and validates data from the database, allowing the + * DatabaseConnection to be just a simple translation to SQL/other + * queries to the database. + * + * While it is possible to subclass it for a specific database in case + * of special needs, it is not expected to be needed. This should just + * work as it is with whatever DatabaseConnection.
+ */ class DatabaseClient : public DataSourceClient { public: + /** + * \brief Constructor + * + * It initializes the client with a connection. + * + * It throws isc::InvalidParameter if connection is NULL. It might throw + * a standard allocation exception as well, but doesn't throw anything else. + * + * \note Some objects returned from methods of this class (like ZoneFinder) + * hold references to the connection. As the lifetime of the connection + * is bound to this object, the returned objects must not be used after + * destruction of the DatabaseClient. + * + * \todo Should we use shared_ptr instead? On one side, we would get rid of + * the restriction and maybe could ease some shutdown scenarios with + * multi-threaded applications, on the other hand it is more expensive + * and looks generally unneeded. + * + * \param connection The connection to use to get data. As the parameter + * suggests, the client takes ownership of the connection and will + * delete it when itself deleted. + */ DatabaseClient(const std::auto_ptr<DatabaseConnection>& connection); + /** + * \brief Corresponding ZoneFinder implementation + * + * The zone finder implementation for database data sources. Similarly + * to the DatabaseClient, it translates the queries to methods of the + * connection. + * + * Applications should not come directly into contact with this class + * (they should handle it through a generic ZoneFinder pointer), therefore + * it could be completely hidden in the .cc file. But it is provided + * to allow testing and for rare cases when a database needs slightly + * different handling, so it can be subclassed. + * + * Methods directly correspond to the ones in ZoneFinder. + */ + class Finder : public ZoneFinder { + public: + /** + * \brief Constructor + * + * \param connection The connection (shared with DatabaseClient) to + * be used for queries (the one asked for ID before).
+ * \param zone_id The zone ID which was returned from + * DatabaseConnection::getZone and which will be passed to further + * calls to the connection. + */ + Finder(DatabaseConnection& connection, int zone_id); + virtual const isc::dns::Name& getOrigin() const; + virtual const isc::dns::RRClass& getClass() const; + virtual FindResult find(const isc::dns::Name& name, + const isc::dns::RRType& type, + isc::dns::RRsetList* target = NULL, + const FindOptions options = FIND_DEFAULT) + const = 0; + private: + DatabaseConnection& connection_; + const int zone_id_; + }; + /** + * \brief Find a zone in the database + * + * This queries connection's getZone to find the best matching zone. + * It will propagate whatever exceptions are thrown from that method + * (which is not restricted in any way). + * + * \param name Name of the zone or data contained there. + * \return Result containing the code and instance of Finder, if anything + * is found. Applications should not rely on the specific class being + * returned, though. + */ virtual FindResult findZone(const isc::dns::Name& name) const; private: + /// \brief Our connection. const std::auto_ptr connection_; }; From b5fbd9c942b1080aa60a48ee23da60574d1fc22f Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Mon, 1 Aug 2011 11:47:37 +0200 Subject: [PATCH 018/175] [trac1061] Tests for finding of the zone. --- src/lib/datasrc/database.h | 20 ++++- src/lib/datasrc/tests/Makefile.am | 1 + src/lib/datasrc/tests/database_unittest.cc | 93 ++++++++++++++++++++++ 3 files changed, 113 insertions(+), 1 deletion(-) create mode 100644 src/lib/datasrc/tests/database_unittest.cc diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index e949ec3cb7..e2ff407984 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -70,7 +70,7 @@ public: * be returned - the ID is only passed back to the connection as * an opaque handle. 
*/ - virtual std::pair<bool, int> getZone(const isc::dns::Name& name) const; + virtual std::pair<bool, int> getZone(const isc::dns::Name& name) const = 0; }; /** @@ -146,6 +146,24 @@ public: isc::dns::RRsetList* target = NULL, const FindOptions options = FIND_DEFAULT) const = 0; + /** + * \brief The zone ID + * + * This function provides the stored zone ID as passed to the + * constructor. This is meant for testing purposes and normal + * applications shouldn't need it. + */ + int zone_id() const { return (zone_id_); } + /** + * \brief The database connection. + * + * This function provides the database connection stored inside as + * passed to the constructor. This is meant for testing purposes and + * normal applications shouldn't need it. + */ + const DatabaseConnection& connection() const { + return (connection_); + } private: DatabaseConnection& connection_; const int zone_id_; diff --git a/src/lib/datasrc/tests/Makefile.am b/src/lib/datasrc/tests/Makefile.am index fbcf9c95c0..9cfd0d8c29 100644 --- a/src/lib/datasrc/tests/Makefile.am +++ b/src/lib/datasrc/tests/Makefile.am @@ -28,6 +28,7 @@ run_unittests_SOURCES += rbtree_unittest.cc run_unittests_SOURCES += zonetable_unittest.cc run_unittests_SOURCES += memory_datasrc_unittest.cc run_unittests_SOURCES += logger_unittest.cc +run_unittests_SOURCES += database_unittest.cc run_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) run_unittests_LDFLAGS = $(AM_LDFLAGS) $(GTEST_LDFLAGS) diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc new file mode 100644 index 0000000000..45e445905f --- /dev/null +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -0,0 +1,93 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies.
+// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include + +#include + +#include + +using namespace isc::datasrc; +using namespace std; +using namespace boost; +using isc::dns::Name; + +namespace { + +/* + * A virtual database connection that pretends it contains a single zone -- + * example.org. + */ +class MockConnection : public DatabaseConnection { +public: + virtual std::pair<bool, int> getZone(const Name& name) const { + if (name == Name("zone.example.org")) { + return (std::pair<bool, int>(true, 42)); + } else { + return (std::pair<bool, int>(false, 0)); + } + } +}; + +class DatabaseClientTest : public ::testing::Test { +public: + DatabaseClientTest() { + createClient(); + } + /* + * We initialize the client from a function, so we can call it multiple + * times per test. + */ + void createClient() { + current_connection_ = new MockConnection(); + client_.reset(new DatabaseClient(auto_ptr<DatabaseConnection>( + current_connection_))); + } + // Will be deleted by client_, just keep the current value for comparison. + MockConnection* current_connection_; + auto_ptr<DatabaseClient> client_; + /** + * Check the zone finder is a valid one and references the zone ID and + * connection available here.
+ */ + void checkZoneFinder(const DataSourceClient::FindResult& zone) { + ASSERT_NE(ZoneFinderPtr(), zone.zone_finder) << "No zone finder"; + shared_ptr<DatabaseClient::Finder> finder( + dynamic_pointer_cast<DatabaseClient::Finder>(zone.zone_finder)); + ASSERT_NE(shared_ptr<DatabaseClient::Finder>(), finder) << + "Wrong type of finder"; + EXPECT_EQ(42, finder->zone_id()); + EXPECT_EQ(current_connection_, &finder->connection()); + } +}; + +TEST_F(DatabaseClientTest, zoneNotFound) { + DataSourceClient::FindResult zone(client_->findZone(Name("example.com"))); + EXPECT_EQ(result::NOTFOUND, zone.code); +} + +TEST_F(DatabaseClientTest, exactZone) { + DataSourceClient::FindResult zone(client_->findZone(Name("example.org"))); + EXPECT_EQ(result::SUCCESS, zone.code); + checkZoneFinder(zone); +} + +TEST_F(DatabaseClientTest, superZone) { + DataSourceClient::FindResult zone(client_->findZone(Name( + "sub.example.org"))); + EXPECT_EQ(result::PARTIALMATCH, zone.code); + checkZoneFinder(zone); +} + +} From 14a0766224d50d1c4c409e883cf29515dafc25f0 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Mon, 1 Aug 2011 12:03:35 +0200 Subject: [PATCH 019/175] [trac1061] Test for constructor exception --- src/lib/datasrc/tests/database_unittest.cc | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index 45e445905f..f18dc3e69a 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -15,6 +15,7 @@ #include #include +#include #include @@ -90,4 +91,9 @@ TEST_F(DatabaseClientTest, superZone) { checkZoneFinder(zone); } +TEST_F(DatabaseClientTest, noConnException) { + EXPECT_THROW(DatabaseClient(auto_ptr<DatabaseConnection>()), + isc::InvalidParameter); +} + +} From b63b9aac20259f3612e23c7a3e977dcb48693ef1 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Mon, 1 Aug 2011 13:04:40 +0200 Subject: [PATCH 020/175] [trac1061] Don't return reference While it works for in-memory zone and similar, it can't be done with databases, as
the database returns some primitive data and it must be created on the spot. --- src/bin/auth/tests/query_unittest.cc | 4 ++-- src/lib/datasrc/database.h | 4 ++-- src/lib/datasrc/memory_datasrc.cc | 4 ++-- src/lib/datasrc/memory_datasrc.h | 4 ++-- src/lib/datasrc/zone.h | 4 ++-- 5 files changed, 10 insertions(+), 10 deletions(-) diff --git a/src/bin/auth/tests/query_unittest.cc b/src/bin/auth/tests/query_unittest.cc index 6a75856eee..9ef8c13682 100644 --- a/src/bin/auth/tests/query_unittest.cc +++ b/src/bin/auth/tests/query_unittest.cc @@ -122,8 +122,8 @@ public: masterLoad(zone_stream, origin_, rrclass_, boost::bind(&MockZoneFinder::loadRRset, this, _1)); } - virtual const isc::dns::Name& getOrigin() const { return (origin_); } - virtual const isc::dns::RRClass& getClass() const { return (rrclass_); } + virtual isc::dns::Name getOrigin() const { return (origin_); } + virtual isc::dns::RRClass getClass() const { return (rrclass_); } virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, RRsetList* target = NULL, diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index e2ff407984..f3aa9f5740 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -139,8 +139,8 @@ public: * calls to the connection. 
*/ Finder(DatabaseConnection& connection, int zone_id); - virtual const isc::dns::Name& getOrigin() const; - virtual const isc::dns::RRClass& getClass() const; + virtual isc::dns::Name getOrigin() const; + virtual isc::dns::RRClass getClass() const; virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, diff --git a/src/lib/datasrc/memory_datasrc.cc b/src/lib/datasrc/memory_datasrc.cc index 3d24ce0200..26223dad90 100644 --- a/src/lib/datasrc/memory_datasrc.cc +++ b/src/lib/datasrc/memory_datasrc.cc @@ -606,12 +606,12 @@ InMemoryZoneFinder::~InMemoryZoneFinder() { delete impl_; } -const Name& +Name InMemoryZoneFinder::getOrigin() const { return (impl_->origin_); } -const RRClass& +RRClass InMemoryZoneFinder::getClass() const { return (impl_->zone_class_); } diff --git a/src/lib/datasrc/memory_datasrc.h b/src/lib/datasrc/memory_datasrc.h index 9bed9603c1..9707797299 100644 --- a/src/lib/datasrc/memory_datasrc.h +++ b/src/lib/datasrc/memory_datasrc.h @@ -58,10 +58,10 @@ public: //@} /// \brief Returns the origin of the zone. - virtual const isc::dns::Name& getOrigin() const; + virtual isc::dns::Name getOrigin() const; /// \brief Returns the class of the zone. - virtual const isc::dns::RRClass& getClass() const; + virtual isc::dns::RRClass getClass() const; /// \brief Looks up an RRset in the zone. /// diff --git a/src/lib/datasrc/zone.h b/src/lib/datasrc/zone.h index 69785f0227..f67ed4be24 100644 --- a/src/lib/datasrc/zone.h +++ b/src/lib/datasrc/zone.h @@ -131,10 +131,10 @@ public: /// These methods should never throw an exception. //@{ /// Return the origin name of the zone. - virtual const isc::dns::Name& getOrigin() const = 0; + virtual isc::dns::Name getOrigin() const = 0; /// Return the RR class of the zone. 
- virtual const isc::dns::RRClass& getClass() const = 0; + virtual isc::dns::RRClass getClass() const = 0; //@} /// From 11b8b873e7fd6722053aa224d20f29350bf2b298 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Mon, 1 Aug 2011 13:06:13 +0200 Subject: [PATCH 021/175] [trac1061] Implement finding a zone in DB And provide some basics of the ZoneFinder, which does not work, but compiles at last. --- src/lib/datasrc/database.cc | 64 ++++++++++++++++++++++ src/lib/datasrc/database.h | 4 +- src/lib/datasrc/tests/database_unittest.cc | 2 +- 3 files changed, 67 insertions(+), 3 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 71014d2cea..5fe9f7ba8e 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -14,8 +14,72 @@ #include +#include +#include + +using isc::dns::Name; + namespace isc { namespace datasrc { +DatabaseClient::DatabaseClient(std::auto_ptr<DatabaseConnection> connection) : + connection_(connection) +{ + if (connection_.get() == NULL) { + isc_throw(isc::InvalidParameter, + "No connection provided to DatabaseClient"); + } +} + +DataSourceClient::FindResult +DatabaseClient::findZone(const Name& name) const { + std::pair<bool, int> zone(connection_->getZone(name)); + // Try exact first + if (zone.first) { + return (FindResult(result::SUCCESS, + ZoneFinderPtr(new Finder(*connection_, + zone.second)))); + } + // Then super domains + // Start from 1, as 0 is covered above + for (size_t i(1); i < name.getLabelCount(); ++i) { + zone = connection_->getZone(name.split(i)); + if (zone.first) { + return (FindResult(result::PARTIALMATCH, + ZoneFinderPtr(new Finder(*connection_, + zone.second)))); + } + } + // No, really nothing + return (FindResult(result::NOTFOUND, ZoneFinderPtr())); + +DatabaseClient::Finder::Finder(DatabaseConnection& connection, int zone_id) : + connection_(connection), + zone_id_(zone_id) +{ } + +ZoneFinder::FindResult +DatabaseClient::Finder::find(const isc::dns::Name&, + const isc::dns::RRType&, +
isc::dns::RRsetList*, + const FindOptions) const +{ + // TODO Implement + return (FindResult(SUCCESS, isc::dns::ConstRRsetPtr())); +} + +Name +DatabaseClient::Finder::getOrigin() const { + // TODO Implement + return (Name(".")); +} + +isc::dns::RRClass +DatabaseClient::Finder::getClass() const { + // TODO Implement + return isc::dns::RRClass::IN(); +} + +} +} diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index f3aa9f5740..5693479a77 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -111,7 +111,7 @@ public: * suggests, the client takes ownership of the connection and will * delete it when itself deleted. */ - DatabaseClient(const std::auto_ptr<DatabaseConnection>& connection); + DatabaseClient(std::auto_ptr<DatabaseConnection> connection); /** * \brief Corresponding ZoneFinder implementation * @@ -145,7 +145,7 @@ public: const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, const FindOptions options = FIND_DEFAULT) - const = 0; + const; /** * \brief The zone ID * diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index f18dc3e69a..b60d5c0ced 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -33,7 +33,7 @@ namespace { class MockConnection : public DatabaseConnection { public: virtual std::pair<bool, int> getZone(const Name& name) const { - if (name == Name("zone.example.org")) { + if (name == Name("example.org")) { return (std::pair<bool, int>(true, 42)); } else { return (std::pair<bool, int>(false, 0)); From be9d5fe994e6a086a951e432d56e7de2af3cfd09 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Mon, 1 Aug 2011 13:29:36 +0200 Subject: [PATCH 022/175] [trac1061] First attempt at SQLite3Connection interface It will probably change during the implementation though.
--- src/lib/datasrc/Makefile.am | 1 + src/lib/datasrc/sqlite3_connection.cc | 14 +++++++++ src/lib/datasrc/sqlite3_connection.h | 43 +++++++++++++++++++++++++++ 3 files changed, 58 insertions(+) create mode 100644 src/lib/datasrc/sqlite3_connection.cc create mode 100644 src/lib/datasrc/sqlite3_connection.h diff --git a/src/lib/datasrc/Makefile.am b/src/lib/datasrc/Makefile.am index eecd26a9b6..e6bff58fea 100644 --- a/src/lib/datasrc/Makefile.am +++ b/src/lib/datasrc/Makefile.am @@ -23,6 +23,7 @@ libdatasrc_la_SOURCES += result.h libdatasrc_la_SOURCES += logger.h logger.cc libdatasrc_la_SOURCES += client.h libdatasrc_la_SOURCES += database.h database.cc +libdatasrc_la_SOURCES += sqlite3_connection.h sqlite3_connection.cc nodist_libdatasrc_la_SOURCES = datasrc_messages.h datasrc_messages.cc libdatasrc_la_LIBADD = $(top_builddir)/src/lib/exceptions/libexceptions.la diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc new file mode 100644 index 0000000000..e8c25092dc --- /dev/null +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -0,0 +1,14 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. 
+ diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h new file mode 100644 index 0000000000..e18386cd7d --- /dev/null +++ b/src/lib/datasrc/sqlite3_connection.h @@ -0,0 +1,43 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + + +#ifndef __DATASRC_SQLITE3_CONNECTION_H +#define __DATASRC_SQLITE3_CONNECTION_H + +#include + +// TODO Once the whole SQLite3 thing is ported here, move the Sqlite3Error +// here and remove the header file. +#include + +namespace isc { +namespace datasrc { + +class SQLite3Connection : public DatabaseConnection { +public: + // TODO Should we simplify this as well and just pass config to the + // constructor and be done? (whenever the config would change, we would + // recreate new connections) + Result init() { return (init(isc::data::ElementPtr())); } + Result init(const isc::data::ConstElementPtr& config); + Result close(); + + virtual std::pair<bool, int> getZone(const isc::dns::Name& name) const; +}; + +} +} + +#endif From dc3b856b460ff380feb68cdff551f334e6db5a27 Mon Sep 17 00:00:00 2001 From: "Jeremy C.
Reed" Date: Mon, 1 Aug 2011 13:41:04 -0500 Subject: [PATCH 023/175] [trac1011] add section title to logging message format section also removed some tabs and spaces at end of lines --- doc/guide/bind10-guide.xml | 131 +++++++++++++++++++------------------ 1 file changed, 68 insertions(+), 63 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 309c35d5fe..b21e49bea6 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -146,7 +146,7 @@ The processes started by the bind10 command have names starting with "b10-", including: - + @@ -1472,60 +1472,63 @@ then change those defaults with config set Resolver/forward_addresses[0]/address - - Each message written by BIND 10 to the configured logging destinations - comprises a number of components that identify the origin of the - message and, if the message indicates a problem, information about the - problem that may be useful in fixing it. - +
+ Logging Message Formatn - - Consider the message below logged to a file: - 2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink] + + Each message written by BIND 10 to the configured logging destinations + comprises a number of components that identify the origin of the + message and, if the message indicates a problem, information about the + problem that may be useful in fixing it. + + + + Consider the message below logged to a file: + 2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink] ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53) - + - - Note: the layout of messages written to the system logging - file (syslog) may be slightly different. This message has - been split across two lines here for display reasons; in the - logging file, it will appear on one line.) - + + Note: the layout of messages written to the system logging + file (syslog) may be slightly different. This message has + been split across two lines here for display reasons; in the + logging file, it will appear on one line.) + - - The log message comprises a number of components: + + The log message comprises a number of components: - - - 2011-06-15 13:48:22.034 + + + 2011-06-15 13:48:22.034 - - The date and time at which the message was generated. - - + + The date and time at which the message was generated. + + - - ERROR - - The severity of the message. - - + + ERROR + + The severity of the message. + + - - [b10-resolver.asiolink] - - The source of the message. This comprises two components: - the BIND 10 process generating the message (in this - case, b10-resolver) and the module - within the program from which the message originated - (which in the example is the asynchronous I/O link - module, asiolink). - - + + [b10-resolver.asiolink] + + The source of the message. 
This comprises two components: + the BIND 10 process generating the message (in this + case, b10-resolver) and the module + within the program from which the message originated + (which in the example is the asynchronous I/O link + module, asiolink). + + - - ASIODNS_OPENSOCK - + + ASIODNS_OPENSOCK + The message identification. Every message in BIND 10 has a unique identification, which can be used as an index into the () from which more information can be obtained. - - + + - - error 111 opening TCP socket to 127.0.0.1(53) - - A brief description of the cause of the problem. Within this text, - information relating to the condition that caused the message to - be logged will be included. In this example, error number 111 - (an operating system-specific error number) was encountered when - trying to open a TCP connection to port 53 on the local system - (address 127.0.0.1). The next step would be to find out the reason - for the failure by consulting your system's documentation to - identify what error number 111 means. - - - - + + error 111 opening TCP socket to 127.0.0.1(53) + + A brief description of the cause of the problem. Within this text, + information relating to the condition that caused the message to + be logged will be included. In this example, error number 111 + (an operating system-specific error number) was encountered when + trying to open a TCP connection to port 53 on the local system + (address 127.0.0.1). The next step would be to find out the reason + for the failure by consulting your system's documentation to + identify what error number 111 means. + + + + + +
Logging configuration From 9819295a58b8b40ca6d95c84f1f1de08fb0eb707 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Mon, 1 Aug 2011 13:42:26 -0500 Subject: [PATCH 024/175] [trac1011] reformat some docbook (no change to final result) --- doc/guide/bind10-guide.xml | 27 +++++++++++++++------------ 1 file changed, 15 insertions(+), 12 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index b21e49bea6..c7c5fd57b3 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1476,10 +1476,11 @@ then change those defaults with config set Resolver/forward_addresses[0]/address Logging Message Formatn - Each message written by BIND 10 to the configured logging destinations - comprises a number of components that identify the origin of the - message and, if the message indicates a problem, information about the - problem that may be useful in fixing it. + Each message written by BIND 10 to the configured logging + destinations comprises a number of components that identify + the origin of the message and, if the message indicates + a problem, information about the problem that may be + useful in fixing it. @@ -1542,14 +1543,16 @@ then change those defaults with config set Resolver/forward_addresses[0]/address error 111 opening TCP socket to 127.0.0.1(53) - A brief description of the cause of the problem. Within this text, - information relating to the condition that caused the message to - be logged will be included. In this example, error number 111 - (an operating system-specific error number) was encountered when - trying to open a TCP connection to port 53 on the local system - (address 127.0.0.1). The next step would be to find out the reason - for the failure by consulting your system's documentation to - identify what error number 111 means. + A brief description of the cause of the problem. + Within this text, information relating to the condition + that caused the message to be logged will be included. 
+ In this example, error number 111 (an operating + system-specific error number) was encountered when + trying to open a TCP connection to port 53 on the + local system (address 127.0.0.1). The next step + would be to find out the reason for the failure by + consulting your system's documentation to identify + what error number 111 means. From 1c8043e5b50bd47d7734397a08d5015e3672b9ad Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Mon, 1 Aug 2011 13:43:15 -0500 Subject: [PATCH 025/175] [trac1011] fix my typo --- doc/guide/bind10-guide.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index c7c5fd57b3..34ba092cb7 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1473,7 +1473,7 @@ then change those defaults with config set Resolver/forward_addresses[0]/address
- Logging Message Formatn + Logging Message Format Each message written by BIND 10 to the configured logging From 61029d971895738ba353841d99f4ca07ecf792b7 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Mon, 1 Aug 2011 16:31:32 -0500 Subject: [PATCH 026/175] [trac1011] fix typo --- doc/guide/bind10-guide.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 34ba092cb7..d6d4703780 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -547,7 +547,7 @@ Debian and Ubuntu: --prefix - Define the the installation location (the + Define the installation location (the default is /usr/local/). From 16e52275c4c9e355cf4e448a5b17136f24324d7a Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Mon, 1 Aug 2011 16:56:57 -0500 Subject: [PATCH 027/175] [trac1011] some docbook formatting for new content This needs to be indented correctly still. --- doc/guide/bind10-guide.xml | 182 ++++++++++++++++++++++--------------- 1 file changed, 110 insertions(+), 72 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index d6d4703780..3024467822 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1603,25 +1603,20 @@ then change those defaults with config set Resolver/forward_addresses[0]/address - - -name (string) - - - - +
+ name (string) + Each logger in the system has a name, the name being that of the component using it to log messages. For instance, if you want to configure logging for the resolver module, you add an entry for a logger named 'Resolver'. This configuration will then be used by the loggers in the Resolver module, and all the libraries used by it. + - - If you want to specify logging for one specific library @@ -1691,11 +1686,10 @@ name (string) - +
-severity (string) - -
+
+ severity (string) @@ -1708,15 +1702,40 @@ severity (string) Each message is logged with an associated severity which may be one of the following (in descending order of severity): - - FATAL - ERROR - WARN - INFO - DEBUG - + + + + FATAL + + + + + + ERROR + + + + + + WARN + + + + + + INFO + + + + + + DEBUG + + + + When the severity of a logger is set to one of these @@ -1729,11 +1748,10 @@ severity (string) - +
-output_options (list) - - +
+ output_options (list) @@ -1749,11 +1767,10 @@ output_options (list) - +
-debuglevel (integer) - - +
+ debuglevel (integer) @@ -1774,11 +1791,10 @@ debuglevel (integer) - +
-additive (true or false) - - +
+ additive (true or false) @@ -1796,6 +1812,8 @@ additive (true or false)
+
+
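The logger-level options just listed (name, severity, debuglevel, additive) would be set through the normal configuration channel. A hypothetical bindctl session might look like the following — the `Logging/loggers` paths here are an illustrative assumption, not something these patches establish:

```
> config add Logging/loggers
> config set Logging/loggers[0]/name "Resolver"
> config set Logging/loggers[0]/severity "DEBUG"
> config set Logging/loggers[0]/debuglevel 40
> config set Logging/loggers[0]/additive false
> config commit
```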
Output Options @@ -1807,11 +1825,8 @@ additive (true or false) - - -destination (string) - - +
+ destination (string) @@ -1819,19 +1834,36 @@ destination (string) - + - * console - * file - * syslog + - + + + console + + - + + + file + + -output (string) + + + syslog + + - + + + + +
+ +
+ output (string) @@ -1840,30 +1872,38 @@ output (string) - - - * destination is 'console' + + + destination is 'console' + + 'output' must be one of 'stdout' (messages printed to standard output) or 'stderr' (messages printed to standard error). + + + - - - - - * destination is 'file' + + destination is 'file' + + The value of output is interpreted as a file name; log messages will be appended to this file. + + + - - - - - * destination is 'syslog' - + + destination is 'syslog' + + The value of output is interpreted as the syslog facility (e.g. 'local0') that should be used for log messages. + + + - + @@ -1871,11 +1911,10 @@ output (string) - +
-flush (true of false) - - +
+ flush (true of false) @@ -1886,11 +1925,10 @@ flush (true of false) - +
-maxsize (integer) - - +
+ maxsize (integer) @@ -1908,11 +1946,10 @@ maxsize (integer) - +
-maxver (integer) - - +
+ maxver (integer) @@ -1921,6 +1958,7 @@ maxver (integer) if 'file'. +
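Putting the output options together, a single logger's output_options entry could hypothetically be filled in from bindctl as follows; the configuration paths, the log file name, and the size limit are illustrative assumptions rather than values taken from these patches:

```
> config add Logging/loggers[0]/output_options
> config set Logging/loggers[0]/output_options[0]/destination "file"
> config set Logging/loggers[0]/output_options[0]/output "/var/log/b10-resolver.log"
> config set Logging/loggers[0]/output_options[0]/flush false
> config set Logging/loggers[0]/output_options[0]/maxsize 1048576
> config set Logging/loggers[0]/output_options[0]/maxver 8
> config commit
```

With destination 'console', only 'output' (as 'stdout' or 'stderr') would be relevant; with 'syslog', 'output' names the facility and the size/rotation options do not apply.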
From e76dc86b0a01a54dab56cbf8552bd0c5fbb5b461 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Tue, 2 Aug 2011 22:28:14 +0200 Subject: [PATCH 028/175] [trac1061] (Co|De)structor of SQLite3Connection Most of the code is slightly modified copy-paste from the Sqlite3DataSource. No documentation or log messages and the getZone method is dummy. But it compiles and provides some kind of frame for the rest. --- src/lib/datasrc/datasrc_messages.mes | 7 + src/lib/datasrc/sqlite3_connection.cc | 280 ++++++++++++++++++ src/lib/datasrc/sqlite3_connection.h | 28 +- src/lib/datasrc/tests/Makefile.am | 1 + .../tests/sqlite3_connection_unittest.cc | 70 +++++ 5 files changed, 376 insertions(+), 10 deletions(-) create mode 100644 src/lib/datasrc/tests/sqlite3_connection_unittest.cc diff --git a/src/lib/datasrc/datasrc_messages.mes b/src/lib/datasrc/datasrc_messages.mes index 3dc69e070d..a6f783785c 100644 --- a/src/lib/datasrc/datasrc_messages.mes +++ b/src/lib/datasrc/datasrc_messages.mes @@ -497,3 +497,10 @@ data source. This indicates a programming error. An internal task of unknown type was generated. +% DATASRC_SQLITE_NEWCONN TODO + +% DATASRC_SQLITE_DROPCONN TODO + +% DATASRC_SQLITE_CONNOPEN TODO + +% DATASRC_SQLITE_CONNCLOSE TODO diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index e8c25092dc..6dd8319879 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -12,3 +12,283 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. 
+#include + +#include +#include +#include + +namespace isc { +namespace datasrc { + +struct SQLite3Parameters { + SQLite3Parameters() : + db_(NULL), version_(-1), + q_zone_(NULL) /*, q_record_(NULL), q_addrs_(NULL), q_referral_(NULL), + q_any_(NULL), q_count_(NULL), q_previous_(NULL), q_nsec3_(NULL), + q_prevnsec3_(NULL) */ + {} + sqlite3* db_; + int version_; + sqlite3_stmt* q_zone_; + /* + TODO: Yet unneeded statements + sqlite3_stmt* q_record_; + sqlite3_stmt* q_addrs_; + sqlite3_stmt* q_referral_; + sqlite3_stmt* q_any_; + sqlite3_stmt* q_count_; + sqlite3_stmt* q_previous_; + sqlite3_stmt* q_nsec3_; + sqlite3_stmt* q_prevnsec3_; + */ +}; + +SQLite3Connection::SQLite3Connection(const isc::data::ConstElementPtr& + config) : + dbparameters_(new SQLite3Parameters) +{ + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_NEWCONN); + + if (config && config->contains("database_file")) { + open(config->get("database_file")->stringValue()); + } else { + isc_throw(DataSourceError, "No SQLite database file specified"); + } +} + +namespace { + +// This is a helper class to initialize a Sqlite3 DB safely. An object of +// this class encapsulates all temporary resources that are necessary for +// the initialization, and release them in the destructor. Once everything +// is properly initialized, the move() method moves the allocated resources +// to the main object in an exception free manner. This way, the main code +// for the initialization can be exception safe, and can provide the strong +// exception guarantee. 
+class Initializer { +public: + ~Initializer() { + if (params_.q_zone_ != NULL) { + sqlite3_finalize(params_.q_zone_); + } + /* + if (params_.q_record_ != NULL) { + sqlite3_finalize(params_.q_record_); + } + if (params_.q_addrs_ != NULL) { + sqlite3_finalize(params_.q_addrs_); + } + if (params_.q_referral_ != NULL) { + sqlite3_finalize(params_.q_referral_); + } + if (params_.q_any_ != NULL) { + sqlite3_finalize(params_.q_any_); + } + if (params_.q_count_ != NULL) { + sqlite3_finalize(params_.q_count_); + } + if (params_.q_previous_ != NULL) { + sqlite3_finalize(params_.q_previous_); + } + if (params_.q_nsec3_ != NULL) { + sqlite3_finalize(params_.q_nsec3_); + } + if (params_.q_prevnsec3_ != NULL) { + sqlite3_finalize(params_.q_prevnsec3_); + } + */ + if (params_.db_ != NULL) { + sqlite3_close(params_.db_); + } + } + void move(SQLite3Parameters* dst) { + *dst = params_; + params_ = SQLite3Parameters(); // clear everything + } + SQLite3Parameters params_; +}; + +const char* const SCHEMA_LIST[] = { + "CREATE TABLE schema_version (version INTEGER NOT NULL)", + "INSERT INTO schema_version VALUES (1)", + "CREATE TABLE zones (id INTEGER PRIMARY KEY, " + "name STRING NOT NULL COLLATE NOCASE, " + "rdclass STRING NOT NULL COLLATE NOCASE DEFAULT 'IN', " + "dnssec BOOLEAN NOT NULL DEFAULT 0)", + "CREATE INDEX zones_byname ON zones (name)", + "CREATE TABLE records (id INTEGER PRIMARY KEY, " + "zone_id INTEGER NOT NULL, name STRING NOT NULL COLLATE NOCASE, " + "rname STRING NOT NULL COLLATE NOCASE, ttl INTEGER NOT NULL, " + "rdtype STRING NOT NULL COLLATE NOCASE, sigtype STRING COLLATE NOCASE, " + "rdata STRING NOT NULL)", + "CREATE INDEX records_byname ON records (name)", + "CREATE INDEX records_byrname ON records (rname)", + "CREATE TABLE nsec3 (id INTEGER PRIMARY KEY, zone_id INTEGER NOT NULL, " + "hash STRING NOT NULL COLLATE NOCASE, " + "owner STRING NOT NULL COLLATE NOCASE, " + "ttl INTEGER NOT NULL, rdtype STRING NOT NULL COLLATE NOCASE, " + "rdata STRING NOT NULL)", + 
"CREATE INDEX nsec3_byhash ON nsec3 (hash)", + NULL +}; + +const char* const q_zone_str = "SELECT id FROM zones WHERE name=?1"; + +/* TODO: Prune the statements, not everything will be needed maybe? +const char* const q_record_str = "SELECT rdtype, ttl, sigtype, rdata " + "FROM records WHERE zone_id=?1 AND name=?2 AND " + "((rdtype=?3 OR sigtype=?3) OR " + "(rdtype='CNAME' OR sigtype='CNAME') OR " + "(rdtype='NS' OR sigtype='NS'))"; + +const char* const q_addrs_str = "SELECT rdtype, ttl, sigtype, rdata " + "FROM records WHERE zone_id=?1 AND name=?2 AND " + "(rdtype='A' OR sigtype='A' OR rdtype='AAAA' OR sigtype='AAAA')"; + +const char* const q_referral_str = "SELECT rdtype, ttl, sigtype, rdata FROM " + "records WHERE zone_id=?1 AND name=?2 AND" + "(rdtype='NS' OR sigtype='NS' OR rdtype='DS' OR sigtype='DS' OR " + "rdtype='DNAME' OR sigtype='DNAME')"; + +const char* const q_any_str = "SELECT rdtype, ttl, sigtype, rdata " + "FROM records WHERE zone_id=?1 AND name=?2"; + +const char* const q_count_str = "SELECT COUNT(*) FROM records " + "WHERE zone_id=?1 AND rname LIKE (?2 || '%');"; + +const char* const q_previous_str = "SELECT name FROM records " + "WHERE zone_id=?1 AND rdtype = 'NSEC' AND " + "rname < $2 ORDER BY rname DESC LIMIT 1"; + +const char* const q_nsec3_str = "SELECT rdtype, ttl, rdata FROM nsec3 " + "WHERE zone_id = ?1 AND hash = $2"; + +const char* const q_prevnsec3_str = "SELECT hash FROM nsec3 " + "WHERE zone_id = ?1 AND hash <= $2 ORDER BY hash DESC LIMIT 1"; + */ + +sqlite3_stmt* +prepare(sqlite3* const db, const char* const statement) { + sqlite3_stmt* prepared = NULL; + if (sqlite3_prepare_v2(db, statement, -1, &prepared, NULL) != SQLITE_OK) { + isc_throw(SQLite3Error, "Could not prepare SQLite statement: " << + statement); + } + return (prepared); +} + +void +checkAndSetupSchema(Initializer* initializer) { + sqlite3* const db = initializer->params_.db_; + + sqlite3_stmt* prepared = NULL; + if (sqlite3_prepare_v2(db, "SELECT version FROM 
schema_version", -1, + &prepared, NULL) == SQLITE_OK && + sqlite3_step(prepared) == SQLITE_ROW) { + initializer->params_.version_ = sqlite3_column_int(prepared, 0); + sqlite3_finalize(prepared); + } else { + logger.info(DATASRC_SQLITE_SETUP); + if (prepared != NULL) { + sqlite3_finalize(prepared); + } + for (int i = 0; SCHEMA_LIST[i] != NULL; ++i) { + if (sqlite3_exec(db, SCHEMA_LIST[i], NULL, NULL, NULL) != + SQLITE_OK) { + isc_throw(SQLite3Error, + "Failed to set up schema " << SCHEMA_LIST[i]); + } + } + } + + initializer->params_.q_zone_ = prepare(db, q_zone_str); + /* TODO: Yet unneeded statements + initializer->params_.q_record_ = prepare(db, q_record_str); + initializer->params_.q_addrs_ = prepare(db, q_addrs_str); + initializer->params_.q_referral_ = prepare(db, q_referral_str); + initializer->params_.q_any_ = prepare(db, q_any_str); + initializer->params_.q_count_ = prepare(db, q_count_str); + initializer->params_.q_previous_ = prepare(db, q_previous_str); + initializer->params_.q_nsec3_ = prepare(db, q_nsec3_str); + initializer->params_.q_prevnsec3_ = prepare(db, q_prevnsec3_str); + */ +} + +} + +void +SQLite3Connection::open(const std::string& name) { + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_CONNOPEN).arg(name); + if (dbparameters_->db_ != NULL) { + // There shouldn't be a way to trigger this anyway + isc_throw(DataSourceError, "Duplicate SQLite open with " << name); + } + + Initializer initializer; + + if (sqlite3_open(name.c_str(), &initializer.params_.db_) != 0) { + isc_throw(SQLite3Error, "Cannot open SQLite database file: " << name); + } + + checkAndSetupSchema(&initializer); + initializer.move(dbparameters_); +} + +SQLite3Connection::~ SQLite3Connection() { + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_DROPCONN); + if (dbparameters_->db_ != NULL) { + close(); + } + delete dbparameters_; +} + +void +SQLite3Connection::close(void) { + LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_CONNCLOSE); + if (dbparameters_->db_ == NULL) { + 
isc_throw(DataSourceError, + "SQLite data source is being closed before open"); + } + + // XXX: sqlite3_finalize() could fail. What should we do in that case? + sqlite3_finalize(dbparameters_->q_zone_); + dbparameters_->q_zone_ = NULL; + + /* TODO: Once they are needed or not, uncomment or drop + sqlite3_finalize(dbparameters->q_record_); + dbparameters->q_record_ = NULL; + + sqlite3_finalize(dbparameters->q_addrs_); + dbparameters->q_addrs_ = NULL; + + sqlite3_finalize(dbparameters->q_referral_); + dbparameters->q_referral_ = NULL; + + sqlite3_finalize(dbparameters->q_any_); + dbparameters->q_any_ = NULL; + + sqlite3_finalize(dbparameters->q_count_); + dbparameters->q_count_ = NULL; + + sqlite3_finalize(dbparameters->q_previous_); + dbparameters->q_previous_ = NULL; + + sqlite3_finalize(dbparameters->q_prevnsec3_); + dbparameters->q_prevnsec3_ = NULL; + + sqlite3_finalize(dbparameters->q_nsec3_); + dbparameters->q_nsec3_ = NULL; + */ + + sqlite3_close(dbparameters_->db_); + dbparameters_->db_ = NULL; +} + +std::pair +SQLite3Connection::getZone(const isc::dns::Name&) const { + return std::pair(false, 0); +} + +} +} diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index e18386cd7d..62d42e1156 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_connection.h @@ -18,23 +18,31 @@ #include -// TODO Once the whole SQLite3 thing is ported here, move the Sqlite3Error -// here and remove the header file. -#include +#include +#include + +#include namespace isc { namespace datasrc { +class SQLite3Error : public Exception { +public: + SQLite3Error(const char* file, size_t line, const char* what) : + isc::Exception(file, line, what) {} +}; + +struct SQLite3Parameters; + class SQLite3Connection : public DatabaseConnection { public: - // TODO Should we simplify this as well and just pass config to the - // constructor and be done? 
(whenever the config would change, we would - // recreate new connections) - Result init() { return (init(isc::data::ElementPtr())); } - Result init(const isc::data::ConstElementPtr& config); - Result close(); - + SQLite3Connection(const isc::data::ConstElementPtr& config); + ~ SQLite3Connection(); virtual std::pair getZone(const isc::dns::Name& name) const; +private: + SQLite3Parameters* dbparameters_; + void open(const std::string& filename); + void close(); }; } diff --git a/src/lib/datasrc/tests/Makefile.am b/src/lib/datasrc/tests/Makefile.am index 9cfd0d8c29..c2e2b5caad 100644 --- a/src/lib/datasrc/tests/Makefile.am +++ b/src/lib/datasrc/tests/Makefile.am @@ -29,6 +29,7 @@ run_unittests_SOURCES += zonetable_unittest.cc run_unittests_SOURCES += memory_datasrc_unittest.cc run_unittests_SOURCES += logger_unittest.cc run_unittests_SOURCES += database_unittest.cc +run_unittests_SOURCES += sqlite3_connection_unittest.cc run_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) run_unittests_LDFLAGS = $(AM_LDFLAGS) $(GTEST_LDFLAGS) diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc new file mode 100644 index 0000000000..b88e986cac --- /dev/null +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -0,0 +1,70 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include + +#include + +using namespace isc::datasrc; +using isc::data::ConstElementPtr; +using isc::data::Element; + +namespace { +// Some test data +ConstElementPtr SQLITE_DBFILE_EXAMPLE = Element::fromJSON( + "{ \"database_file\": \"" TEST_DATA_DIR "/test.sqlite3\"}"); +ConstElementPtr SQLITE_DBFILE_EXAMPLE2 = Element::fromJSON( + "{ \"database_file\": \"" TEST_DATA_DIR "/example2.com.sqlite3\"}"); +ConstElementPtr SQLITE_DBFILE_EXAMPLE_ROOT = Element::fromJSON( + "{ \"database_file\": \"" TEST_DATA_DIR "/test-root.sqlite3\"}"); +ConstElementPtr SQLITE_DBFILE_BROKENDB = Element::fromJSON( + "{ \"database_file\": \"" TEST_DATA_DIR "/brokendb.sqlite3\"}"); +ConstElementPtr SQLITE_DBFILE_MEMORY = Element::fromJSON( + "{ \"database_file\": \":memory:\"}"); + +// The following file must be non existent and must be non"creatable"; +// the sqlite3 library will try to create a new DB file if it doesn't exist, +// so to test a failure case the create operation should also fail. +// The "nodir", a non existent directory, is inserted for this purpose. 
+ConstElementPtr SQLITE_DBFILE_NOTEXIST = Element::fromJSON( + "{ \"database_file\": \"" TEST_DATA_DIR "/nodir/notexist\"}"); + +// Opening works (the content is tested in different tests) +TEST(SQLite3Open, common) { + EXPECT_NO_THROW(SQLite3Connection conn(SQLITE_DBFILE_EXAMPLE)); +} + +// Missing config +TEST(SQLite3Open, noConfig) { + EXPECT_THROW(SQLite3Connection conn(Element::fromJSON("{}")), + DataSourceError); +} + +// The file can't be opened +TEST(SQLite3Open, notExist) { + EXPECT_THROW(SQLite3Connection conn(SQLITE_DBFILE_NOTEXIST), SQLite3Error); +} + +// It rejects broken DB +TEST(SQLite3Open, brokenDB) { + EXPECT_THROW(SQLite3Connection conn(SQLITE_DBFILE_BROKENDB), SQLite3Error); +} + +// Test we can create the schema on the fly +TEST(SQLite3Open, memoryDB) { + EXPECT_NO_THROW(SQLite3Connection conn(SQLITE_DBFILE_MEMORY)); +} + +} From 608d45610e9f499fb43d2e52eba461d489a7d45f Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Wed, 3 Aug 2011 10:58:47 +0200 Subject: [PATCH 029/175] [trac1061] Tests for SQLite3Connection::getZone These are not copied, as this method was not public in the original implementation. The constructor got a new parameter, the RR class, so it knows what to query from the DB. 
--- src/lib/datasrc/sqlite3_connection.cc | 6 +- src/lib/datasrc/sqlite3_connection.h | 8 ++- .../tests/sqlite3_connection_unittest.cc | 56 +++++++++++++++++-- 3 files changed, 62 insertions(+), 8 deletions(-) diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index 6dd8319879..1e39d3edfd 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -45,8 +45,10 @@ struct SQLite3Parameters { }; SQLite3Connection::SQLite3Connection(const isc::data::ConstElementPtr& - config) : - dbparameters_(new SQLite3Parameters) + config, + const isc::dns::RRClass& rrclass) : + dbparameters_(new SQLite3Parameters), + class_(rrclass.toText()) { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_NEWCONN); diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index 62d42e1156..fbb1667d35 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_connection.h @@ -24,6 +24,10 @@ #include namespace isc { +namespace dns { +class RRClass; +} + namespace datasrc { class SQLite3Error : public Exception { @@ -36,11 +40,13 @@ struct SQLite3Parameters; class SQLite3Connection : public DatabaseConnection { public: - SQLite3Connection(const isc::data::ConstElementPtr& config); + SQLite3Connection(const isc::data::ConstElementPtr& config, + const isc::dns::RRClass& rrclass); ~ SQLite3Connection(); virtual std::pair getZone(const isc::dns::Name& name) const; private: SQLite3Parameters* dbparameters_; + std::string class_; void open(const std::string& filename); void close(); }; diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc index b88e986cac..3065dfe9fb 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -15,11 +15,15 @@ #include #include +#include + #include using namespace isc::datasrc; using isc::data::ConstElementPtr; 
using isc::data::Element; +using isc::dns::RRClass; +using isc::dns::Name; namespace { // Some test data @@ -43,28 +47,70 @@ ConstElementPtr SQLITE_DBFILE_NOTEXIST = Element::fromJSON( // Opening works (the content is tested in different tests) TEST(SQLite3Open, common) { - EXPECT_NO_THROW(SQLite3Connection conn(SQLITE_DBFILE_EXAMPLE)); + EXPECT_NO_THROW(SQLite3Connection conn(SQLITE_DBFILE_EXAMPLE, + RRClass::IN())); } // Missing config TEST(SQLite3Open, noConfig) { - EXPECT_THROW(SQLite3Connection conn(Element::fromJSON("{}")), + EXPECT_THROW(SQLite3Connection conn(Element::fromJSON("{}"), + RRClass::IN()), DataSourceError); } // The file can't be opened TEST(SQLite3Open, notExist) { - EXPECT_THROW(SQLite3Connection conn(SQLITE_DBFILE_NOTEXIST), SQLite3Error); + EXPECT_THROW(SQLite3Connection conn(SQLITE_DBFILE_NOTEXIST, + RRClass::IN()), SQLite3Error); } // It rejects broken DB TEST(SQLite3Open, brokenDB) { - EXPECT_THROW(SQLite3Connection conn(SQLITE_DBFILE_BROKENDB), SQLite3Error); + EXPECT_THROW(SQLite3Connection conn(SQLITE_DBFILE_BROKENDB, + RRClass::IN()), SQLite3Error); } // Test we can create the schema on the fly TEST(SQLite3Open, memoryDB) { - EXPECT_NO_THROW(SQLite3Connection conn(SQLITE_DBFILE_MEMORY)); + EXPECT_NO_THROW(SQLite3Connection conn(SQLITE_DBFILE_MEMORY, + RRClass::IN())); +} + +// Test fixture for querying the connection +class SQLite3Conn : public ::testing::Test { +public: + SQLite3Conn() { + initConn(SQLITE_DBFILE_EXAMPLE, RRClass::IN()); + } + // So it can be re-created with different data + void initConn(const ConstElementPtr& config, const RRClass& rrclass) { + conn.reset(new SQLite3Connection(config, rrclass)); + } + // The tested connection + std::auto_ptr conn; +}; + +// This zone exists in the data, so it should be found +TEST_F(SQLite3Conn, getZone) { + std::pair result(conn->getZone(Name("example.com"))); + EXPECT_TRUE(result.first); + EXPECT_EQ(1, result.second); +} + +// But it should find only the zone, nothing below it 
+TEST_F(SQLite3Conn, subZone) { + EXPECT_FALSE(conn->getZone(Name("sub.example.com")).first); +} + +// This zone is not there at all +TEST_F(SQLite3Conn, noZone) { + EXPECT_FALSE(conn->getZone(Name("example.org")).first); +} + +// This zone is there, but in different class +TEST_F(SQLite3Conn, noClass) { + initConn(SQLITE_DBFILE_EXAMPLE, RRClass::CH()); + EXPECT_FALSE(conn->getZone(Name("example.com")).first); } } From 823e0fcf308c7f3fc88ba48070e12bd995e75392 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Wed, 3 Aug 2011 11:14:41 +0200 Subject: [PATCH 030/175] [trac1061] Implement SQLite3Connection::getZone --- src/lib/datasrc/sqlite3_connection.cc | 32 ++++++++++++++++++++++++--- src/lib/datasrc/sqlite3_connection.h | 2 +- 2 files changed, 30 insertions(+), 4 deletions(-) diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index 1e39d3edfd..e850db4250 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -135,7 +135,7 @@ const char* const SCHEMA_LIST[] = { NULL }; -const char* const q_zone_str = "SELECT id FROM zones WHERE name=?1"; +const char* const q_zone_str = "SELECT id FROM zones WHERE name=?1 AND rdclass = ?2"; /* TODO: Prune the statements, not everything will be needed maybe? 
const char* const q_record_str = "SELECT rdtype, ttl, sigtype, rdata " @@ -288,8 +288,34 @@ SQLite3Connection::close(void) { } std::pair -SQLite3Connection::getZone(const isc::dns::Name&) const { - return std::pair(false, 0); +SQLite3Connection::getZone(const isc::dns::Name& name) const { + int rc; + + sqlite3_reset(dbparameters_->q_zone_); + rc = sqlite3_bind_text(dbparameters_->q_zone_, 1, name.toText().c_str(), + -1, SQLITE_STATIC); + if (rc != SQLITE_OK) { + isc_throw(SQLite3Error, "Could not bind " << name << + " to SQL statement (zone)"); + } + rc = sqlite3_bind_text(dbparameters_->q_zone_, 2, class_.c_str(), -1, + SQLITE_STATIC); + if (rc != SQLITE_OK) { + isc_throw(SQLite3Error, "Could not bind " << class_ << + " to SQL statement (zone)"); + } + + rc = sqlite3_step(dbparameters_->q_zone_); + std::pair result; + if (rc == SQLITE_ROW) { + result = std::pair(true, + sqlite3_column_int(dbparameters_-> + q_zone_, 0)); + } else { + result = std::pair(false, 0); + } + sqlite3_reset(dbparameters_->q_zone_); + return (result); } } diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index fbb1667d35..86ad9c386b 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_connection.h @@ -46,7 +46,7 @@ public: virtual std::pair getZone(const isc::dns::Name& name) const; private: SQLite3Parameters* dbparameters_; - std::string class_; + const std::string class_; void open(const std::string& filename); void close(); }; From e47f04584b00f6d7b5c8bf9e8ae6af9aaa6831fd Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Wed, 3 Aug 2011 11:29:52 +0200 Subject: [PATCH 031/175] [trac1061] Doxygen comments for SQLite3Connection --- src/lib/datasrc/sqlite3_connection.h | 54 ++++++++++++++++++++++++++++ 1 file changed, 54 insertions(+) diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index 86ad9c386b..266dd05ea6 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ 
b/src/lib/datasrc/sqlite3_connection.h @@ -30,6 +30,13 @@ class RRClass; namespace datasrc { +/** + * \brief Low-level database error + * + * This exception is thrown when the SQLite library complains about something. + * It might mean corrupt database file, invalid request or that something is + * rotten in the library. + */ class SQLite3Error : public Exception { public: SQLite3Error(const char* file, size_t line, const char* what) : @@ -38,16 +45,63 @@ public: struct SQLite3Parameters; +/** + * \brief Concrete implementation of DatabaseConnection for SQLite3 databases + * + * This opens one database file with our schema and serves data from there. + * According to the design, it doesn't interpret the data in any way, it just + * provides unified access to the DB. + */ class SQLite3Connection : public DatabaseConnection { public: + /** + * \brief Constructor + * + * This opens the database and becomes ready to serve data from there. + * + * This might throw SQLite3Error if the given database file doesn't work + * (it is broken, doesn't exist and can't be created, etc). It might throw + * DataSourceError if the provided config is invalid (it is missing the + * database_file element). + * + * \param config The part of config describing which database file should + * be used. + * \param rrclass Which class of data it should serve (while the database + * can contain multiple classes of data, single connection can provide + * only one class). + * \todo Should we pass the database filename instead of the config? It + * might be cleaner if this class doesn't know anything about configs. + */ SQLite3Connection(const isc::data::ConstElementPtr& config, const isc::dns::RRClass& rrclass); + /** + * \brief Destructor + * + * Closes the database. + */ ~ SQLite3Connection(); + /** + * \brief Look up a zone + * + * This implements the getZone from DatabaseConnection and looks up a zone + * in the data. 
It looks for a zone with the exact given origin and class + * passed to the constructor. + * + * It may throw SQLite3Error if something about the database is broken. + * + * \param name The name of zone to look up + * \return The pair contains if the lookup was successful in the first + * element and the zone id in the second if it was. + */ virtual std::pair getZone(const isc::dns::Name& name) const; private: + /// \brief Private database data SQLite3Parameters* dbparameters_; + /// \brief The class for which the queries are done const std::string class_; + /// \brief Opens the database void open(const std::string& filename); + /// \brief Closes the database void close(); }; From 3702df52de21023d90052afdc54732d9ad285b39 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Wed, 3 Aug 2011 11:36:47 +0200 Subject: [PATCH 032/175] [trac1061] Logging descriptions --- src/lib/datasrc/datasrc_messages.mes | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/src/lib/datasrc/datasrc_messages.mes b/src/lib/datasrc/datasrc_messages.mes index a6f783785c..3fbb24d05d 100644 --- a/src/lib/datasrc/datasrc_messages.mes +++ b/src/lib/datasrc/datasrc_messages.mes @@ -400,12 +400,22 @@ enough information for it. The code is 1 for error, 2 for not implemented. % DATASRC_SQLITE_CLOSE closing SQLite database Debug information. The SQLite data source is closing the database file. + +% DATASRC_SQLITE_CONNOPEN Opening sqlite database file '%1' +The database file is being opened so it can start providing data. + +% DATASRC_SQLITE_CONNCLOSE Closing sqlite database +The database file is no longer needed and is being closed. + % DATASRC_SQLITE_CREATE SQLite data source created Debug information. An instance of SQLite data source is being created. % DATASRC_SQLITE_DESTROY SQLite data source destroyed Debug information. An instance of SQLite data source is being destroyed. 
+% DATASRC_SQLITE_DROPCONN SQLite3Connection is being deinitialized +The object around a database connection is being destroyed. + % DATASRC_SQLITE_ENCLOSURE looking for zone containing '%1' Debug information. The SQLite data source is trying to identify which zone should hold this domain. @@ -458,6 +468,9 @@ source. The SQLite data source was asked to provide a NSEC3 record for given zone. But it doesn't contain that zone. +% DATASRC_SQLITE_NEWCONN SQLite3Connection is being initialized +A wrapper object to hold database connection is being initialized. + % DATASRC_SQLITE_OPEN opening SQLite database '%1' Debug information. The SQLite data source is loading an SQLite database in the provided file. @@ -496,11 +509,3 @@ data source. % DATASRC_UNEXPECTED_QUERY_STATE unexpected query state This indicates a programming error. An internal task of unknown type was generated. - -% DATASRC_SQLITE_NEWCONN TODO - -% DATASRC_SQLITE_DROPCONN TODO - -% DATASRC_SQLITE_CONNOPEN TODO - -% DATASRC_SQLITE_CONNCLOSE TODO From fea1f88cd0bb5bdeefc6048b122da4328635163d Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Thu, 4 Aug 2011 17:18:54 +0200 Subject: [PATCH 033/175] [1061] Address review comments Mostly comments and cleanups, some simplification of interface and change from auto_ptr to shared_ptr. 
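The switch from `auto_ptr` to `shared_ptr` announced in this commit lets a `ZoneFinder` co-own the connection instead of holding a bare reference, which removes the lifetime restriction the old `\note` warned about. A minimal sketch of that ownership change, using `std::shared_ptr` and invented `Connection`/`Finder` stand-ins rather than the real classes:

```cpp
#include <cassert>
#include <memory>

// Simplified stand-ins to show why shared ownership matters: a finder
// keeps its connection alive even after the client that created it is gone.
struct Connection {
    int zone_id;
    Connection() : zone_id(42) {}
};

struct Finder {
    std::shared_ptr<Connection> conn;  // co-owns the connection
};

Finder makeFinder() {
    // The "client": sole owner until a finder is handed out.
    std::shared_ptr<Connection> client_conn = std::make_shared<Connection>();
    Finder f{client_conn};
    return f;  // client_conn goes out of scope; f still owns the Connection
}
```

With `auto_ptr` the finder could only store a raw reference, so using it after the client's destruction was undefined; with shared ownership the connection is destroyed only when the last finder drops it.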
--- src/lib/datasrc/database.cc | 10 ++++--- src/lib/datasrc/database.h | 24 +++++---------- src/lib/datasrc/sqlite3_connection.cc | 16 +++++----- src/lib/datasrc/sqlite3_connection.h | 18 ++++-------- src/lib/datasrc/tests/database_unittest.cc | 6 ++-- .../tests/sqlite3_connection_unittest.cc | 29 +++++-------------- 6 files changed, 38 insertions(+), 65 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 5fe9f7ba8e..2264f2c7ab 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -22,7 +22,8 @@ using isc::dns::Name; namespace isc { namespace datasrc { -DatabaseClient::DatabaseClient(std::auto_ptr connection) : +DatabaseClient::DatabaseClient(boost::shared_ptr + connection) : connection_(connection) { if (connection_.get() == NULL) { @@ -37,7 +38,7 @@ DatabaseClient::findZone(const Name& name) const { // Try exact first if (zone.first) { return (FindResult(result::SUCCESS, - ZoneFinderPtr(new Finder(*connection_, + ZoneFinderPtr(new Finder(connection_, zone.second)))); } // Than super domains @@ -46,7 +47,7 @@ DatabaseClient::findZone(const Name& name) const { zone = connection_->getZone(name.split(i)); if (zone.first) { return (FindResult(result::PARTIALMATCH, - ZoneFinderPtr(new Finder(*connection_, + ZoneFinderPtr(new Finder(connection_, zone.second)))); } } @@ -54,7 +55,8 @@ DatabaseClient::findZone(const Name& name) const { return (FindResult(result::NOTFOUND, ZoneFinderPtr())); } -DatabaseClient::Finder::Finder(DatabaseConnection& connection, int zone_id) : +DatabaseClient::Finder::Finder(boost::shared_ptr + connection, int zone_id) : connection_(connection), zone_id_(zone_id) { } diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index 5693479a77..8e5c1564e5 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -50,7 +50,7 @@ public: * It is empty, but needs a virtual one, since we will use the derived * classes in polymorphic way. 
*/ - virtual ~ DatabaseConnection() { } + virtual ~DatabaseConnection() { } /** * \brief Retrieve a zone identifier * @@ -94,24 +94,14 @@ public: * * It initializes the client with a connection. * - * It throws isc::InvalidParameter if connection is NULL. It might throw + * \exception isc::InvalidParameter if connection is NULL. It might throw * standard allocation exception as well, but doesn't throw anything else. * - * \note Some objects returned from methods of this class (like ZoneFinder) - * hold references to the connection. As the lifetime of the connection - * is bound to this object, the returned objects must not be used after - * descruction of the DatabaseClient. - * - * \todo Should we use shared_ptr instead? On one side, we would get rid of - * the restriction and maybe could easy up some shutdown scenarios with - * multi-threaded applications, on the other hand it is more expensive - * and looks generally unneeded. - * * \param connection The connection to use to get data. As the parameter * suggests, the client takes ownership of the connection and will * delete it when itself deleted. */ - DatabaseClient(std::auto_ptr connection); + DatabaseClient(boost::shared_ptr connection); /** * \brief Corresponding ZoneFinder implementation * @@ -138,7 +128,7 @@ public: * DatabaseConnection::getZone and which will be passed to further * calls to the connection. */ - Finder(DatabaseConnection& connection, int zone_id); + Finder(boost::shared_ptr connection, int zone_id); virtual isc::dns::Name getOrigin() const; virtual isc::dns::RRClass getClass() const; virtual FindResult find(const isc::dns::Name& name, @@ -162,10 +152,10 @@ public: * normal applications shouldn't need it. 
*/ const DatabaseConnection& connection() const { - return (connection_); + return (*connection_); } private: - DatabaseConnection& connection_; + boost::shared_ptr connection_; const int zone_id_; }; /** @@ -183,7 +173,7 @@ public: virtual FindResult findZone(const isc::dns::Name& name) const; private: /// \brief Our connection. - const std::auto_ptr connection_; + const boost::shared_ptr connection_; }; } diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index e850db4250..35db44620d 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -44,19 +44,14 @@ struct SQLite3Parameters { */ }; -SQLite3Connection::SQLite3Connection(const isc::data::ConstElementPtr& - config, +SQLite3Connection::SQLite3Connection(const std::string& filename, const isc::dns::RRClass& rrclass) : dbparameters_(new SQLite3Parameters), class_(rrclass.toText()) { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_NEWCONN); - if (config && config->contains("database_file")) { - open(config->get("database_file")->stringValue()); - } else { - isc_throw(DataSourceError, "No SQLite database file specified"); - } + open(filename); } namespace { @@ -237,7 +232,7 @@ SQLite3Connection::open(const std::string& name) { initializer.move(dbparameters_); } -SQLite3Connection::~ SQLite3Connection() { +SQLite3Connection::~SQLite3Connection() { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_DROPCONN); if (dbparameters_->db_ != NULL) { close(); @@ -291,6 +286,8 @@ std::pair SQLite3Connection::getZone(const isc::dns::Name& name) const { int rc; + // Take the statement (simple SELECT id FROM zones WHERE...) 
+ // and prepare it (bind the parameters to it) sqlite3_reset(dbparameters_->q_zone_); rc = sqlite3_bind_text(dbparameters_->q_zone_, 1, name.toText().c_str(), -1, SQLITE_STATIC); @@ -305,6 +302,7 @@ SQLite3Connection::getZone(const isc::dns::Name& name) const { " to SQL statement (zone)"); } + // Get the data there and see if it found anything rc = sqlite3_step(dbparameters_->q_zone_); std::pair result; if (rc == SQLITE_ROW) { @@ -314,7 +312,9 @@ SQLite3Connection::getZone(const isc::dns::Name& name) const { } else { result = std::pair(false, 0); } + // Free resources sqlite3_reset(dbparameters_->q_zone_); + return (result); } diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index 266dd05ea6..484571599e 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_connection.h @@ -19,7 +19,6 @@ #include #include -#include #include @@ -59,27 +58,22 @@ public: * * This opens the database and becomes ready to serve data from there. * - * This might throw SQLite3Error if the given database file doesn't work - * (it is broken, doesn't exist and can't be created, etc). It might throw - * DataSourceError if the provided config is invalid (it is missing the - * database_file element). + * \exception SQLite3Error will be thrown if the given database file + * doesn't work (it is broken, doesn't exist and can't be created, etc). * - * \param config The part of config describing which database file should - * be used. + * \param filename The database file to be used. * \param rrclass Which class of data it should serve (while the database * can contain multiple classes of data, single connection can provide * only one class). - * \todo Should we pass the database filename instead of the config? It - * might be cleaner if this class doesn't know anything about configs. 
*/ - SQLite3Connection(const isc::data::ConstElementPtr& config, + SQLite3Connection(const std::string& filename, const isc::dns::RRClass& rrclass); /** * \brief Destructor * * Closes the database. */ - ~ SQLite3Connection(); + ~SQLite3Connection(); /** * \brief Look up a zone * @@ -87,7 +81,7 @@ public: * in the data. It looks for a zone with the exact given origin and class * passed to the constructor. * - * It may throw SQLite3Error if something about the database is broken. + * \exception SQLite3Error if something about the database is broken. * * \param name The name of zone to look up * \return The pair contains if the lookup was successful in the first diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index b60d5c0ced..c271a76dc8 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -52,12 +52,12 @@ public: */ void createClient() { current_connection_ = new MockConnection(); - client_.reset(new DatabaseClient(auto_ptr( + client_.reset(new DatabaseClient(shared_ptr( current_connection_))); } // Will be deleted by client_, just keep the current value for comparison. MockConnection* current_connection_; - auto_ptr client_; + shared_ptr client_; /** * Check the zone finder is a valid one and references the zone ID and * connection available here. 
@@ -92,7 +92,7 @@ TEST_F(DatabaseClientTest, superZone) { } TEST_F(DatabaseClientTest, noConnException) { - EXPECT_THROW(DatabaseClient(auto_ptr()), + EXPECT_THROW(DatabaseClient(shared_ptr()), isc::InvalidParameter); } diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc index 3065dfe9fb..6d2a945a7f 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -27,23 +27,17 @@ using isc::dns::Name; namespace { // Some test data -ConstElementPtr SQLITE_DBFILE_EXAMPLE = Element::fromJSON( - "{ \"database_file\": \"" TEST_DATA_DIR "/test.sqlite3\"}"); -ConstElementPtr SQLITE_DBFILE_EXAMPLE2 = Element::fromJSON( - "{ \"database_file\": \"" TEST_DATA_DIR "/example2.com.sqlite3\"}"); -ConstElementPtr SQLITE_DBFILE_EXAMPLE_ROOT = Element::fromJSON( - "{ \"database_file\": \"" TEST_DATA_DIR "/test-root.sqlite3\"}"); -ConstElementPtr SQLITE_DBFILE_BROKENDB = Element::fromJSON( - "{ \"database_file\": \"" TEST_DATA_DIR "/brokendb.sqlite3\"}"); -ConstElementPtr SQLITE_DBFILE_MEMORY = Element::fromJSON( - "{ \"database_file\": \":memory:\"}"); +std::string SQLITE_DBFILE_EXAMPLE = TEST_DATA_DIR "/test.sqlite3"; +std::string SQLITE_DBFILE_EXAMPLE2 = TEST_DATA_DIR "/example2.com.sqlite3"; +std::string SQLITE_DBFILE_EXAMPLE_ROOT = TEST_DATA_DIR "/test-root.sqlite3"; +std::string SQLITE_DBFILE_BROKENDB = TEST_DATA_DIR "/brokendb.sqlite3"; +std::string SQLITE_DBFILE_MEMORY = "memory"; // The following file must be non existent and must be non"creatable"; // the sqlite3 library will try to create a new DB file if it doesn't exist, // so to test a failure case the create operation should also fail. // The "nodir", a non existent directory, is inserted for this purpose. 
-ConstElementPtr SQLITE_DBFILE_NOTEXIST = Element::fromJSON( - "{ \"database_file\": \"" TEST_DATA_DIR "/nodir/notexist\"}"); +std::string SQLITE_DBFILE_NOTEXIST = TEST_DATA_DIR "/nodir/notexist"; // Opening works (the content is tested in different tests) TEST(SQLite3Open, common) { @@ -51,13 +45,6 @@ TEST(SQLite3Open, common) { RRClass::IN())); } -// Missing config -TEST(SQLite3Open, noConfig) { - EXPECT_THROW(SQLite3Connection conn(Element::fromJSON("{}"), - RRClass::IN()), - DataSourceError); -} - // The file can't be opened TEST(SQLite3Open, notExist) { EXPECT_THROW(SQLite3Connection conn(SQLITE_DBFILE_NOTEXIST, @@ -83,8 +70,8 @@ public: initConn(SQLITE_DBFILE_EXAMPLE, RRClass::IN()); } // So it can be re-created with different data - void initConn(const ConstElementPtr& config, const RRClass& rrclass) { - conn.reset(new SQLite3Connection(config, rrclass)); + void initConn(const std::string& filename, const RRClass& rrclass) { + conn.reset(new SQLite3Connection(filename, rrclass)); } // The tested connection std::auto_ptr conn; From 885d7987eefb0b8b694626b0831ed93123fb8d8d Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Fri, 5 Aug 2011 15:06:29 +0200 Subject: [PATCH 034/175] [trac1061] fix name for in-memory sqlite db --- src/lib/datasrc/tests/sqlite3_connection_unittest.cc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc index 6d2a945a7f..1bdbe90206 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -31,7 +31,7 @@ std::string SQLITE_DBFILE_EXAMPLE = TEST_DATA_DIR "/test.sqlite3"; std::string SQLITE_DBFILE_EXAMPLE2 = TEST_DATA_DIR "/example2.com.sqlite3"; std::string SQLITE_DBFILE_EXAMPLE_ROOT = TEST_DATA_DIR "/test-root.sqlite3"; std::string SQLITE_DBFILE_BROKENDB = TEST_DATA_DIR "/brokendb.sqlite3"; -std::string SQLITE_DBFILE_MEMORY = "memory"; +std::string 
SQLITE_DBFILE_MEMORY = ":memory:"; // The following file must be non existent and must be non"creatable"; // the sqlite3 library will try to create a new DB file if it doesn't exist, From d23cde8c4285cf55b007b300123c41fa852d38d9 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Wed, 3 Aug 2011 14:51:51 +0200 Subject: [PATCH 035/175] [trac1062] initial addition of searchForRecords and getNextRecord --- src/lib/datasrc/database.h | 23 +++++++ src/lib/datasrc/sqlite3_connection.cc | 73 ++++++++++++++++++---- src/lib/datasrc/sqlite3_connection.h | 2 + src/lib/datasrc/tests/database_unittest.cc | 2 + 4 files changed, 87 insertions(+), 13 deletions(-) diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index 8e5c1564e5..2ed9cd5f51 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -71,6 +71,29 @@ public: * an opaque handle. */ virtual std::pair getZone(const isc::dns::Name& name) const = 0; + + /** + * \brief Starts a new search for records of the given name in the given zone + * + * \param zone_id The zone to search in, as returned by getZone() + * \param name The name of the records to find + */ + virtual void searchForRecords(int zone_id, const std::string& name) const = 0; + + /** + * \brief Retrieves the next record from the search started with searchForRecords() + * + * Returns a boolean specifying whether or not there was more data to read. + * In the case of a database error, a DatasourceError is thrown. + * + * \exception DatasourceError if there was an error reading from the database + * + * \param columns This vector will be cleared, and the fields of the record will + * be appended here as strings (in the order rdtype, ttl, sigtype, + * and rdata). If there was no data, the vector is untouched. 
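The `searchForRecords()`/`getNextRecord()` pair documented here is a cursor-style interface: position the cursor once, then pull four-column rows (rdtype, ttl, sigtype, rdata) until the call returns false. A sketch of that contract against an invented in-memory backend — `VectorConnection` is illustrative only, not part of the patch:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal in-memory model of the cursor contract.
class VectorConnection {
public:
    typedef std::vector<std::string> Row;
    void addRow(const std::string& rdtype, const std::string& ttl,
                const std::string& sigtype, const std::string& rdata) {
        Row r;
        r.push_back(rdtype);
        r.push_back(ttl);
        r.push_back(sigtype);
        r.push_back(rdata);
        rows_.push_back(r);
    }
    // Reset the cursor; a real backend would also bind zone_id and name.
    void searchForRecords() { cur_ = 0; }
    // Fill the columns for the next row; false once exhausted.
    bool getNextRecord(Row& columns) {
        if (cur_ >= rows_.size()) {
            return false;  // end of matching rows
        }
        columns = rows_[cur_++];
        return true;
    }
private:
    std::vector<Row> rows_;
    size_t cur_ = 0;
};

// Count rows the way a caller would drain the cursor.
size_t countRecords(VectorConnection& conn) {
    conn.searchForRecords();
    VectorConnection::Row columns;
    size_t count = 0;
    while (conn.getNextRecord(columns)) {
        ++count;
    }
    return count;
}
```

Because the cursor is reset by each `searchForRecords()` call, the search can be repeated; after the cursor is drained, further `getNextRecord()` calls keep returning false, matching the unit-test expectations later in the series.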
+ * \return true if there was a next record, false if there was not + */ + virtual bool getNextRecord(std::vector& columns) const = 0; }; /** diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index 35db44620d..cab92388ad 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -24,19 +24,20 @@ namespace datasrc { struct SQLite3Parameters { SQLite3Parameters() : db_(NULL), version_(-1), - q_zone_(NULL) /*, q_record_(NULL), q_addrs_(NULL), q_referral_(NULL), - q_any_(NULL), q_count_(NULL), q_previous_(NULL), q_nsec3_(NULL), + q_zone_(NULL), q_any_(NULL) + /*q_record_(NULL), q_addrs_(NULL), q_referral_(NULL), + q_count_(NULL), q_previous_(NULL), q_nsec3_(NULL), q_prevnsec3_(NULL) */ {} sqlite3* db_; int version_; sqlite3_stmt* q_zone_; + sqlite3_stmt* q_any_; /* TODO: Yet unneeded statements sqlite3_stmt* q_record_; sqlite3_stmt* q_addrs_; sqlite3_stmt* q_referral_; - sqlite3_stmt* q_any_; sqlite3_stmt* q_count_; sqlite3_stmt* q_previous_; sqlite3_stmt* q_nsec3_; @@ -69,6 +70,9 @@ public: if (params_.q_zone_ != NULL) { sqlite3_finalize(params_.q_zone_); } + if (params_.q_any_ != NULL) { + sqlite3_finalize(params_.q_any_); + } /* if (params_.q_record_ != NULL) { sqlite3_finalize(params_.q_record_); @@ -79,9 +83,6 @@ public: if (params_.q_referral_ != NULL) { sqlite3_finalize(params_.q_referral_); } - if (params_.q_any_ != NULL) { - sqlite3_finalize(params_.q_any_); - } if (params_.q_count_ != NULL) { sqlite3_finalize(params_.q_count_); } @@ -132,6 +133,9 @@ const char* const SCHEMA_LIST[] = { const char* const q_zone_str = "SELECT id FROM zones WHERE name=?1 AND rdclass = ?2"; +const char* const q_any_str = "SELECT rdtype, ttl, sigtype, rdata " + "FROM records WHERE zone_id=?1 AND name=?2"; + /* TODO: Prune the statements, not everything will be needed maybe? 
const char* const q_record_str = "SELECT rdtype, ttl, sigtype, rdata " "FROM records WHERE zone_id=?1 AND name=?2 AND " @@ -148,9 +152,6 @@ const char* const q_referral_str = "SELECT rdtype, ttl, sigtype, rdata FROM " "(rdtype='NS' OR sigtype='NS' OR rdtype='DS' OR sigtype='DS' OR " "rdtype='DNAME' OR sigtype='DNAME')"; -const char* const q_any_str = "SELECT rdtype, ttl, sigtype, rdata " - "FROM records WHERE zone_id=?1 AND name=?2"; - const char* const q_count_str = "SELECT COUNT(*) FROM records " "WHERE zone_id=?1 AND rname LIKE (?2 || '%');"; @@ -200,11 +201,11 @@ checkAndSetupSchema(Initializer* initializer) { } initializer->params_.q_zone_ = prepare(db, q_zone_str); + initializer->params_.q_any_ = prepare(db, q_any_str); /* TODO: Yet unneeded statements initializer->params_.q_record_ = prepare(db, q_record_str); initializer->params_.q_addrs_ = prepare(db, q_addrs_str); initializer->params_.q_referral_ = prepare(db, q_referral_str); - initializer->params_.q_any_ = prepare(db, q_any_str); initializer->params_.q_count_ = prepare(db, q_count_str); initializer->params_.q_previous_ = prepare(db, q_previous_str); initializer->params_.q_nsec3_ = prepare(db, q_nsec3_str); @@ -252,6 +253,9 @@ SQLite3Connection::close(void) { sqlite3_finalize(dbparameters_->q_zone_); dbparameters_->q_zone_ = NULL; + sqlite3_finalize(dbparameters_->q_any_); + dbparameters_->q_any_ = NULL; + /* TODO: Once they are needed or not, uncomment or drop sqlite3_finalize(dbparameters->q_record_); dbparameters->q_record_ = NULL; @@ -262,9 +266,6 @@ SQLite3Connection::close(void) { sqlite3_finalize(dbparameters->q_referral_); dbparameters->q_referral_ = NULL; - sqlite3_finalize(dbparameters->q_any_); - dbparameters->q_any_ = NULL; - sqlite3_finalize(dbparameters->q_count_); dbparameters->q_count_ = NULL; @@ -318,5 +319,51 @@ SQLite3Connection::getZone(const isc::dns::Name& name) const { return (result); } +void +SQLite3Connection::searchForRecords(int zone_id, const std::string& name) const { + 
sqlite3_reset(dbparameters_->q_any_); + sqlite3_clear_bindings(dbparameters_->q_any_); + sqlite3_bind_int(dbparameters_->q_any_, 1, zone_id); + // use transient since name is a ref and may disappear + sqlite3_bind_text(dbparameters_->q_any_, 2, name.c_str(), -1, + SQLITE_TRANSIENT); +}; + +namespace { +const char* +convertToPlainChar(const unsigned char* ucp) { + if (ucp == NULL) { + return (""); + } + const void* p = ucp; + return (static_cast(p)); +} +} + +bool +SQLite3Connection::getNextRecord(std::vector& columns) const { + sqlite3_stmt* current_stmt = dbparameters_->q_any_; + const int rc = sqlite3_step(current_stmt); + + if (rc == SQLITE_ROW) { + columns.clear(); + for (int column = 0; column < 4; ++column) { + columns.push_back(convertToPlainChar(sqlite3_column_text( + current_stmt, column))); + } + return true; + } else if (rc == SQLITE_DONE) { + // reached the end of matching rows + sqlite3_reset(current_stmt); + sqlite3_clear_bindings(current_stmt); + return false; + } + sqlite3_reset(current_stmt); + isc_throw(DataSourceError, "Unexpected failure in sqlite3_step"); + + // Compilers might not realize isc_throw always throws + return false; +} + } } diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index 484571599e..bb1a30f867 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_connection.h @@ -88,6 +88,8 @@ public: * element and the zone id in the second if it was. 
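`getZone()` returns a `(found, zone_id)` pair, and `DatabaseClient::findZone()` calls it first with the exact name and then with successive superdomains until something matches. A simplified sketch of that walk using plain strings instead of `isc::dns::Name` — the zone data inside `getZone()` here is invented:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Toy getZone(): returns (found, zone_id), like the real connection.
// Only "example.org" exists in this sketch.
std::pair<bool, int> getZone(const std::string& name) {
    if (name == "example.org") {
        return std::make_pair(true, 42);
    }
    return std::make_pair(false, 0);
}

// Mirror of the findZone() superdomain walk: try the exact name first,
// then strip leading labels until a zone matches or labels run out.
std::pair<bool, int> findEnclosingZone(std::string name) {
    for (;;) {
        const std::pair<bool, int> zone = getZone(name);
        if (zone.first) {
            return zone;  // exact or partial match
        }
        const std::string::size_type dot = name.find('.');
        if (dot == std::string::npos) {
            return std::make_pair(false, 0);  // nothing enclosing found
        }
        name = name.substr(dot + 1);  // drop the leftmost label
    }
}
```

The real code walks labels with `name.split(i)` and distinguishes `SUCCESS` from `PARTIALMATCH`; the loop shape is the same.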
*/ virtual std::pair getZone(const isc::dns::Name& name) const; + virtual void searchForRecords(int zone_id, const std::string& name) const; + virtual bool getNextRecord(std::vector& columns) const; private: /// \brief Private database data SQLite3Parameters* dbparameters_; diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index c271a76dc8..be55a487dc 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -39,6 +39,8 @@ public: return (std::pair(false, 0)); } } + virtual void searchForRecords(int, const std::string&) const {}; + virtual bool getNextRecord(std::vector&) const { return false; }; }; class DatabaseClientTest : public ::testing::Test { From ff14da4f9b706a47f152491eae60586b75430c6e Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Wed, 3 Aug 2011 17:55:32 +0200 Subject: [PATCH 036/175] [trac1062] initial addition of find() code --- src/lib/datasrc/database.cc | 69 ++++++++++++++++++++++++++++++++++--- 1 file changed, 65 insertions(+), 4 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 2264f2c7ab..dfde94014e 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -16,6 +16,9 @@ #include #include +#include +#include +#include using isc::dns::Name; @@ -62,13 +65,71 @@ DatabaseClient::Finder::Finder(boost::shared_ptr { } ZoneFinder::FindResult -DatabaseClient::Finder::find(const isc::dns::Name&, - const isc::dns::RRType&, +DatabaseClient::Finder::find(const isc::dns::Name& name, + const isc::dns::RRType& type, isc::dns::RRsetList*, const FindOptions) const { - // TODO Implement - return (FindResult(SUCCESS, isc::dns::ConstRRsetPtr())); + bool records_found = false; + connection_.searchForRecords(zone_id_, name.toText()); + + isc::dns::RRsetPtr result_rrset; + + std::vector columns; + while (connection_.getNextRecord(columns)) { + if (!records_found) { + records_found = true; + } + + if 
(columns.size() != 4) { + isc_throw(DataSourceError, + "Datasource backend did not return 4 columns in getNextRecord()"); + } + + const isc::dns::RRType cur_type(columns[0]); + const isc::dns::RRTTL cur_ttl(columns[1]); + //cur_sigtype(columns[2]); + + if (cur_type == type) { + if (!result_rrset) { + result_rrset = isc::dns::RRsetPtr(new isc::dns::RRset(name, + getClass(), + cur_type, + cur_ttl)); + } else { + // We have existing data from earlier calls, do some checks + // and updates if necessary + if (cur_ttl < result_rrset->getTTL()) { + result_rrset->setTTL(cur_ttl); + } + } + + result_rrset->addRdata(isc::dns::rdata::createRdata(cur_type, + getClass(), + columns[3])); + } else if (cur_type == isc::dns::RRType::CNAME()) { + // There should be no other data, so cur_rrset should be empty, + // except for signatures + if (result_rrset && result_rrset->getRdataCount() > 0) { + isc_throw(DataSourceError, "CNAME found but it is not the only record for " + name.toText()); + } + result_rrset = isc::dns::RRsetPtr(new isc::dns::RRset(name, + getClass(), + cur_type, + cur_ttl)); + result_rrset->addRdata(isc::dns::rdata::createRdata(cur_type, + getClass(), + columns[3])); + } + } + + if (result_rrset) { + return (FindResult(SUCCESS, result_rrset)); + } else if (records_found) { + return (FindResult(NXRRSET, isc::dns::ConstRRsetPtr())); + } else { + return (FindResult(NXDOMAIN, isc::dns::ConstRRsetPtr())); + } } Name From 1b96c2563342098e05ac4b240c66e60222249cf4 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 4 Aug 2011 11:27:58 +0200 Subject: [PATCH 037/175] [trac1062] update tests --- src/lib/datasrc/sqlite3_connection.cc | 1 + .../tests/sqlite3_connection_unittest.cc | 36 +++++++++++++++++++ 2 files changed, 37 insertions(+) diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index cab92388ad..b9e595884e 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -359,6 +359,7 @@ 
SQLite3Connection::getNextRecord(std::vector& columns) const { return false; } sqlite3_reset(current_stmt); + sqlite3_clear_bindings(current_stmt); isc_throw(DataSourceError, "Unexpected failure in sqlite3_step"); // Compilers might not realize isc_throw always throws diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc index 1bdbe90206..1c80eb5990 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -100,4 +100,40 @@ TEST_F(SQLite3Conn, noClass) { EXPECT_FALSE(conn->getZone(Name("example.com")).first); } +namespace { + // Simple function to cound the number of records for + // any name + size_t countRecords(std::auto_ptr& conn, + int zone_id, const std::string& name) { + conn->searchForRecords(zone_id, name); + size_t count = 0; + std::vector columns; + while (conn->getNextRecord(columns)) { + EXPECT_EQ(4, columns.size()); + ++count; + } + return count; + } +} +} + +TEST_F(SQLite3Conn, getRecords) { + std::pair zone_info(conn->getZone(Name("example.com"))); + ASSERT_TRUE(zone_info.first); + + int zone_id = zone_info.second; + ASSERT_EQ(1, zone_id); + + // without search, getNext() should return false + std::vector columns; + EXPECT_FALSE(conn->getNextRecord(columns)); + EXPECT_EQ(0, columns.size()); + + EXPECT_EQ(4, countRecords(conn, zone_id, "foo.example.com.")); + EXPECT_EQ(15, countRecords(conn, zone_id, "example.com.")); + EXPECT_EQ(0, countRecords(conn, zone_id, "foo.bar.")); + EXPECT_EQ(0, countRecords(conn, zone_id, "")); + + EXPECT_FALSE(conn->getNextRecord(columns)); + EXPECT_EQ(0, columns.size()); } From 71b0ae9ddbcbf4093900ff879e2e1c82be89867f Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 4 Aug 2011 14:32:53 +0200 Subject: [PATCH 038/175] [trac1062] tests and some additional code --- src/lib/datasrc/database.cc | 8 +- src/lib/datasrc/database.h | 8 +- src/lib/datasrc/sqlite3_connection.cc | 4 +- 
src/lib/datasrc/sqlite3_connection.h | 4 +- src/lib/datasrc/tests/database_unittest.cc | 119 ++++++++++++++++++++- 5 files changed, 134 insertions(+), 9 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index dfde94014e..8b4a669539 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -74,6 +74,7 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, connection_.searchForRecords(zone_id_, name.toText()); isc::dns::RRsetPtr result_rrset; + ZoneFinder::Result result_status = NXRRSET; std::vector columns; while (connection_.getNextRecord(columns)) { @@ -96,6 +97,7 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, getClass(), cur_type, cur_ttl)); + result_status = SUCCESS; } else { // We have existing data from earlier calls, do some checks // and updates if necessary @@ -120,11 +122,15 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, result_rrset->addRdata(isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); + result_status = CNAME; + } else if (cur_type == isc::dns::RRType::RRSIG()) { + // if we have data already, check covered type + // if not, covered type must be CNAME or type requested } } if (result_rrset) { - return (FindResult(SUCCESS, result_rrset)); + return (FindResult(result_status, result_rrset)); } else if (records_found) { return (FindResult(NXRRSET, isc::dns::ConstRRsetPtr())); } else { diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index 2ed9cd5f51..a1f566a23f 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -78,7 +78,7 @@ public: * \param zone_id The zone to search in, as returned by getZone() * \param name The name of the records to find */ - virtual void searchForRecords(int zone_id, const std::string& name) const = 0; + virtual void searchForRecords(int zone_id, const std::string& name) = 0; /** * \brief Retrieves the next record from the search started with searchForRecords() @@ -93,7 +93,7 @@ public: * 
and rdata). If there was no data, the vector is untouched. * \return true if there was a next record, false if there was not */ - virtual bool getNextRecord(std::vector& columns) const = 0; + virtual bool getNextRecord(std::vector& columns) = 0; }; /** @@ -154,6 +154,10 @@ public: Finder(boost::shared_ptr connection, int zone_id); virtual isc::dns::Name getOrigin() const; virtual isc::dns::RRClass getClass() const; + + /** + * \brief Find an RRset in the datasource + */ virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index b9e595884e..fa5f8310d2 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -320,7 +320,7 @@ SQLite3Connection::getZone(const isc::dns::Name& name) const { } void -SQLite3Connection::searchForRecords(int zone_id, const std::string& name) const { +SQLite3Connection::searchForRecords(int zone_id, const std::string& name) { sqlite3_reset(dbparameters_->q_any_); sqlite3_clear_bindings(dbparameters_->q_any_); sqlite3_bind_int(dbparameters_->q_any_, 1, zone_id); @@ -341,7 +341,7 @@ convertToPlainChar(const unsigned char* ucp) { } bool -SQLite3Connection::getNextRecord(std::vector& columns) const { +SQLite3Connection::getNextRecord(std::vector& columns) { sqlite3_stmt* current_stmt = dbparameters_->q_any_; const int rc = sqlite3_step(current_stmt); diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index bb1a30f867..ca41a0621c 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_connection.h @@ -88,8 +88,8 @@ public: * element and the zone id in the second if it was. 
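The `find()` implementation built up across these patches reduces a row scan to one of four results — SUCCESS, CNAME, NXRRSET or NXDOMAIN — and takes the minimum TTL when several records of the requested type disagree. A stripped-down sketch of that classification, with no RRset construction or RRSIG handling and invented record data:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Result codes as used by ZoneFinder::find() in the patches above.
enum Result { SUCCESS, CNAME, NXRRSET, NXDOMAIN };

struct Record {
    std::string rdtype;
    unsigned ttl;
};

// Classify a lookup the way find() does: no rows at the name means
// NXDOMAIN, rows but none of the requested type means NXRRSET, a CNAME
// short-circuits, and on SUCCESS the TTL is the minimum of the matches.
Result classify(const std::vector<Record>& records,
                const std::string& qtype, unsigned* ttl_out) {
    bool found_name = false;
    bool found_type = false;
    unsigned min_ttl = 0;
    for (size_t i = 0; i < records.size(); ++i) {
        found_name = true;
        if (records[i].rdtype == "CNAME" && qtype != "CNAME") {
            return CNAME;
        }
        if (records[i].rdtype == qtype) {
            min_ttl = found_type ? std::min(min_ttl, records[i].ttl)
                                 : records[i].ttl;
            found_type = true;
        }
    }
    if (found_type) {
        if (ttl_out) {
            *ttl_out = min_ttl;
        }
        return SUCCESS;
    }
    return found_name ? NXRRSET : NXDOMAIN;
}
```

The real method additionally rejects malformed rows (wrong column count, a CNAME mixed with other data) by throwing `DataSourceError`, as the unit tests at the end of the series exercise.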
*/ virtual std::pair getZone(const isc::dns::Name& name) const; - virtual void searchForRecords(int zone_id, const std::string& name) const; - virtual bool getNextRecord(std::vector& columns) const; + virtual void searchForRecords(int zone_id, const std::string& name); + virtual bool getNextRecord(std::vector& columns); private: /// \brief Private database data SQLite3Parameters* dbparameters_; diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index be55a487dc..f9b8a0a41b 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -18,6 +18,10 @@ #include #include +#include +#include + +#include using namespace isc::datasrc; using namespace std; @@ -32,6 +36,8 @@ namespace { */ class MockConnection : public DatabaseConnection { public: + MockConnection() { fillData(); } + virtual std::pair getZone(const Name& name) const { if (name == Name("example.org")) { return (std::pair(true, 42)); @@ -39,8 +45,74 @@ public: return (std::pair(false, 0)); } } - virtual void searchForRecords(int, const std::string&) const {}; - virtual bool getNextRecord(std::vector&) const { return false; }; + + virtual void searchForRecords(int zone_id, const std::string& name) { + // we're not aiming for efficiency in this test, simply + // copy the relevant vector from records + cur_record = 0; + + if (zone_id == 42) { + if (records.count(name) > 0) { + cur_name = records.find(name)->second; + } else { + cur_name.clear(); + } + } else { + cur_name.clear(); + } + }; + + virtual bool getNextRecord(std::vector& columns) { + if (cur_record < cur_name.size()) { + columns = cur_name[cur_record++]; + return true; + } else { + return false; + } + }; + +private: + std::map > > records; + // used as internal index for getNextRecord() + size_t cur_record; + // used as temporary storage after searchForRecord() and during + // getNextRecord() calls, as well as during the building of the + // fake data + 
std::vector< std::vector<std::string> > cur_name; + + void addRecord(const std::string& name, + const std::string& type, + const std::string& sigtype, + const std::string& rdata) { + std::vector<std::string> columns; + columns.push_back(name); + columns.push_back(type); + columns.push_back(sigtype); + columns.push_back(rdata); + cur_name.push_back(columns); + } + + void addCurName(const std::string& name) { + records[name] = cur_name; + cur_name.clear(); + } + + void fillData() { + addRecord("A", "3600", "", "192.0.2.1"); + addRecord("AAAA", "3600", "", "2001:db8::1"); + addRecord("AAAA", "3600", "", "2001:db8::2"); + addCurName("www.example.org."); + addRecord("CNAME", "3600", "", "www.example.org."); + addCurName("cname.example.org."); + + // also add some intentionally bad data + cur_name.push_back(std::vector<std::string>()); + addCurName("emptyvector.example.org."); + addRecord("A", "3600", "", "192.0.2.1"); + addRecord("CNAME", "3600", "", "www.example.org."); + addCurName("badcname.example.org."); + + } }; class DatabaseClientTest : public ::testing::Test { @@ -98,4 +170,47 @@ TEST_F(DatabaseClientTest, noConnException) { isc::InvalidParameter); } +TEST_F(DatabaseClientTest, find) { + DataSourceClient::FindResult zone(client_->findZone(Name("example.org"))); + ASSERT_EQ(result::SUCCESS, zone.code); + shared_ptr<DatabaseClient::Finder> finder( + dynamic_pointer_cast<DatabaseClient::Finder>(zone.zone_finder)); + EXPECT_EQ(42, finder->zone_id()); + isc::dns::Name name("www.example.org."); + + ZoneFinder::FindResult result1 = finder->find(name, isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT); + ASSERT_EQ(ZoneFinder::SUCCESS, result1.code); + EXPECT_EQ(1, result1.rrset->getRdataCount()); + EXPECT_EQ(isc::dns::RRType::A(), result1.rrset->getType()); + + ZoneFinder::FindResult result2 = finder->find(name, isc::dns::RRType::AAAA(), + NULL, ZoneFinder::FIND_DEFAULT); + ASSERT_EQ(ZoneFinder::SUCCESS, result2.code); + EXPECT_EQ(2, result2.rrset->getRdataCount()); + EXPECT_EQ(isc::dns::RRType::AAAA(), result2.rrset->getType()); + + 
ZoneFinder::FindResult result3 = finder->find(name, isc::dns::RRType::TXT(), + NULL, ZoneFinder::FIND_DEFAULT); + ASSERT_EQ(ZoneFinder::NXRRSET, result3.code); + EXPECT_EQ(isc::dns::ConstRRsetPtr(), result3.rrset); + + ZoneFinder::FindResult result4 = finder->find(isc::dns::Name("cname.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT); + ASSERT_EQ(ZoneFinder::CNAME, result4.code); + EXPECT_EQ(1, result4.rrset->getRdataCount()); + EXPECT_EQ(isc::dns::RRType::CNAME(), result4.rrset->getType()); + + EXPECT_THROW(finder->find(isc::dns::Name("emptyvector.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("badcname.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + +} + } From 82667b0cdd6592053f5b2f4cfa1cbd0ec92db0b2 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 4 Aug 2011 14:49:49 +0200 Subject: [PATCH 039/175] [trac1062] minor cleanup --- src/lib/datasrc/database.cc | 5 +---- src/lib/datasrc/tests/database_unittest.cc | 6 ++++++ 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 8b4a669539..776b7faead 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -74,7 +74,7 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, connection_.searchForRecords(zone_id_, name.toText()); isc::dns::RRsetPtr result_rrset; - ZoneFinder::Result result_status = NXRRSET; + ZoneFinder::Result result_status = SUCCESS; std::vector<std::string> columns; while (connection_.getNextRecord(columns)) { @@ -123,9 +123,6 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, getClass(), columns[3])); result_status = CNAME; - } else if (cur_type == isc::dns::RRType::RRSIG()) { - // if we have data already, check covered type - // if not, covered type must be CNAME or type requested - } } diff --git a/src/lib/datasrc/tests/database_unittest.cc 
b/src/lib/datasrc/tests/database_unittest.cc index f9b8a0a41b..695805a1af 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -202,6 +202,12 @@ TEST_F(DatabaseClientTest, find) { EXPECT_EQ(1, result4.rrset->getRdataCount()); EXPECT_EQ(isc::dns::RRType::CNAME(), result4.rrset->getType()); + ZoneFinder::FindResult result5 = finder->find(isc::dns::Name("doesnotexist.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT); + ASSERT_EQ(ZoneFinder::NXDOMAIN, result5.code); + EXPECT_EQ(isc::dns::ConstRRsetPtr(), result5.rrset); + EXPECT_THROW(finder->find(isc::dns::Name("emptyvector.example.org."), isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), From bc281e8b48c92102d3c64318e07598c8e96e493c Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 4 Aug 2011 21:27:38 +0200 Subject: [PATCH 040/175] [trac1062] initial support for RRSIGS for matches and CNAME --- src/lib/datasrc/database.cc | 22 ++++++++++ src/lib/datasrc/database.h | 5 +++ src/lib/datasrc/tests/database_unittest.cc | 49 ++++++++++++++++++++++ src/lib/dns/rdata/generic/rrsig_46.cc | 5 +++ src/lib/dns/rdata/generic/rrsig_46.h | 3 ++ 5 files changed, 84 insertions(+) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 776b7faead..fb8b452467 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -18,6 +18,8 @@ #include #include #include +#include + #include using isc::dns::Name; @@ -123,6 +125,26 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, getClass(), columns[3])); result_status = CNAME; + } else if (cur_type == isc::dns::RRType::RRSIG()) { + isc::dns::rdata::RdataPtr cur_rrsig( + isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); + const isc::dns::RRType& type_covered = + static_cast<const isc::dns::rdata::generic::RRSIG*>( + cur_rrsig.get())->typeCovered(); + // Ignore the RRSIG data we got if it does not cover the type + // that was requested or CNAME + // see if we have RRset data yet, and 
whether it has an RRsig yet + if (type_covered == type || type_covered == isc::dns::RRType::CNAME()) { + if (!result_rrset) { + // no data at all yet, assume the RRset data is coming, and + // that the type covered will match + result_rrset = isc::dns::RRsetPtr(new isc::dns::RRset(name, + getClass(), + type_covered, + cur_ttl)); + } + result_rrset->addRRsig(cur_rrsig); + } } } diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index a1f566a23f..047db3d090 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -157,6 +157,11 @@ public: /** * \brief Find an RRset in the datasource + * + * target is unused at this point, it was used in the original + * API to store the results for ANY queries, and we may reuse it + * for that, but we might choose a different approach. + * */ virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index 695805a1af..3ad7c6cc1d 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -98,6 +98,7 @@ private: } void fillData() { + // some plain data addRecord("A", "3600", "", "192.0.2.1"); addRecord("AAAA", "3600", "", "2001:db8::1"); addRecord("AAAA", "3600", "", "2001:db8::2"); @@ -105,6 +106,27 @@ private: addRecord("CNAME", "3600", "", "www.example.org."); addCurName("cname.example.org."); + // some DNSSEC-'signed' data + addRecord("A", "3600", "", "192.0.2.1"); + addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addRecord("AAAA", "3600", "", "2001:db8::1"); + addRecord("AAAA", "3600", "", "2001:db8::2"); + addRecord("RRSIG", "3600", "", "AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + addCurName("signed1.example.org."); + + // let's pretend we have a database that is not careful + // about the order in which it returns data + addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addRecord("AAAA", "3600", "", "2001:db8::2"); + addRecord("A", "3600", "", "192.0.2.1"); + addRecord("RRSIG", "3600", "", "AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addRecord("AAAA", "3600", "", "2001:db8::1"); + addCurName("signed2.example.org."); + + addRecord("CNAME", "3600", "", "www.example.org."); + addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addCurName("signedcname.example.org."); + // also add some intentionally bad data cur_name.push_back(std::vector()); addCurName("emptyvector.example.org."); @@ -183,12 +205,14 @@ TEST_F(DatabaseClientTest, find) { ASSERT_EQ(ZoneFinder::SUCCESS, result1.code); EXPECT_EQ(1, result1.rrset->getRdataCount()); EXPECT_EQ(isc::dns::RRType::A(), result1.rrset->getType()); + EXPECT_EQ(isc::dns::RRsetPtr(), result1.rrset->getRRsig()); ZoneFinder::FindResult result2 = finder->find(name, isc::dns::RRType::AAAA(), NULL, ZoneFinder::FIND_DEFAULT); ASSERT_EQ(ZoneFinder::SUCCESS, result2.code); EXPECT_EQ(2, result2.rrset->getRdataCount()); EXPECT_EQ(isc::dns::RRType::AAAA(), result2.rrset->getType()); + EXPECT_EQ(isc::dns::RRsetPtr(), result2.rrset->getRRsig()); ZoneFinder::FindResult result3 = finder->find(name, isc::dns::RRType::TXT(), NULL, ZoneFinder::FIND_DEFAULT); @@ -201,6 +225,7 @@ TEST_F(DatabaseClientTest, find) { ASSERT_EQ(ZoneFinder::CNAME, result4.code); EXPECT_EQ(1, result4.rrset->getRdataCount()); EXPECT_EQ(isc::dns::RRType::CNAME(), result4.rrset->getType()); + EXPECT_EQ(isc::dns::RRsetPtr(), result4.rrset->getRRsig()); ZoneFinder::FindResult result5 = finder->find(isc::dns::Name("doesnotexist.example.org."), isc::dns::RRType::A(), @@ -208,6 
+233,30 @@ TEST_F(DatabaseClientTest, find) { ASSERT_EQ(ZoneFinder::NXDOMAIN, result5.code); EXPECT_EQ(isc::dns::ConstRRsetPtr(), result5.rrset); + ZoneFinder::FindResult result6 = finder->find(isc::dns::Name("signed1.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT); + ASSERT_EQ(ZoneFinder::SUCCESS, result6.code); + EXPECT_EQ(1, result6.rrset->getRdataCount()); + EXPECT_EQ(isc::dns::RRType::A(), result6.rrset->getType()); + EXPECT_NE(isc::dns::RRsetPtr(), result6.rrset->getRRsig()); + + ZoneFinder::FindResult result7 = finder->find(isc::dns::Name("signed1.example.org."), + isc::dns::RRType::AAAA(), + NULL, ZoneFinder::FIND_DEFAULT); + ASSERT_EQ(ZoneFinder::SUCCESS, result7.code); + EXPECT_EQ(2, result7.rrset->getRdataCount()); + EXPECT_EQ(isc::dns::RRType::AAAA(), result7.rrset->getType()); + EXPECT_NE(isc::dns::RRsetPtr(), result7.rrset->getRRsig()); + + ZoneFinder::FindResult result8 = finder->find(isc::dns::Name("signedcname.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT); + ASSERT_EQ(ZoneFinder::SUCCESS, result8.code); + EXPECT_EQ(1, result8.rrset->getRdataCount()); + EXPECT_EQ(isc::dns::RRType::CNAME(), result8.rrset->getType()); + EXPECT_NE(isc::dns::RRsetPtr(), result8.rrset->getRRsig()); + EXPECT_THROW(finder->find(isc::dns::Name("emptyvector.example.org."), isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), diff --git a/src/lib/dns/rdata/generic/rrsig_46.cc b/src/lib/dns/rdata/generic/rrsig_46.cc index 0c82406895..7d8c000119 100644 --- a/src/lib/dns/rdata/generic/rrsig_46.cc +++ b/src/lib/dns/rdata/generic/rrsig_46.cc @@ -243,5 +243,10 @@ RRSIG::compare(const Rdata& other) const { } } +const RRType& +RRSIG::typeCovered() { + return impl_->covered_; +} + // END_RDATA_NAMESPACE // END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/rrsig_46.h b/src/lib/dns/rdata/generic/rrsig_46.h index 19acc40c81..b8e630631e 100644 --- a/src/lib/dns/rdata/generic/rrsig_46.h +++ 
b/src/lib/dns/rdata/generic/rrsig_46.h @@ -38,6 +38,9 @@ public: // END_COMMON_MEMBERS RRSIG& operator=(const RRSIG& source); ~RRSIG(); + + // specialized methods + const RRType& typeCovered(); private: RRSIGImpl* impl_; }; From dba1e2c7884b5bc68f945fd5d2dd500f9a258c6b Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 4 Aug 2011 21:57:16 +0200 Subject: [PATCH 041/175] [trac1062] refactor/cleanup of tests --- src/lib/datasrc/database.cc | 22 ++-- src/lib/datasrc/tests/database_unittest.cc | 129 ++++++++++++--------- 2 files changed, 83 insertions(+), 68 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index fb8b452467..29a871c231 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -74,7 +74,6 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, { bool records_found = false; connection_.searchForRecords(zone_id_, name.toText()); - isc::dns::RRsetPtr result_rrset; ZoneFinder::Result result_status = SUCCESS; @@ -114,13 +113,16 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, } else if (cur_type == isc::dns::RRType::CNAME()) { // There should be no other data, so cur_rrset should be empty, // except for signatures - if (result_rrset && result_rrset->getRdataCount() > 0) { - isc_throw(DataSourceError, "CNAME found but it is not the only record for " + name.toText()); + if (result_rrset) { + if (result_rrset->getRdataCount() > 0) { + isc_throw(DataSourceError, "CNAME found but it is not the only record for " + name.toText()); + } + } else { + result_rrset = isc::dns::RRsetPtr(new isc::dns::RRset(name, + getClass(), + cur_type, + cur_ttl)); } - result_rrset = isc::dns::RRsetPtr(new isc::dns::RRset(name, - getClass(), - cur_type, - cur_ttl)); result_rrset->addRdata(isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); @@ -139,9 +141,9 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, // no data at all yet, assume the RRset data is coming, and // that the type covered 
will match result_rrset = isc::dns::RRsetPtr(new isc::dns::RRset(name, - getClass(), - type_covered, - cur_ttl)); + getClass(), + type_covered, + cur_ttl)); } result_rrset->addRRsig(cur_rrsig); } diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index 3ad7c6cc1d..eef8103b79 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -113,6 +113,9 @@ private: addRecord("AAAA", "3600", "", "2001:db8::2"); addRecord("RRSIG", "3600", "", "AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); addCurName("signed1.example.org."); + addRecord("CNAME", "3600", "", "www.example.org."); + addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addCurName("signedcname1.example.org."); // let's pretend we have a database that is not careful // about the order in which it returns data @@ -122,10 +125,9 @@ private: addRecord("RRSIG", "3600", "", "AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); addRecord("AAAA", "3600", "", "2001:db8::1"); addCurName("signed2.example.org."); - - addRecord("CNAME", "3600", "", "www.example.org."); addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); - addCurName("signedcname.example.org."); + addRecord("CNAME", "3600", "", "www.example.org."); + addCurName("signedcname2.example.org."); // also add some intentionally bad data cur_name.push_back(std::vector<std::string>()); @@ -192,6 +194,33 @@ TEST_F(DatabaseClientTest, noConnException) { isc::InvalidParameter); } +namespace { +void +doFindTest(shared_ptr<DatabaseClient::Finder> finder, + const isc::dns::Name& name, + const isc::dns::RRType& type, + const isc::dns::RRType& expected_type, + ZoneFinder::Result expected_result, + unsigned int expected_rdata_count, + unsigned int expected_signature_count) +{ + ZoneFinder::FindResult result = finder->find(name, type, + NULL, ZoneFinder::FIND_DEFAULT); + ASSERT_EQ(expected_result, result.code) << name.toText() << " " << type.toText(); + if (expected_rdata_count > 0) { + EXPECT_EQ(expected_rdata_count, result.rrset->getRdataCount()); + EXPECT_EQ(expected_type, result.rrset->getType()); + if (expected_signature_count > 0) { + EXPECT_EQ(expected_signature_count, result.rrset->getRRsig()->getRdataCount()); + } else { + EXPECT_EQ(isc::dns::RRsetPtr(), result.rrset->getRRsig()); + } + } else { + EXPECT_EQ(isc::dns::RRsetPtr(), result.rrset); + } +} +} // end anonymous namespace + TEST_F(DatabaseClientTest, find) { DataSourceClient::FindResult zone(client_->findZone(Name("example.org"))); ASSERT_EQ(result::SUCCESS, zone.code); @@ -200,62 +229,46 @@ TEST_F(DatabaseClientTest, find) { EXPECT_EQ(42, finder->zone_id()); isc::dns::Name name("www.example.org."); - ZoneFinder::FindResult result1 = finder->find(name, isc::dns::RRType::A(), - NULL, ZoneFinder::FIND_DEFAULT); - ASSERT_EQ(ZoneFinder::SUCCESS, result1.code); - EXPECT_EQ(1, result1.rrset->getRdataCount()); - EXPECT_EQ(isc::dns::RRType::A(), result1.rrset->getType()); - EXPECT_EQ(isc::dns::RRsetPtr(), result1.rrset->getRRsig()); + doFindTest(finder, isc::dns::Name("www.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + ZoneFinder::SUCCESS, 1, 0); + doFindTest(finder, 
isc::dns::Name("www.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + ZoneFinder::SUCCESS, 2, 0); + doFindTest(finder, isc::dns::Name("www.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), + ZoneFinder::NXRRSET, 0, 0); + doFindTest(finder, isc::dns::Name("cname.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::CNAME(), + ZoneFinder::CNAME, 1, 0); + doFindTest(finder, isc::dns::Name("doesnotexist.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + ZoneFinder::NXDOMAIN, 0, 0); + doFindTest(finder, isc::dns::Name("signed1.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + ZoneFinder::SUCCESS, 1, 1); + doFindTest(finder, isc::dns::Name("signed1.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + ZoneFinder::SUCCESS, 2, 1); + doFindTest(finder, isc::dns::Name("signed1.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), + ZoneFinder::NXRRSET, 0, 0); + doFindTest(finder, isc::dns::Name("signedcname1.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::CNAME(), + ZoneFinder::CNAME, 1, 1); - ZoneFinder::FindResult result2 = finder->find(name, isc::dns::RRType::AAAA(), - NULL, ZoneFinder::FIND_DEFAULT); - ASSERT_EQ(ZoneFinder::SUCCESS, result2.code); - EXPECT_EQ(2, result2.rrset->getRdataCount()); - EXPECT_EQ(isc::dns::RRType::AAAA(), result2.rrset->getType()); - EXPECT_EQ(isc::dns::RRsetPtr(), result2.rrset->getRRsig()); - - ZoneFinder::FindResult result3 = finder->find(name, isc::dns::RRType::TXT(), - NULL, ZoneFinder::FIND_DEFAULT); - ASSERT_EQ(ZoneFinder::NXRRSET, result3.code); - EXPECT_EQ(isc::dns::ConstRRsetPtr(), result3.rrset); - - ZoneFinder::FindResult result4 = finder->find(isc::dns::Name("cname.example.org."), - isc::dns::RRType::A(), - NULL, ZoneFinder::FIND_DEFAULT); - ASSERT_EQ(ZoneFinder::CNAME, result4.code); - EXPECT_EQ(1, result4.rrset->getRdataCount()); - EXPECT_EQ(isc::dns::RRType::CNAME(), result4.rrset->getType()); - 
EXPECT_EQ(isc::dns::RRsetPtr(), result4.rrset->getRRsig()); - - ZoneFinder::FindResult result5 = finder->find(isc::dns::Name("doesnotexist.example.org."), - isc::dns::RRType::A(), - NULL, ZoneFinder::FIND_DEFAULT); - ASSERT_EQ(ZoneFinder::NXDOMAIN, result5.code); - EXPECT_EQ(isc::dns::ConstRRsetPtr(), result5.rrset); - - ZoneFinder::FindResult result6 = finder->find(isc::dns::Name("signed1.example.org."), - isc::dns::RRType::A(), - NULL, ZoneFinder::FIND_DEFAULT); - ASSERT_EQ(ZoneFinder::SUCCESS, result6.code); - EXPECT_EQ(1, result6.rrset->getRdataCount()); - EXPECT_EQ(isc::dns::RRType::A(), result6.rrset->getType()); - EXPECT_NE(isc::dns::RRsetPtr(), result6.rrset->getRRsig()); - - ZoneFinder::FindResult result7 = finder->find(isc::dns::Name("signed1.example.org."), - isc::dns::RRType::AAAA(), - NULL, ZoneFinder::FIND_DEFAULT); - ASSERT_EQ(ZoneFinder::SUCCESS, result7.code); - EXPECT_EQ(2, result7.rrset->getRdataCount()); - EXPECT_EQ(isc::dns::RRType::AAAA(), result7.rrset->getType()); - EXPECT_NE(isc::dns::RRsetPtr(), result7.rrset->getRRsig()); - - ZoneFinder::FindResult result8 = finder->find(isc::dns::Name("signedcname.example.org."), - isc::dns::RRType::A(), - NULL, ZoneFinder::FIND_DEFAULT); - ASSERT_EQ(ZoneFinder::SUCCESS, result8.code); - EXPECT_EQ(1, result8.rrset->getRdataCount()); - EXPECT_EQ(isc::dns::RRType::CNAME(), result8.rrset->getType()); - EXPECT_NE(isc::dns::RRsetPtr(), result8.rrset->getRRsig()); + doFindTest(finder, isc::dns::Name("signed2.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + ZoneFinder::SUCCESS, 1, 1); + doFindTest(finder, isc::dns::Name("signed2.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + ZoneFinder::SUCCESS, 2, 1); + doFindTest(finder, isc::dns::Name("signed2.example.org."), + isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), + ZoneFinder::NXRRSET, 0, 0); + doFindTest(finder, isc::dns::Name("signedcname2.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::CNAME(), + 
ZoneFinder::CNAME, 1, 1); EXPECT_THROW(finder->find(isc::dns::Name("emptyvector.example.org."), isc::dns::RRType::A(), From ce0544bd0852415891cb31e0c1b7d0ba0b3d19f3 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 4 Aug 2011 22:12:04 +0200 Subject: [PATCH 042/175] [trac1062] add some comments in the tests --- src/lib/datasrc/tests/database_unittest.cc | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index eef8103b79..7e80b4304c 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -80,6 +80,9 @@ private: // fake data std::vector< std::vector<std::string> > cur_name; + // Adds one record to the current name in the database + // The actual data will not be added to 'records' until + // addCurName() is called void addRecord(const std::string& name, const std::string& type, const std::string& sigtype, const std::string& rdata) { @@ -92,17 +95,32 @@ private: cur_name.push_back(columns); } + // Adds all records we just built with calls to addRecord + // to the actual fake database. This will clear cur_name, + // so we can immediately start adding new records. void addCurName(const std::string& name) { + ASSERT_EQ(0, records.count(name)); records[name] = cur_name; cur_name.clear(); } + // Fills the database with zone data. + // This method constructs a number of resource records (with addRecord), + // which will all be added for one domain name to the fake database + // (with addCurName). So for instance the first set of calls creates + // data for the name 'www.example.org', which will consist of one A RRset + // of one record, and one AAAA RRset of two records. + // The order in which they are added is the order in which getNextRecord() + // will return them (so we can test whether find() etc. support data that + // might not come in 'normal' order) + // It shall immediately fail if you try to add the same name twice. 
void fillData() { // some plain data addRecord("A", "3600", "", "192.0.2.1"); addRecord("AAAA", "3600", "", "2001:db8::1"); addRecord("AAAA", "3600", "", "2001:db8::2"); addCurName("www.example.org."); + addRecord("CNAME", "3600", "", "www.example.org."); addCurName("cname.example.org."); @@ -135,7 +153,6 @@ private: addRecord("A", "3600", "", "192.0.2.1"); addRecord("CNAME", "3600", "", "www.example.org."); addCurName("badcname.example.org."); - } }; From ac9fd0a240cbfa8c448cb01bb69ac92313eb7e56 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 4 Aug 2011 22:14:25 +0200 Subject: [PATCH 043/175] [trac1062] add comment --- src/lib/datasrc/database.cc | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 29a871c231..28cf6d5acf 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -72,11 +72,14 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, isc::dns::RRsetList*, const FindOptions) const { + // This variable is used to determine the difference between + // NXDOMAIN and NXRRSET bool records_found = false; - connection_.searchForRecords(zone_id_, name.toText()); isc::dns::RRsetPtr result_rrset; ZoneFinder::Result result_status = SUCCESS; + connection_.searchForRecords(zone_id_, name.toText()); + std::vector columns; while (connection_.getNextRecord(columns)) { if (!records_found) { From 15428e5a9c1bb01f5e7a04979c17ec5f1de9d1db Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 4 Aug 2011 22:28:08 +0200 Subject: [PATCH 044/175] [trac1062] update for change in 1061 --- src/lib/datasrc/database.cc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 28cf6d5acf..da823a2600 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -78,10 +78,10 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, isc::dns::RRsetPtr result_rrset; 
ZoneFinder::Result result_status = SUCCESS; - connection_.searchForRecords(zone_id_, name.toText()); + connection_->searchForRecords(zone_id_, name.toText()); std::vector<std::string> columns; - while (connection_.getNextRecord(columns)) { + while (connection_->getNextRecord(columns)) { if (!records_found) { records_found = true; } From 7e1e150e056d0dcf5a58b2a8036f47c2e5dac820 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Fri, 5 Aug 2011 15:17:27 +0200 Subject: [PATCH 045/175] [trac1062] use shared_ptr instead of auto_ptr --- src/lib/datasrc/tests/sqlite3_connection_unittest.cc | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc index 1c80eb5990..1279910114 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -11,6 +11,7 @@ // LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. 
+#include #include @@ -18,6 +19,7 @@ #include #include +#include using namespace isc::datasrc; using isc::data::ConstElementPtr; @@ -74,7 +76,7 @@ public: conn.reset(new SQLite3Connection(filename, rrclass)); } // The tested connection - std::auto_ptr<SQLite3Connection> conn; + boost::shared_ptr<SQLite3Connection> conn; }; // This zone exists in the data, so it should be found @@ -103,7 +105,7 @@ TEST_F(SQLite3Conn, noClass) { namespace { // Simple function to count the number of records for // any name - size_t countRecords(std::auto_ptr<SQLite3Connection>& conn, + size_t countRecords(boost::shared_ptr<SQLite3Connection>& conn, int zone_id, const std::string& name) { conn->searchForRecords(zone_id, name); size_t count = 0; From 5f13949918d125f851bd2ba8ab092c301835d3ac Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Fri, 5 Aug 2011 18:02:33 +0200 Subject: [PATCH 046/175] [trac1062] refactor and fixes - check for bad data in the db - work with data in 'bad' order (sigs before data for example) --- src/lib/datasrc/database.cc | 185 ++++++++++++++------- src/lib/datasrc/tests/database_unittest.cc | 50 +++++- 2 files changed, 172 insertions(+), 63 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index da823a2600..69a46108e8 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -22,6 +22,8 @@ #include +#include + using isc::dns::Name; namespace isc { @@ -66,6 +68,88 @@ DatabaseClient::Finder::Finder(boost::shared_ptr zone_id_(zone_id) { } +namespace { + // Adds the given Rdata to the given RRset + // If the rrset does not exist, one is created + // adds the given rdata to the set + void addOrCreate(isc::dns::RRsetPtr& rrset, + const isc::dns::Name& name, + const isc::dns::RRClass& cls, + const isc::dns::RRType& type, + const isc::dns::RRTTL& ttl, + const std::string& rdata_str) + { + if (!rrset) { + rrset.reset(new isc::dns::RRset(name, cls, type, ttl)); + } else { + if (ttl < rrset->getTTL()) { + rrset->setTTL(ttl); + } + // make sure the type is correct + if (type != 
rrset->getType()) { + isc_throw(DataSourceError, + "attempt to add multiple types to RRset in find()"); + } + } + if (rdata_str != "") { + try { + rrset->addRdata(isc::dns::rdata::createRdata(type, cls, rdata_str)); + } catch (const isc::dns::rdata::InvalidRdataText& ivrt) { + // at this point, rrset may have been initialised for no reason, + // and won't be used. But the caller would drop the shared_ptr + // on such an error anyway, so we don't care. + isc_throw(DataSourceError, + "bad rdata in database for " << name.toText() << " " + << type.toText() << " " << ivrt.what()); + } + } + } + + // This class keeps a short-lived store of RRSIG records encountered + // during a call to find(). If the backend happens to return signatures + // before the actual data, we might not know which signatures we will need. + // So if they may be relevant, we store them in this class. + // + // (If this class seems useful in other places, we might want to move + // it to util. That would also provide an opportunity to add unit tests) + class RRsigStore { + public: + // add the given signature Rdata to the store + // The signature MUST be of the RRSIG type (the caller + // must make sure of this) + void addSig(isc::dns::rdata::RdataPtr sig_rdata) { + const isc::dns::RRType& type_covered = + static_cast<const isc::dns::rdata::generic::RRSIG*>( + sig_rdata.get())->typeCovered(); + if (!haveSigsFor(type_covered)) { + sigs[type_covered] = std::vector<isc::dns::rdata::RdataPtr>(); + } + sigs.find(type_covered)->second.push_back(sig_rdata); + } + + // Returns true if this store contains signatures covering the + // given type + bool haveSigsFor(isc::dns::RRType type) { + return (sigs.count(type) > 0); + } + + // If the store contains signatures for the type of the given + // rrset, they are appended to it. 
+ void appendSignatures(isc::dns::RRsetPtr& rrset) { + if (haveSigsFor(rrset->getType())) { + BOOST_FOREACH(isc::dns::rdata::RdataPtr sig, + sigs.find(rrset->getType())->second) { + rrset->addRRsig(sig); + } + } + } + + private: + std::map<isc::dns::RRType, std::vector<isc::dns::rdata::RdataPtr> > sigs; + }; +} + + ZoneFinder::FindResult DatabaseClient::Finder::find(const isc::dns::Name& name, const isc::dns::RRType& type, @@ -77,6 +161,7 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, bool records_found = false; isc::dns::RRsetPtr result_rrset; ZoneFinder::Result result_status = SUCCESS; + RRsigStore sig_store; connection_->searchForRecords(zone_id_, name.toText()); @@ -91,75 +176,53 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, "Datasource backend did not return 4 columns in getNextRecord()"); } - const isc::dns::RRType cur_type(columns[0]); - const isc::dns::RRTTL cur_ttl(columns[1]); - //cur_sigtype(columns[2]); + try { + const isc::dns::RRType cur_type(columns[0]); + const isc::dns::RRTTL cur_ttl(columns[1]); + //cur_sigtype(columns[2]); - if (cur_type == type) { - if (!result_rrset) { - result_rrset = isc::dns::RRsetPtr(new isc::dns::RRset(name, - getClass(), - cur_type, - cur_ttl)); - result_status = SUCCESS; - } else { - // We have existing data from earlier calls, do some checks - // and updates if necessary - if (cur_ttl < result_rrset->getTTL()) { - result_rrset->setTTL(cur_ttl); + if (cur_type == type) { + addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[3]); + //isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); + } else if (cur_type == isc::dns::RRType::CNAME()) { + // There should be no other data, so cur_rrset should be empty, + // except for signatures, of course + if (result_rrset) { + if (result_rrset->getRdataCount() > 0) { + isc_throw(DataSourceError, "CNAME found but it is not the only record for " + name.toText()); + } } + addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[3]); + 
//isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); + result_status = CNAME; + } else if (cur_type == isc::dns::RRType::RRSIG()) { + // If we get signatures before we get the actual data, we can't know + // which ones to keep and which to drop... + // So we keep a separate store of any signature that may be relevant + // and add them to the final RRset when we are done. + isc::dns::rdata::RdataPtr cur_rrsig( + isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); + sig_store.addSig(cur_rrsig); } - - result_rrset->addRdata(isc::dns::rdata::createRdata(cur_type, - getClass(), - columns[3])); - } else if (cur_type == isc::dns::RRType::CNAME()) { - // There should be no other data, so cur_rrset should be empty, - // except for signatures - if (result_rrset) { - if (result_rrset->getRdataCount() > 0) { - isc_throw(DataSourceError, "CNAME found but it is not the only record for " + name.toText()); - } - } else { - result_rrset = isc::dns::RRsetPtr(new isc::dns::RRset(name, - getClass(), - cur_type, - cur_ttl)); - } - result_rrset->addRdata(isc::dns::rdata::createRdata(cur_type, - getClass(), - columns[3])); - result_status = CNAME; - } else if (cur_type == isc::dns::RRType::RRSIG()) { - isc::dns::rdata::RdataPtr cur_rrsig( - isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); - const isc::dns::RRType& type_covered = - static_cast( - cur_rrsig.get())->typeCovered(); - // Ignore the RRSIG data we got if it does not cover the type - // that was requested or CNAME - // see if we have RRset data yet, and whether it has an RRsig yet - if (type_covered == type || type_covered == isc::dns::RRType::CNAME()) { - if (!result_rrset) { - // no data at all yet, assume the RRset data is coming, and - // that the type covered will match - result_rrset = isc::dns::RRsetPtr(new isc::dns::RRset(name, - getClass(), - type_covered, - cur_ttl)); - } - result_rrset->addRRsig(cur_rrsig); - } + } catch (const isc::dns::InvalidRRType& irt) { + 
isc_throw(DataSourceError, + "Invalid RRType in database for " << name << ": " << columns[0]); + } catch (const isc::dns::InvalidRRTTL& irttl) { + isc_throw(DataSourceError, + "Invalid TTL in database for " << name << ": " << columns[1]); } } - if (result_rrset) { - return (FindResult(result_status, result_rrset)); - } else if (records_found) { - return (FindResult(NXRRSET, isc::dns::ConstRRsetPtr())); + if (!result_rrset) { + if (records_found) { + result_status = NXRRSET; + } else { + result_status = NXDOMAIN; + } } else { - return (FindResult(NXDOMAIN, isc::dns::ConstRRsetPtr())); + sig_store.appendSignatures(result_rrset); } + return (FindResult(result_status, result_rrset)); } Name diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index 7e80b4304c..76c801c702 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -127,6 +127,7 @@ private: // some DNSSEC-'signed' data addRecord("A", "3600", "", "192.0.2.1"); addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"); addRecord("AAAA", "3600", "", "2001:db8::1"); addRecord("AAAA", "3600", "", "2001:db8::2"); addRecord("RRSIG", "3600", "", "AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); @@ -134,11 +135,18 @@ private: addRecord("CNAME", "3600", "", "www.example.org."); addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); addCurName("signedcname1.example.org."); + // special case might fail; sig is for cname, which isn't there (should be ignored) + // (ignoring of 'normal' other type is done above by www.) + addRecord("A", "3600", "", "192.0.2.1"); + addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addCurName("acnamesig1.example.org."); // let's pretend we have a database that is not careful // about the order in which it returns data addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); addRecord("AAAA", "3600", "", "2001:db8::2"); + addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"); addRecord("A", "3600", "", "192.0.2.1"); addRecord("RRSIG", "3600", "", "AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); addRecord("AAAA", "3600", "", "2001:db8::1"); @@ -147,12 +155,28 @@ private: addRecord("CNAME", "3600", "", "www.example.org."); addCurName("signedcname2.example.org."); + addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addRecord("A", "3600", "", "192.0.2.1"); + addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addCurName("acnamesig2.example.org."); + + addRecord("RRSIG", "3600", "", "CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addRecord("RRSIG", "3600", "", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + addRecord("A", "3600", "", "192.0.2.1"); + addCurName("acnamesig3.example.org."); + // also add some intentionally bad data cur_name.push_back(std::vector()); addCurName("emptyvector.example.org."); addRecord("A", "3600", "", "192.0.2.1"); addRecord("CNAME", "3600", "", "www.example.org."); addCurName("badcname.example.org."); + addRecord("A", "3600", "", "bad"); + addCurName("badrdata.example.org."); + addRecord("BAD_TYPE", "3600", "", "192.0.2.1"); + addCurName("badtype.example.org."); + addRecord("A", "badttl", "", "192.0.2.1"); + addCurName("badttl.example.org."); } }; @@ -263,7 +287,7 @@ TEST_F(DatabaseClientTest, find) { ZoneFinder::NXDOMAIN, 0, 0); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), - ZoneFinder::SUCCESS, 1, 1); + ZoneFinder::SUCCESS, 1, 2); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), ZoneFinder::SUCCESS, 2, 1); @@ -276,7 +300,7 @@ TEST_F(DatabaseClientTest, find) { doFindTest(finder, isc::dns::Name("signed2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), - ZoneFinder::SUCCESS, 1, 1); + ZoneFinder::SUCCESS, 1, 2); doFindTest(finder, isc::dns::Name("signed2.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), ZoneFinder::SUCCESS, 2, 1); @@ -287,6 +311,16 @@ TEST_F(DatabaseClientTest, find) { isc::dns::RRType::A(), isc::dns::RRType::CNAME(), ZoneFinder::CNAME, 1, 1); + doFindTest(finder, isc::dns::Name("acnamesig1.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + ZoneFinder::SUCCESS, 1, 1); + doFindTest(finder, isc::dns::Name("acnamesig2.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + ZoneFinder::SUCCESS, 1, 1); + doFindTest(finder, isc::dns::Name("acnamesig3.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + ZoneFinder::SUCCESS, 1, 1); + EXPECT_THROW(finder->find(isc::dns::Name("emptyvector.example.org."), isc::dns::RRType::A(), 
NULL, ZoneFinder::FIND_DEFAULT), @@ -295,6 +329,18 @@ TEST_F(DatabaseClientTest, find) { isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("badrdata.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("badtype.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_THROW(finder->find(isc::dns::Name("badttl.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); } From 86257c05755c8adbb19ce684546b718dd48a5ef8 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Fri, 5 Aug 2011 18:24:20 +0200 Subject: [PATCH 047/175] [trac1062] minor style cleanup --- src/lib/datasrc/database.cc | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 69a46108e8..dc688325fa 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -183,23 +183,25 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, if (cur_type == type) { addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[3]); - //isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); } else if (cur_type == isc::dns::RRType::CNAME()) { // There should be no other data, so cur_rrset should be empty, // except for signatures, of course if (result_rrset) { if (result_rrset->getRdataCount() > 0) { - isc_throw(DataSourceError, "CNAME found but it is not the only record for " + name.toText()); + isc_throw(DataSourceError, + "CNAME found but it is not the only record for " + + name.toText()); } } addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[3]); - //isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); result_status = CNAME; } else if (cur_type == isc::dns::RRType::RRSIG()) { // If we get signatures before we get the actual data, we 
can't know // which ones to keep and which to drop... // So we keep a separate store of any signature that may be relevant // and add them to the final RRset when we are done. + // A possible optimization here is to not store them for types we + // are certain we don't need isc::dns::rdata::RdataPtr cur_rrsig( isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); sig_store.addSig(cur_rrsig); From 69642fb8f55cb4741f977d3fbaacd5d12d742625 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Fri, 5 Aug 2011 19:09:36 +0200 Subject: [PATCH 048/175] [trac1062] some comment updates --- src/lib/datasrc/database.cc | 27 ++++++++++++++++----------- src/lib/datasrc/database.h | 24 +++++++++++++++++++++--- 2 files changed, 37 insertions(+), 14 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index dc688325fa..3002056338 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -70,8 +70,16 @@ DatabaseClient::Finder::Finder(boost::shared_ptr namespace { // Adds the given Rdata to the given RRset - // If the rrset does not exist, one is created - // adds the given rdata to the set + // If the rrset is an empty pointer, a new one is + // created with the given name, class, type and ttl + // The type is checked if the rrset exists, but the + // name is not. + // + // Then adds the given rdata to the set + // + // Raises a DataSourceError if the type does not + // match, or if the given rdata string does not + // parse correctly for the given type and class void addOrCreate(isc::dns::RRsetPtr& rrset, const isc::dns::Name& name, const isc::dns::RRClass& cls, @@ -114,9 +122,9 @@ namespace { // it to util. 
That would also provide an opportunity to add unit tests) class RRsigStore { public: - // add the given signature Rdata to the store - // The signature MUST be of the RRSIG type (the caller - // must make sure of this) + // Adds the given signature Rdata to the store + // The signature rdata MUST be of the RRSIG rdata type + // (the caller must make sure of this) void addSig(isc::dns::rdata::RdataPtr sig_rdata) { const isc::dns::RRType& type_covered = static_cast( @@ -185,13 +193,10 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[3]); } else if (cur_type == isc::dns::RRType::CNAME()) { // There should be no other data, so cur_rrset should be empty, - // except for signatures, of course if (result_rrset) { - if (result_rrset->getRdataCount() > 0) { - isc_throw(DataSourceError, - "CNAME found but it is not the only record for " + - name.toText()); - } + isc_throw(DataSourceError, + "CNAME found but it is not the only record for " + + name.toText()); } addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[3]); result_status = CNAME; diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index 047db3d090..5498694687 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -158,16 +158,34 @@ public: /** * \brief Find an RRset in the datasource * - * target is unused at this point, it was used in the original - * API to store the results for ANY queries, and we may reuse it - * for that, but we might choose a different approach. + * Searches the datasource for an RRset of the given name and + * type. If there is a CNAME at the given name, the CNAME rrset + * is returned. + * (this implementation is not complete, and currently only + * does full matches, CNAMES, and the signatures for matches and + * CNAMEs) + * \note target was used in the original design to handle ANY + * queries. 
This is not implemented yet, and may use + * target again for that, but it might also use something + * different. It is left in for compatibility at the moment. + * \note options are ignored at this moment * + * \exception DataSourceError when there is a problem reading + * the data from the database backend. + * This can be a connection, code, or + * data (parse) error. + * + * \param name The name to find + * \param type The RRType to find + * \param target Unused at this moment + * \param options Unused at this moment */ virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, const FindOptions options = FIND_DEFAULT) const; + /** * \brief The zone ID * From cc6d6b14603924a4ef2d86dfaf758447cca6a7ff Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Fri, 5 Aug 2011 20:30:22 +0200 Subject: [PATCH 049/175] [trac1062] test for rrsig typeCovered() --- src/lib/dns/rdata/generic/rrsig_46.cc | 2 +- src/lib/dns/tests/rdata_rrsig_unittest.cc | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/src/lib/dns/rdata/generic/rrsig_46.cc b/src/lib/dns/rdata/generic/rrsig_46.cc index 7d8c000119..fc8e3400c9 100644 --- a/src/lib/dns/rdata/generic/rrsig_46.cc +++ b/src/lib/dns/rdata/generic/rrsig_46.cc @@ -245,7 +245,7 @@ RRSIG::compare(const Rdata& other) const { const RRType& RRSIG::typeCovered() { - return impl_->covered_; + return (impl_->covered_); } // END_RDATA_NAMESPACE diff --git a/src/lib/dns/tests/rdata_rrsig_unittest.cc b/src/lib/dns/tests/rdata_rrsig_unittest.cc index 903021fb5e..ad49f76caf 100644 --- a/src/lib/dns/tests/rdata_rrsig_unittest.cc +++ b/src/lib/dns/tests/rdata_rrsig_unittest.cc @@ -47,6 +47,7 @@ TEST_F(Rdata_RRSIG_Test, fromText) { "f49t+sXKPzbipN9g+s1ZPiIyofc="); generic::RRSIG rdata_rrsig(rrsig_txt); EXPECT_EQ(rrsig_txt, rdata_rrsig.toText()); + EXPECT_EQ(isc::dns::RRType::A(), rdata_rrsig.typeCovered()); } From 2b6bcb84a17fc98ea0ea87df65e6a77829857ecd Mon Sep 17 00:00:00 2001 From: 
Jelte Jansen Date: Fri, 5 Aug 2011 20:56:19 +0200 Subject: [PATCH 050/175] [trac1062] doxygen update --- src/lib/datasrc/database.cc | 2 +- src/lib/datasrc/database.h | 8 +++++++- src/lib/datasrc/sqlite3_connection.cc | 9 +++++---- src/lib/datasrc/sqlite3_connection.h | 23 +++++++++++++++++++++++ src/lib/dns/tests/rdata_rrsig_unittest.cc | 1 - 5 files changed, 36 insertions(+), 7 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 3002056338..c3f67cd786 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -74,7 +74,7 @@ namespace { // created with the given name, class, type and ttl // The type is checked if the rrset exists, but the // name is not. - // + // // Then adds the given rdata to the set // // Raises a DataSourceError if the type does not diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index 5498694687..d82c86f771 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -75,6 +75,12 @@ public: /** * \brief Starts a new search for records of the given name in the given zone * + * The data searched by this call can be retrieved with subsequent calls to + * getNextRecord(). + * + * \exception DataSourceError if there is a problem connecting to the + * backend database + * + * \param zone_id The zone to search in, as returned by getZone() + * \param name The name of the records to find */ virtual void searchForRecords(int zone_id, const std::string& name); @@ -169,7 +175,7 @@ public: * target again for that, but it might also use something * different. It is left in for compatibility at the moment. * \note options are ignored at this moment - * + * * \exception DataSourceError when there is a problem reading * the data from the database backend. 
* This can be a connection, code, or diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index fa5f8310d2..70adde4f6f 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -351,19 +351,20 @@ SQLite3Connection::getNextRecord(std::vector& columns) { columns.push_back(convertToPlainChar(sqlite3_column_text( current_stmt, column))); } - return true; + return (true); } else if (rc == SQLITE_DONE) { // reached the end of matching rows sqlite3_reset(current_stmt); sqlite3_clear_bindings(current_stmt); - return false; + return (false); } sqlite3_reset(current_stmt); sqlite3_clear_bindings(current_stmt); - isc_throw(DataSourceError, "Unexpected failure in sqlite3_step"); + isc_throw(DataSourceError, + "Unexpected failure in sqlite3_step (sqlite result code " << rc << ")"); // Compilers might not realize isc_throw always throws - return false; + return (false); } } diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index ca41a0621c..ffb2470b8e 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_connection.h @@ -88,7 +88,30 @@ public: * element and the zone id in the second if it was. */ virtual std::pair getZone(const isc::dns::Name& name) const; + + /** + * \brief Start a new search for the given name in the given zone. + * + * This implements the searchForRecords from DatabaseConnection. + * This particular implementation does not raise DataSourceError. + * + * \param zone_id The zone to search in, as returned by getZone() + * \param name The name to find records for + */ virtual void searchForRecords(int zone_id, const std::string& name); + + /** + * \brief Retrieve the next record from the search started with + * searchForRecords + * + * This implements the getNextRecord from DatabaseConnection. + * See the documentation there for more information. 
+ * + * \param columns This vector will be cleared, and the fields of the record will + * be appended here as strings (in the order rdtype, ttl, sigtype, + * and rdata). If there was no data, the vector is untouched. + * \return true if there was a next record, false if there was not + */ virtual bool getNextRecord(std::vector& columns); private: /// \brief Private database data diff --git a/src/lib/dns/tests/rdata_rrsig_unittest.cc b/src/lib/dns/tests/rdata_rrsig_unittest.cc index ad49f76caf..3324b99de1 100644 --- a/src/lib/dns/tests/rdata_rrsig_unittest.cc +++ b/src/lib/dns/tests/rdata_rrsig_unittest.cc @@ -48,7 +48,6 @@ TEST_F(Rdata_RRSIG_Test, fromText) { generic::RRSIG rdata_rrsig(rrsig_txt); EXPECT_EQ(rrsig_txt, rdata_rrsig.toText()); EXPECT_EQ(isc::dns::RRType::A(), rdata_rrsig.typeCovered()); - } TEST_F(Rdata_RRSIG_Test, badText) { From c9d7e29600f7a80094bcda2c3bd87d8f07d813e9 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Sun, 7 Aug 2011 11:57:03 +0200 Subject: [PATCH 051/175] [801] Editorial fixes --- src/bin/bind10/creatorapi.txt | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/bin/bind10/creatorapi.txt b/src/bin/bind10/creatorapi.txt index a55099e01b..6100f39a1d 100644 --- a/src/bin/bind10/creatorapi.txt +++ b/src/bin/bind10/creatorapi.txt @@ -7,7 +7,7 @@ ports for now, but we should have some function where we can abstract it later. Goals ----- -* Be able to request a socket of any combination IP/IPv6 UDP/TCP bound to given +* Be able to request a socket of any combination IPv4/IPv6 UDP/TCP bound to given port and address (sockets that are not bound to anything can be created without privileges, therefore are not requested from the socket creator). * Allow to provide the same socket to multiple modules (eg. multiple running @@ -52,7 +52,7 @@ started by boss. There are two possibilities: * Let the msgq send messages about disconnected clients (eg. group message to some name). 
This one is better if we want to migrate to dbus, since dbus - already has this capability as well as sending the sockets inbound (at last it + already has this capability as well as sending the sockets inbound (at least it seems so on unix) and we could get rid of the unix-domain socket completely. * Keep the unix-domain connections open forever. Boss can remember which socket was sent to which connection and when the connection closes (because the From 56083614ae0e8c5177786528e85d348686bf9bc2 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Sun, 7 Aug 2011 12:26:36 +0200 Subject: [PATCH 052/175] [1061] Rename DatabaseConnection to DatabaseAbstraction As database connection might be confusing, use something better. Also renamed SQLiteConnection to SQLiteDatabase --- src/lib/datasrc/Makefile.am | 2 +- src/lib/datasrc/database.cc | 24 ++++---- src/lib/datasrc/database.h | 55 ++++++++++--------- ...ite3_connection.cc => sqlite3_database.cc} | 12 ++-- ...qlite3_connection.h => sqlite3_database.h} | 16 +++--- src/lib/datasrc/tests/Makefile.am | 2 +- src/lib/datasrc/tests/database_unittest.cc | 16 +++--- ...ittest.cc => sqlite3_database_unittest.cc} | 34 ++++++------ 8 files changed, 81 insertions(+), 80 deletions(-) rename src/lib/datasrc/{sqlite3_connection.cc => sqlite3_database.cc} (97%) rename src/lib/datasrc/{sqlite3_connection.h => sqlite3_database.h} (86%) rename src/lib/datasrc/tests/{sqlite3_connection_unittest.cc => sqlite3_database_unittest.cc} (73%) diff --git a/src/lib/datasrc/Makefile.am b/src/lib/datasrc/Makefile.am index e6bff58fea..6792365ccd 100644 --- a/src/lib/datasrc/Makefile.am +++ b/src/lib/datasrc/Makefile.am @@ -23,7 +23,7 @@ libdatasrc_la_SOURCES += result.h libdatasrc_la_SOURCES += logger.h logger.cc libdatasrc_la_SOURCES += client.h libdatasrc_la_SOURCES += database.h database.cc -libdatasrc_la_SOURCES += sqlite3_connection.h sqlite3_connection.cc +libdatasrc_la_SOURCES += sqlite3_database.h sqlite3_database.cc 
nodist_libdatasrc_la_SOURCES = datasrc_messages.h datasrc_messages.cc libdatasrc_la_LIBADD = $(top_builddir)/src/lib/exceptions/libexceptions.la diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 2264f2c7ab..2d30ba28c6 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -22,32 +22,32 @@ using isc::dns::Name; namespace isc { namespace datasrc { -DatabaseClient::DatabaseClient(boost::shared_ptr - connection) : - connection_(connection) +DatabaseClient::DatabaseClient(boost::shared_ptr + database) : + database_(database) { - if (connection_.get() == NULL) { + if (database_.get() == NULL) { isc_throw(isc::InvalidParameter, - "No connection provided to DatabaseClient"); + "No database provided to DatabaseClient"); } } DataSourceClient::FindResult DatabaseClient::findZone(const Name& name) const { - std::pair zone(connection_->getZone(name)); + std::pair zone(database_->getZone(name)); // Try exact first if (zone.first) { return (FindResult(result::SUCCESS, - ZoneFinderPtr(new Finder(connection_, + ZoneFinderPtr(new Finder(database_, zone.second)))); } // Then super domains // Start from 1, as 0 is covered above for (size_t i(1); i < name.getLabelCount(); ++i) { - zone = connection_->getZone(name.split(i)); + zone = database_->getZone(name.split(i)); if (zone.first) { return (FindResult(result::PARTIALMATCH, - ZoneFinderPtr(new Finder(connection_, + ZoneFinderPtr(new Finder(database_, zone.second)))); } } @@ -55,9 +55,9 @@ DatabaseClient::findZone(const Name& name) const { return (FindResult(result::NOTFOUND, ZoneFinderPtr())); } -DatabaseClient::Finder::Finder(boost::shared_ptr - connection, int zone_id) : - connection_(connection), +DatabaseClient::Finder::Finder(boost::shared_ptr + database, int zone_id) : + database_(database), zone_id_(zone_id) { } diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index 8e5c1564e5..aed4fcc05c 100644 --- a/src/lib/datasrc/database.h +++ 
b/src/lib/datasrc/database.h @@ -21,7 +21,7 @@ namespace isc { namespace datasrc { /** - * \brief Abstract connection to database with DNS data + * \brief Abstraction of lowlevel database with DNS data * * This class defines an interface to databases. Each supported database * will provide methods for accessing the data stored there in a generic @@ -39,10 +39,11 @@ namespace datasrc { * be better for that than copy constructor. * * \note The same application may create multiple connections to the same - * database. If the database allows having multiple open queries at one - * connection, the connection class may share it. + * database, having multiple instances of this class. If the database + * allows having multiple open queries at one connection, the connection + * class may share it. */ -class DatabaseConnection : boost::noncopyable { +class DatabaseAbstraction : boost::noncopyable { public: /** * \brief Destructor * * It is empty, but needs a virtual one, since we will use the derived * classes in polymorphic way. */ - virtual ~DatabaseConnection() { } + virtual ~DatabaseAbstraction() { } /** * \brief Retrieve a zone identifier * @@ -67,7 +68,7 @@ public: * was found. In case it was, the second part is internal zone ID. * This one will be passed to methods finding data in the zone. * It is not required to keep them, in which case whatever might - * be returned - the ID is only passed back to the connection as + * be returned - the ID is only passed back to the database as + * an opaque handle. */ virtual std::pair getZone(const isc::dns::Name& name) const = 0; @@ -78,36 +79,36 @@ public: * * This class (together with corresponding versions of ZoneFinder, * ZoneIterator, etc.) translates high-level data source queries to - * low-level calls on DatabaseConnection. It calls multiple queries + * low-level calls on DatabaseAbstraction. 
It calls multiple queries * if necessary and validates data from the database, allowing the - * DatabaseConnection to be just simple translation to SQL/other + * DatabaseAbstraction to be just simple translation to SQL/other * queries to database. * * While it is possible to subclass it for specific database in case * of special needs, it is not expected to be needed. This should just - * work as it is with whatever DatabaseConnection. + * work as it is with whatever DatabaseAbstraction. */ class DatabaseClient : public DataSourceClient { public: /** * \brief Constructor * - * It initializes the client with a connection. + * It initializes the client with a database. * - * \exception isc::InvalidParameter if connection is NULL. It might throw + * \exception isc::InvalidParameter if database is NULL. It might throw * standard allocation exception as well, but doesn't throw anything else. * - * \param connection The connection to use to get data. As the parameter - * suggests, the client takes ownership of the connection and will + * \param database The database to use to get data. As the parameter + * suggests, the client takes ownership of the database and will * delete it when itself deleted. */ - DatabaseClient(boost::shared_ptr connection); + DatabaseClient(boost::shared_ptr database); /** * \brief Corresponding ZoneFinder implementation * * The zone finder implementation for database data sources. Similarly * to the DatabaseClient, it translates the queries to methods of the - * connection. + * database. * * Application should not come directly in contact with this class * (it should handle it through generic ZoneFinder pointer), therefore @@ -122,13 +123,13 @@ public: /** * \brief Constructor * - * \param connection The connection (shared with DatabaseClient) to + * \param database The database (shared with DatabaseClient) to * be used for queries (the one asked for ID before). 
* \param zone_id The zone ID which was returned from - * DatabaseConnection::getZone and which will be passed to further - * calls to the connection. + * DatabaseAbstraction::getZone and which will be passed to further + * calls to the database. */ - Finder(boost::shared_ptr connection, int zone_id); + Finder(boost::shared_ptr database, int zone_id); virtual isc::dns::Name getOrigin() const; virtual isc::dns::RRClass getClass() const; virtual FindResult find(const isc::dns::Name& name, @@ -145,23 +146,23 @@ public: */ int zone_id() const { return (zone_id_); } /** - * \brief The database connection. + * \brief The database. * - * This function provides the database connection stored inside as + * This function provides the database stored inside as * passed to the constructor. This is meant for testing purposes and * normal applications shouldn't need it. */ - const DatabaseConnection& connection() const { - return (*connection_); + const DatabaseAbstraction& database() const { + return (*database_); } private: - boost::shared_ptr connection_; + boost::shared_ptr database_; const int zone_id_; }; /** * \brief Find a zone in the database * - * This queries connection's getZone to find the best matching zone. + * This queries database's getZone to find the best matching zone. * It will propagate whatever exceptions are thrown from that method * (which is not restricted in any way). * @@ -172,8 +173,8 @@ public: */ virtual FindResult findZone(const isc::dns::Name& name) const; private: - /// \brief Our connection. - const boost::shared_ptr connection_; + /// \brief Our database. 
+ const boost::shared_ptr database_; }; } diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_database.cc similarity index 97% rename from src/lib/datasrc/sqlite3_connection.cc rename to src/lib/datasrc/sqlite3_database.cc index 35db44620d..2fdd240cbe 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_database.cc @@ -14,7 +14,7 @@ #include -#include +#include #include #include @@ -44,7 +44,7 @@ struct SQLite3Parameters { */ }; -SQLite3Connection::SQLite3Connection(const std::string& filename, +SQLite3Database::SQLite3Database(const std::string& filename, const isc::dns::RRClass& rrclass) : dbparameters_(new SQLite3Parameters), class_(rrclass.toText()) @@ -215,7 +215,7 @@ checkAndSetupSchema(Initializer* initializer) { } void -SQLite3Connection::open(const std::string& name) { +SQLite3Database::open(const std::string& name) { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_CONNOPEN).arg(name); if (dbparameters_->db_ != NULL) { // There shouldn't be a way to trigger this anyway @@ -232,7 +232,7 @@ SQLite3Connection::open(const std::string& name) { initializer.move(dbparameters_); } -SQLite3Connection::~SQLite3Connection() { +SQLite3Database::~SQLite3Database() { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_DROPCONN); if (dbparameters_->db_ != NULL) { close(); @@ -241,7 +241,7 @@ SQLite3Connection::~SQLite3Connection() { } void -SQLite3Connection::close(void) { +SQLite3Database::close(void) { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_CONNCLOSE); if (dbparameters_->db_ == NULL) { isc_throw(DataSourceError, @@ -283,7 +283,7 @@ SQLite3Connection::close(void) { } std::pair -SQLite3Connection::getZone(const isc::dns::Name& name) const { +SQLite3Database::getZone(const isc::dns::Name& name) const { int rc; // Take the statement (simple SELECT id FROM zones WHERE...) 
diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_database.h similarity index 86% rename from src/lib/datasrc/sqlite3_connection.h rename to src/lib/datasrc/sqlite3_database.h index 484571599e..9607e691b6 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_database.h @@ -45,13 +45,13 @@ public: struct SQLite3Parameters; /** - * \brief Concrete implementation of DatabaseConnection for SQLite3 databases + * \brief Concrete implementation of DatabaseAbstraction for SQLite3 databases * * This opens one database file with our schema and serves data from there. * According to the design, it doesn't interpret the data in any way, it just * provides unified access to the DB. */ -class SQLite3Connection : public DatabaseConnection { +class SQLite3Database : public DatabaseAbstraction { public: /** * \brief Constructor @@ -63,21 +63,21 @@ public: * * \param filename The database file to be used. * \param rrclass Which class of data it should serve (while the database - * can contain multiple classes of data, single connection can provide - * only one class). + * file can contain multiple classes of data, single database can + * provide only one class). */ - SQLite3Connection(const std::string& filename, - const isc::dns::RRClass& rrclass); + SQLite3Database(const std::string& filename, + const isc::dns::RRClass& rrclass); /** * \brief Destructor * * Closes the database. */ - ~SQLite3Connection(); + ~SQLite3Database(); /** * \brief Look up a zone * - * This implements the getZone from DatabaseConnection and looks up a zone + * This implements the getZone from DatabaseAbstraction and looks up a zone * in the data. It looks for a zone with the exact given origin and class * passed to the constructor. 
* diff --git a/src/lib/datasrc/tests/Makefile.am b/src/lib/datasrc/tests/Makefile.am index c2e2b5caad..3667306f9a 100644 --- a/src/lib/datasrc/tests/Makefile.am +++ b/src/lib/datasrc/tests/Makefile.am @@ -29,7 +29,7 @@ run_unittests_SOURCES += zonetable_unittest.cc run_unittests_SOURCES += memory_datasrc_unittest.cc run_unittests_SOURCES += logger_unittest.cc run_unittests_SOURCES += database_unittest.cc -run_unittests_SOURCES += sqlite3_connection_unittest.cc +run_unittests_SOURCES += sqlite3_database_unittest.cc run_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) run_unittests_LDFLAGS = $(AM_LDFLAGS) $(GTEST_LDFLAGS) diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index c271a76dc8..7de3e8098d 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -30,7 +30,7 @@ namespace { * A virtual database connection that pretends it contains single zone -- * example.org. */ -class MockConnection : public DatabaseConnection { +class MockAbstraction : public DatabaseAbstraction { public: virtual std::pair getZone(const Name& name) const { if (name == Name("example.org")) { @@ -51,16 +51,16 @@ public: * times per test. */ void createClient() { - current_connection_ = new MockConnection(); - client_.reset(new DatabaseClient(shared_ptr( - current_connection_))); + current_database_ = new MockAbstraction(); + client_.reset(new DatabaseClient(shared_ptr( + current_database_))); } // Will be deleted by client_, just keep the current value for comparison. - MockConnection* current_connection_; + MockAbstraction* current_database_; shared_ptr client_; /** * Check the zone finder is a valid one and references the zone ID and - * connection available here. + * database available here. 
*/ void checkZoneFinder(const DataSourceClient::FindResult& zone) { ASSERT_NE(ZoneFinderPtr(), zone.zone_finder) << "No zone finder"; @@ -69,7 +69,7 @@ public: ASSERT_NE(shared_ptr(), finder) << "Wrong type of finder"; EXPECT_EQ(42, finder->zone_id()); - EXPECT_EQ(current_connection_, &finder->connection()); + EXPECT_EQ(current_database_, &finder->database()); } }; @@ -92,7 +92,7 @@ TEST_F(DatabaseClientTest, superZone) { } TEST_F(DatabaseClientTest, noConnException) { - EXPECT_THROW(DatabaseClient(shared_ptr()), + EXPECT_THROW(DatabaseClient(shared_ptr()), isc::InvalidParameter); } diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_database_unittest.cc similarity index 73% rename from src/lib/datasrc/tests/sqlite3_connection_unittest.cc rename to src/lib/datasrc/tests/sqlite3_database_unittest.cc index 1bdbe90206..cf82d190b5 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_database_unittest.cc @@ -12,7 +12,7 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. 
-#include +#include #include #include @@ -41,29 +41,29 @@ std::string SQLITE_DBFILE_NOTEXIST = TEST_DATA_DIR "/nodir/notexist"; // Opening works (the content is tested in different tests) TEST(SQLite3Open, common) { - EXPECT_NO_THROW(SQLite3Connection conn(SQLITE_DBFILE_EXAMPLE, - RRClass::IN())); + EXPECT_NO_THROW(SQLite3Database db(SQLITE_DBFILE_EXAMPLE, + RRClass::IN())); } // The file can't be opened TEST(SQLite3Open, notExist) { - EXPECT_THROW(SQLite3Connection conn(SQLITE_DBFILE_NOTEXIST, - RRClass::IN()), SQLite3Error); + EXPECT_THROW(SQLite3Database db(SQLITE_DBFILE_NOTEXIST, + RRClass::IN()), SQLite3Error); } // It rejects broken DB TEST(SQLite3Open, brokenDB) { - EXPECT_THROW(SQLite3Connection conn(SQLITE_DBFILE_BROKENDB, - RRClass::IN()), SQLite3Error); + EXPECT_THROW(SQLite3Database db(SQLITE_DBFILE_BROKENDB, + RRClass::IN()), SQLite3Error); } // Test we can create the schema on the fly TEST(SQLite3Open, memoryDB) { - EXPECT_NO_THROW(SQLite3Connection conn(SQLITE_DBFILE_MEMORY, - RRClass::IN())); + EXPECT_NO_THROW(SQLite3Database db(SQLITE_DBFILE_MEMORY, + RRClass::IN())); } -// Test fixture for querying the connection +// Test fixture for querying the db class SQLite3Conn : public ::testing::Test { public: SQLite3Conn() { @@ -71,33 +71,33 @@ public: } // So it can be re-created with different data void initConn(const std::string& filename, const RRClass& rrclass) { - conn.reset(new SQLite3Connection(filename, rrclass)); + db.reset(new SQLite3Database(filename, rrclass)); } - // The tested connection - std::auto_ptr conn; + // The tested db + std::auto_ptr db; }; // This zone exists in the data, so it should be found TEST_F(SQLite3Conn, getZone) { - std::pair result(conn->getZone(Name("example.com"))); + std::pair result(db->getZone(Name("example.com"))); EXPECT_TRUE(result.first); EXPECT_EQ(1, result.second); } // But it should find only the zone, nothing below it TEST_F(SQLite3Conn, subZone) { - 
EXPECT_FALSE(conn->getZone(Name("sub.example.com")).first); + EXPECT_FALSE(db->getZone(Name("sub.example.com")).first); } // This zone is not there at all TEST_F(SQLite3Conn, noZone) { - EXPECT_FALSE(conn->getZone(Name("example.org")).first); + EXPECT_FALSE(db->getZone(Name("example.org")).first); } // This zone is there, but in different class TEST_F(SQLite3Conn, noClass) { initConn(SQLITE_DBFILE_EXAMPLE, RRClass::CH()); - EXPECT_FALSE(conn->getZone(Name("example.com")).first); + EXPECT_FALSE(db->getZone(Name("example.com")).first); } } From 97153d16eb9ecb7281ed9dc76783091964e769dd Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Sun, 7 Aug 2011 12:33:33 +0200 Subject: [PATCH 053/175] [1061] Few comments --- src/lib/datasrc/database.h | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index aed4fcc05c..7a6cd6bc46 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -130,6 +130,8 @@ public: * calls to the database. */ Finder(boost::shared_ptr database, int zone_id); + // The following three methods are just implementations of inherited + // ZoneFinder's pure virtual methods. virtual isc::dns::Name getOrigin() const; virtual isc::dns::RRClass getClass() const; virtual FindResult find(const isc::dns::Name& name, @@ -167,9 +169,11 @@ public: * (which is not restricted in any way). * * \param name Name of the zone or data contained there. - * \return Result containing the code and instance of Finder, if anything - * is found. Applications should not rely on the specific class being - * returned, though. + * \return FindResult containing the code and an instance of Finder, if + * anything is found. However, application should not rely on the + * ZoneFinder being instance of Finder (possible subclass of this class + * may return something else and it may change in future versions), it + * should use it as a ZoneFinder only. 
*/ virtual FindResult findZone(const isc::dns::Name& name) const; private: From 46b961d69aff3a2e4d1cb7f3d0910bfcc66d1e19 Mon Sep 17 00:00:00 2001 From: JINMEI Tatuya Date: Mon, 8 Aug 2011 01:41:39 -0700 Subject: [PATCH 054/175] [1062] various types of editorial/trivial fixes. --- src/lib/datasrc/database.cc | 47 ++++++++++--------- src/lib/datasrc/tests/database_unittest.cc | 6 +-- .../tests/sqlite3_connection_unittest.cc | 11 +++-- 3 files changed, 35 insertions(+), 29 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index c3f67cd786..ee257887da 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -12,6 +12,8 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. +#include + #include #include @@ -101,7 +103,8 @@ namespace { } if (rdata_str != "") { try { - rrset->addRdata(isc::dns::rdata::createRdata(type, cls, rdata_str)); + rrset->addRdata(isc::dns::rdata::createRdata(type, cls, + rdata_str)); } catch (const isc::dns::rdata::InvalidRdataText& ivrt) { // at this point, rrset may have been initialised for no reason, // and won't be used. But the caller would drop the shared_ptr @@ -137,13 +140,13 @@ namespace { // Returns true if this store contains signatures covering the // given type - bool haveSigsFor(isc::dns::RRType type) { + bool haveSigsFor(isc::dns::RRType type) const { return (sigs.count(type) > 0); } // If the store contains signatures for the type of the given // rrset, they are appended to it. 
- void appendSignatures(isc::dns::RRsetPtr& rrset) { + void appendSignatures(isc::dns::RRsetPtr& rrset) const { if (haveSigsFor(rrset->getType())) { BOOST_FOREACH(isc::dns::rdata::RdataPtr sig, sigs.find(rrset->getType())->second) { @@ -180,8 +183,8 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, } if (columns.size() != 4) { - isc_throw(DataSourceError, - "Datasource backend did not return 4 columns in getNextRecord()"); + isc_throw(DataSourceError, "Datasource backend did not return 4 " + "columns in getNextRecord()"); } try { @@ -190,33 +193,35 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, //cur_sigtype(columns[2]); if (cur_type == type) { - addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[3]); + addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, + columns[3]); } else if (cur_type == isc::dns::RRType::CNAME()) { // There should be no other data, so cur_rrset should be empty, if (result_rrset) { - isc_throw(DataSourceError, - "CNAME found but it is not the only record for " + - name.toText()); + isc_throw(DataSourceError, "CNAME found but it is not " + "the only record for " + name.toText()); } - addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[3]); + addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, + columns[3]); result_status = CNAME; } else if (cur_type == isc::dns::RRType::RRSIG()) { - // If we get signatures before we get the actual data, we can't know - // which ones to keep and which to drop... - // So we keep a separate store of any signature that may be relevant - // and add them to the final RRset when we are done. - // A possible optimization here is to not store them for types we - // are certain we don't need + // If we get signatures before we get the actual data, we + // can't know which ones to keep and which to drop... + // So we keep a separate store of any signature that may be + // relevant and add them to the final RRset when we are done. 
+ // A possible optimization here is to not store them for types + // we are certain we don't need isc::dns::rdata::RdataPtr cur_rrsig( - isc::dns::rdata::createRdata(cur_type, getClass(), columns[3])); + isc::dns::rdata::createRdata(cur_type, getClass(), + columns[3])); sig_store.addSig(cur_rrsig); } } catch (const isc::dns::InvalidRRType& irt) { - isc_throw(DataSourceError, - "Invalid RRType in database for " << name << ": " << columns[0]); + isc_throw(DataSourceError, "Invalid RRType in database for " << + name << ": " << columns[0]); } catch (const isc::dns::InvalidRRTTL& irttl) { - isc_throw(DataSourceError, - "Invalid TTL in database for " << name << ": " << columns[1]); + isc_throw(DataSourceError, "Invalid TTL in database for " << + name << ": " << columns[1]); } } diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index 76c801c702..c31593217f 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -65,9 +65,9 @@ public: virtual bool getNextRecord(std::vector& columns) { if (cur_record < cur_name.size()) { columns = cur_name[cur_record++]; - return true; + return (true); } else { - return false; + return (false); } }; @@ -268,7 +268,7 @@ TEST_F(DatabaseClientTest, find) { shared_ptr finder( dynamic_pointer_cast(zone.zone_finder)); EXPECT_EQ(42, finder->zone_id()); - isc::dns::Name name("www.example.org."); + const isc::dns::Name name("www.example.org."); doFindTest(finder, isc::dns::Name("www.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc index 1279910114..0d0b8c35f6 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -103,10 +103,11 @@ TEST_F(SQLite3Conn, noClass) { } namespace { - // Simple function to cound the number of records for + // Simple 
function to count the number of records for // any name size_t countRecords(boost::shared_ptr& conn, - int zone_id, const std::string& name) { + int zone_id, const std::string& name) + { conn->searchForRecords(zone_id, name); size_t count = 0; std::vector columns; @@ -114,16 +115,16 @@ namespace { EXPECT_EQ(4, columns.size()); ++count; } - return count; + return (count); } } } TEST_F(SQLite3Conn, getRecords) { - std::pair zone_info(conn->getZone(Name("example.com"))); + const std::pair zone_info(conn->getZone(Name("example.com"))); ASSERT_TRUE(zone_info.first); - int zone_id = zone_info.second; + const int zone_id = zone_info.second; ASSERT_EQ(1, zone_id); // without search, getNext() should return false From 9351dbcc88ccdd6aa83d72f432f19a76c031124b Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Mon, 8 Aug 2011 10:14:06 -0500 Subject: [PATCH 055/175] [1011] little more docbook formatting --- doc/guide/bind10-guide.xml | 148 ++++++++++++++++--------------------- 1 file changed, 62 insertions(+), 86 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 3024467822..5c8840b324 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1704,37 +1704,27 @@ then change those defaults with config set Resolver/forward_addresses[0]/address severity):
- - - - FATAL - - + + + FATAL + - - - ERROR - - + + ERROR + - - - WARN - - + + WARN + - - - INFO - - + + INFO + - - - DEBUG - - - + + DEBUG + + @@ -1828,37 +1818,27 @@ then change those defaults with config set Resolver/forward_addresses[0]/address
destination (string) - + - The destination is the type of output. It can be one of: + The destination is the type of output. It can be one of: - + - + - + + console + - - - console - - - - - - file - - - - - - syslog - - - - + + file + + + syslog + +
@@ -1878,27 +1858,30 @@ then change those defaults with config set Resolver/forward_addresses[0]/address destination is 'console' - 'output' must be one of 'stdout' (messages printed to standard output) or 'stderr' (messages printed to standard error). + The value of output must be one of 'stdout' + (messages printed to standard output) or 'stderr' + (messages printed to standard error). - destination is 'file' - The value of output is interpreted as a file name; log messages will be appended to this file. + The value of output is interpreted as a file name; + log messages will be appended to this file. - destination is 'syslog' - The value of output is interpreted as the syslog facility (e.g. 'local0') that should be used for log messages. + The value of output is interpreted as the syslog + facility (e.g. 'local0') that should be used for + log messages. @@ -1911,57 +1894,50 @@ then change those defaults with config set Resolver/forward_addresses[0]/address
-
-
flush (true or false) - - - Flush buffers after each log message. Doing this will - reduce performance but will ensure that if the program - terminates abnormally, all messages up to the point of - termination are output. - - + + Flush buffers after each log message. Doing this will + reduce performance but will ensure that if the program + terminates abnormally, all messages up to the point of + termination are output. +
maxsize (integer) - + + Only relevant when destination is file, this is maximum + file size of output files in bytes. When the maximum + size is reached, the file is renamed (a ".1" is appended + to the name - if a ".1" file exists, it is renamed ".2" + etc.) and a new file opened. + - Only relevant when destination is file, this is maximum - file size of output files in bytes. When the maximum size - is reached, the file is renamed (a ".1" is appended to - the name - if a ".1" file exists, it is renamed ".2" - etc.) and a new file opened. - - - - - - If this is 0, no maximum file size is used. - - + + If this is 0, no maximum file size is used. +
maxver (integer) - + + Maximum number of old log files to keep around when + rolling the output file. Only relevant when destination + is 'file'. + - Maximum number of old log files to keep around when - rolling the output file. Only relevant when destination - if 'file'. - 
+
+
Example session From 65e4595c21bf9c01fb0b7da61577ae8a79d29c30 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Tue, 9 Aug 2011 09:02:03 +0200 Subject: [PATCH 056/175] [1061] Yet another renaming This time to DatabaseAccessor, Abstraction might be misleading in other ways. --- src/lib/datasrc/Makefile.am | 2 +- src/lib/datasrc/database.cc | 4 ++-- src/lib/datasrc/database.h | 22 +++++++++---------- ...qlite3_database.cc => sqlite3_accessor.cc} | 2 +- ...{sqlite3_database.h => sqlite3_accessor.h} | 10 ++++----- src/lib/datasrc/tests/Makefile.am | 2 +- src/lib/datasrc/tests/database_unittest.cc | 10 ++++----- ...ittest.cc => sqlite3_accessor_unittest.cc} | 2 +- 8 files changed, 27 insertions(+), 27 deletions(-) rename src/lib/datasrc/{sqlite3_database.cc => sqlite3_accessor.cc} (99%) rename src/lib/datasrc/{sqlite3_database.h => sqlite3_accessor.h} (91%) rename src/lib/datasrc/tests/{sqlite3_database_unittest.cc => sqlite3_accessor_unittest.cc} (98%) diff --git a/src/lib/datasrc/Makefile.am b/src/lib/datasrc/Makefile.am index 6792365ccd..db67781917 100644 --- a/src/lib/datasrc/Makefile.am +++ b/src/lib/datasrc/Makefile.am @@ -23,7 +23,7 @@ libdatasrc_la_SOURCES += result.h libdatasrc_la_SOURCES += logger.h logger.cc libdatasrc_la_SOURCES += client.h libdatasrc_la_SOURCES += database.h database.cc -libdatasrc_la_SOURCES += sqlite3_database.h sqlite3_database.cc +libdatasrc_la_SOURCES += sqlite3_accessor.h sqlite3_accessor.cc nodist_libdatasrc_la_SOURCES = datasrc_messages.h datasrc_messages.cc libdatasrc_la_LIBADD = $(top_builddir)/src/lib/exceptions/libexceptions.la diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 2d30ba28c6..0e1418dfbd 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -22,7 +22,7 @@ using isc::dns::Name; namespace isc { namespace datasrc { -DatabaseClient::DatabaseClient(boost::shared_ptr +DatabaseClient::DatabaseClient(boost::shared_ptr database) : database_(database) { @@ -55,7 +55,7 
@@ DatabaseClient::findZone(const Name& name) const { return (FindResult(result::NOTFOUND, ZoneFinderPtr())); } -DatabaseClient::Finder::Finder(boost::shared_ptr +DatabaseClient::Finder::Finder(boost::shared_ptr database, int zone_id) : database_(database), zone_id_(zone_id) diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index 7a6cd6bc46..1f6bd229ea 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -43,7 +43,7 @@ namespace datasrc { * allows having multiple open queries at one connection, the connection * class may share it. */ -class DatabaseAbstraction : boost::noncopyable { +class DatabaseAccessor : boost::noncopyable { public: /** * \brief Destructor @@ -51,7 +51,7 @@ public: * It is empty, but needs a virtual one, since we will use the derived * classes in polymorphic way. */ - virtual ~DatabaseAbstraction() { } + virtual ~DatabaseAccessor() { } /** * \brief Retrieve a zone identifier * @@ -79,14 +79,14 @@ public: * * This class (together with corresponding versions of ZoneFinder, * ZoneIterator, etc.) translates high-level data source queries to - * low-level calls on DatabaseAbstraction. It calls multiple queries + * low-level calls on DatabaseAccessor. It calls multiple queries * if necessary and validates data from the database, allowing the - * DatabaseAbstraction to be just simple translation to SQL/other + * DatabaseAccessor to be just simple translation to SQL/other * queries to database. * * While it is possible to subclass it for specific database in case * of special needs, it is not expected to be needed. This should just - * work as it is with whatever DatabaseAbstraction. + * work as it is with whatever DatabaseAccessor. */ class DatabaseClient : public DataSourceClient { public: @@ -102,7 +102,7 @@ public: * suggests, the client takes ownership of the database and will * delete it when itself deleted. 
*/ - DatabaseClient(boost::shared_ptr database); + DatabaseClient(boost::shared_ptr database); /** * \brief Corresponding ZoneFinder implementation * @@ -126,10 +126,10 @@ public: * \param database The database (shared with DatabaseClient) to * be used for queries (the one asked for ID before). * \param zone_id The zone ID which was returned from - * DatabaseAbstraction::getZone and which will be passed to further + * DatabaseAccessor::getZone and which will be passed to further * calls to the database. */ - Finder(boost::shared_ptr database, int zone_id); + Finder(boost::shared_ptr database, int zone_id); // The following three methods are just implementations of inherited // ZoneFinder's pure virtual methods. virtual isc::dns::Name getOrigin() const; @@ -154,11 +154,11 @@ public: * passed to the constructor. This is meant for testing purposes and * normal applications shouldn't need it. */ - const DatabaseAbstraction& database() const { + const DatabaseAccessor& database() const { return (*database_); } private: - boost::shared_ptr database_; + boost::shared_ptr database_; const int zone_id_; }; /** @@ -178,7 +178,7 @@ public: virtual FindResult findZone(const isc::dns::Name& name) const; private: /// \brief Our database. 
- const boost::shared_ptr database_; + const boost::shared_ptr database_; }; } diff --git a/src/lib/datasrc/sqlite3_database.cc b/src/lib/datasrc/sqlite3_accessor.cc similarity index 99% rename from src/lib/datasrc/sqlite3_database.cc rename to src/lib/datasrc/sqlite3_accessor.cc index 2fdd240cbe..352768de68 100644 --- a/src/lib/datasrc/sqlite3_database.cc +++ b/src/lib/datasrc/sqlite3_accessor.cc @@ -14,7 +14,7 @@ #include -#include +#include #include #include diff --git a/src/lib/datasrc/sqlite3_database.h b/src/lib/datasrc/sqlite3_accessor.h similarity index 91% rename from src/lib/datasrc/sqlite3_database.h rename to src/lib/datasrc/sqlite3_accessor.h index 9607e691b6..0d7ddeebc7 100644 --- a/src/lib/datasrc/sqlite3_database.h +++ b/src/lib/datasrc/sqlite3_accessor.h @@ -13,8 +13,8 @@ // PERFORMANCE OF THIS SOFTWARE. -#ifndef __DATASRC_SQLITE3_CONNECTION_H -#define __DATASRC_SQLITE3_CONNECTION_H +#ifndef __DATASRC_SQLITE3_ACCESSOR_H +#define __DATASRC_SQLITE3_ACCESSOR_H #include @@ -45,13 +45,13 @@ public: struct SQLite3Parameters; /** - * \brief Concrete implementation of DatabaseAbstraction for SQLite3 databases + * \brief Concrete implementation of DatabaseAccessor for SQLite3 databases * * This opens one database file with our schema and serves data from there. * According to the design, it doesn't interpret the data in any way, it just * provides unified access to the DB. */ -class SQLite3Database : public DatabaseAbstraction { +class SQLite3Database : public DatabaseAccessor { public: /** * \brief Constructor @@ -77,7 +77,7 @@ public: /** * \brief Look up a zone * - * This implements the getZone from DatabaseAbstraction and looks up a zone + * This implements the getZone from DatabaseAccessor and looks up a zone * in the data. It looks for a zone with the exact given origin and class * passed to the constructor. 
* diff --git a/src/lib/datasrc/tests/Makefile.am b/src/lib/datasrc/tests/Makefile.am index 3667306f9a..4a7f322b24 100644 --- a/src/lib/datasrc/tests/Makefile.am +++ b/src/lib/datasrc/tests/Makefile.am @@ -29,7 +29,7 @@ run_unittests_SOURCES += zonetable_unittest.cc run_unittests_SOURCES += memory_datasrc_unittest.cc run_unittests_SOURCES += logger_unittest.cc run_unittests_SOURCES += database_unittest.cc -run_unittests_SOURCES += sqlite3_database_unittest.cc +run_unittests_SOURCES += sqlite3_accessor_unittest.cc run_unittests_CPPFLAGS = $(AM_CPPFLAGS) $(GTEST_INCLUDES) run_unittests_LDFLAGS = $(AM_LDFLAGS) $(GTEST_LDFLAGS) diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index 7de3e8098d..ab4423ec53 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -30,7 +30,7 @@ namespace { * A virtual database connection that pretends it contains single zone -- * example.org. */ -class MockAbstraction : public DatabaseAbstraction { +class MockAccessor : public DatabaseAccessor { public: virtual std::pair getZone(const Name& name) const { if (name == Name("example.org")) { @@ -51,12 +51,12 @@ public: * times per test. */ void createClient() { - current_database_ = new MockAbstraction(); - client_.reset(new DatabaseClient(shared_ptr( + current_database_ = new MockAccessor(); + client_.reset(new DatabaseClient(shared_ptr( current_database_))); } // Will be deleted by client_, just keep the current value for comparison. 
- MockAbstraction* current_database_; + MockAccessor* current_database_; shared_ptr client_; /** * Check the zone finder is a valid one and references the zone ID and @@ -92,7 +92,7 @@ TEST_F(DatabaseClientTest, superZone) { } TEST_F(DatabaseClientTest, noConnException) { - EXPECT_THROW(DatabaseClient(shared_ptr()), + EXPECT_THROW(DatabaseClient(shared_ptr()), isc::InvalidParameter); } diff --git a/src/lib/datasrc/tests/sqlite3_database_unittest.cc b/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc similarity index 98% rename from src/lib/datasrc/tests/sqlite3_database_unittest.cc rename to src/lib/datasrc/tests/sqlite3_accessor_unittest.cc index cf82d190b5..101c02b420 100644 --- a/src/lib/datasrc/tests/sqlite3_database_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc @@ -12,7 +12,7 @@ // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. -#include +#include #include #include From f6a1807c25d85a0ca762bfa276ebac4a3430e7c7 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 9 Aug 2011 09:18:56 -0500 Subject: [PATCH 057/175] [1011] small formatting change This change makes no difference. I am only doing this to test my pre-receive hook in a trac branch. --- README | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README b/README index a6509da2d2..8d495e0ca3 100644 --- a/README +++ b/README @@ -67,8 +67,8 @@ e.g., Operating-System specific tips: - FreeBSD - You may need to install a python binding for sqlite3 by hand. A - sample procedure is as follows: + You may need to install a python binding for sqlite3 by hand. 
+ A sample procedure is as follows: - add the following to /etc/make.conf PYTHON_VERSION=3.1 - build and install the python binding from ports, assuming the top From f82dc7b09f470f79ed2bf099216fa64c76528d3b Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Tue, 9 Aug 2011 11:19:13 +0200 Subject: [PATCH 058/175] [1062] addressed review comments --- src/lib/datasrc/database.cc | 270 +++++++++--------- src/lib/datasrc/database.h | 53 +++- src/lib/datasrc/sqlite3_connection.cc | 76 +++-- src/lib/datasrc/sqlite3_connection.h | 26 +- src/lib/datasrc/tests/database_unittest.cc | 215 +++++++++++++- .../tests/sqlite3_connection_unittest.cc | 148 ++++++++-- 6 files changed, 588 insertions(+), 200 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index ee257887da..8f13f525a4 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -71,93 +71,87 @@ DatabaseClient::Finder::Finder(boost::shared_ptr { } namespace { - // Adds the given Rdata to the given RRset - // If the rrset is an empty pointer, a new one is - // created with the given name, class, type and ttl - // The type is checked if the rrset exists, but the - // name is not. 
- // - // Then adds the given rdata to the set - // - // Raises a DataSourceError if the type does not - // match, or if the given rdata string does not - // parse correctly for the given type and class - void addOrCreate(isc::dns::RRsetPtr& rrset, - const isc::dns::Name& name, - const isc::dns::RRClass& cls, - const isc::dns::RRType& type, - const isc::dns::RRTTL& ttl, - const std::string& rdata_str) - { - if (!rrset) { - rrset.reset(new isc::dns::RRset(name, cls, type, ttl)); - } else { - if (ttl < rrset->getTTL()) { - rrset->setTTL(ttl); - } - // make sure the type is correct - if (type != rrset->getType()) { - isc_throw(DataSourceError, - "attempt to add multiple types to RRset in find()"); - } +// Adds the given Rdata to the given RRset +// If the rrset is an empty pointer, a new one is +// created with the given name, class, type and ttl +// The type is checked if the rrset exists, but the +// name is not. +// +// Then adds the given rdata to the set +// +// Raises a DataSourceError if the type does not +// match, or if the given rdata string does not +// parse correctly for the given type and class +void addOrCreate(isc::dns::RRsetPtr& rrset, + const isc::dns::Name& name, + const isc::dns::RRClass& cls, + const isc::dns::RRType& type, + const isc::dns::RRTTL& ttl, + const std::string& rdata_str) +{ + if (!rrset) { + rrset.reset(new isc::dns::RRset(name, cls, type, ttl)); + } else { + if (ttl < rrset->getTTL()) { + rrset->setTTL(ttl); } - if (rdata_str != "") { - try { - rrset->addRdata(isc::dns::rdata::createRdata(type, cls, - rdata_str)); - } catch (const isc::dns::rdata::InvalidRdataText& ivrt) { - // at this point, rrset may have been initialised for no reason, - // and won't be used. But the caller would drop the shared_ptr - // on such an error anyway, so we don't care. - isc_throw(DataSourceError, - "bad rdata in database for " << name.toText() << " " - << type.toText() << " " << ivrt.what()); + // make sure the type is correct + // TODO Assert? 
+ if (type != rrset->getType()) { + isc_throw(DataSourceError, + "attempt to add multiple types to RRset in find()"); + } + } + try { + rrset->addRdata(isc::dns::rdata::createRdata(type, cls, rdata_str)); + } catch (const isc::dns::rdata::InvalidRdataText& ivrt) { + // at this point, rrset may have been initialised for no reason, + // and won't be used. But the caller would drop the shared_ptr + // on such an error anyway, so we don't care. + isc_throw(DataSourceError, + "bad rdata in database for " << name << " " + << type << ": " << ivrt.what()); + } +} + +// This class keeps a short-lived store of RRSIG records encountered +// during a call to find(). If the backend happens to return signatures +// before the actual data, we might not know which signatures we will need +// So if they may be relevant, we store the in this class. +// +// (If this class seems useful in other places, we might want to move +// it to util. That would also provide an opportunity to add unit tests) +class RRsigStore { +public: + // Adds the given signature Rdata to the store + // The signature rdata MUST be of the RRSIG rdata type + // (the caller must make sure of this). + // NOTE: if we move this class to a public namespace, + // we should add a type_covered argument, so as not + // to have to do this cast here. + void addSig(isc::dns::rdata::RdataPtr sig_rdata) { + const isc::dns::RRType& type_covered = + static_cast( + sig_rdata.get())->typeCovered(); + sigs[type_covered].push_back(sig_rdata); + } + + // If the store contains signatures for the type of the given + // rrset, they are appended to it. + void appendSignatures(isc::dns::RRsetPtr& rrset) const { + std::map >::const_iterator + found = sigs.find(rrset->getType()); + if (found != sigs.end()) { + BOOST_FOREACH(isc::dns::rdata::RdataPtr sig, found->second) { + rrset->addRRsig(sig); } } } - // This class keeps a short-lived store of RRSIG records encountered - // during a call to find(). 
If the backend happens to return signatures - // before the actual data, we might not know which signatures we will need - // So if they may be relevant, we store the in this class. - // - // (If this class seems useful in other places, we might want to move - // it to util. That would also provide an opportunity to add unit tests) - class RRsigStore { - public: - // Adds the given signature Rdata to the store - // The signature rdata MUST be of the RRSIG rdata type - // (the caller must make sure of this) - void addSig(isc::dns::rdata::RdataPtr sig_rdata) { - const isc::dns::RRType& type_covered = - static_cast( - sig_rdata.get())->typeCovered(); - if (!haveSigsFor(type_covered)) { - sigs[type_covered] = std::vector(); - } - sigs.find(type_covered)->second.push_back(sig_rdata); - } - - // Returns true if this store contains signatures covering the - // given type - bool haveSigsFor(isc::dns::RRType type) const { - return (sigs.count(type) > 0); - } - - // If the store contains signatures for the type of the given - // rrset, they are appended to it. 
- void appendSignatures(isc::dns::RRsetPtr& rrset) const { - if (haveSigsFor(rrset->getType())) { - BOOST_FOREACH(isc::dns::rdata::RdataPtr sig, - sigs.find(rrset->getType())->second) { - rrset->addRRsig(sig); - } - } - } - - private: - std::map > sigs; - }; +private: + std::map > sigs; +}; } @@ -174,55 +168,75 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, ZoneFinder::Result result_status = SUCCESS; RRsigStore sig_store; - connection_->searchForRecords(zone_id_, name.toText()); + try { + connection_->searchForRecords(zone_id_, name.toText()); - std::vector columns; - while (connection_->getNextRecord(columns)) { - if (!records_found) { - records_found = true; - } - - if (columns.size() != 4) { - isc_throw(DataSourceError, "Datasource backend did not return 4 " - "columns in getNextRecord()"); - } - - try { - const isc::dns::RRType cur_type(columns[0]); - const isc::dns::RRTTL cur_ttl(columns[1]); - //cur_sigtype(columns[2]); - - if (cur_type == type) { - addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, - columns[3]); - } else if (cur_type == isc::dns::RRType::CNAME()) { - // There should be no other data, so cur_rrset should be empty, - if (result_rrset) { - isc_throw(DataSourceError, "CNAME found but it is not " - "the only record for " + name.toText()); - } - addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, - columns[3]); - result_status = CNAME; - } else if (cur_type == isc::dns::RRType::RRSIG()) { - // If we get signatures before we get the actual data, we - // can't know which ones to keep and which to drop... - // So we keep a separate store of any signature that may be - // relevant and add them to the final RRset when we are done. 
- // A possible optimization here is to not store them for types - // we are certain we don't need - isc::dns::rdata::RdataPtr cur_rrsig( - isc::dns::rdata::createRdata(cur_type, getClass(), - columns[3])); - sig_store.addSig(cur_rrsig); + std::string columns[DatabaseConnection::RecordColumnCount]; + while (connection_->getNextRecord(columns, + DatabaseConnection::RecordColumnCount)) { + if (!records_found) { + records_found = true; + } + + try { + const isc::dns::RRType cur_type(columns[DatabaseConnection::TYPE_COLUMN]); + const isc::dns::RRTTL cur_ttl(columns[DatabaseConnection::TTL_COLUMN]); + // The sigtype column was an optimization for finding the relevant + // RRSIG RRs for a lookup. Currently this column is not used in this + // revised datasource implementation. We should either start using it + // again, or remove it from use completely (i.e. also remove it from + // the schema and the backend implementation). + // Note that because we don't use it now, we also won't notice it if + // the value is wrong (i.e. if the sigtype column contains an rrtype + // that is different from the actual value of the 'type covered' field + // in the RRSIG Rdata). + //cur_sigtype(columns[SIGTYPE_COLUMN]); + + if (cur_type == type) { + addOrCreate(result_rrset, name, getClass(), cur_type, + cur_ttl, columns[DatabaseConnection::RDATA_COLUMN]); + } else if (cur_type == isc::dns::RRType::CNAME()) { + // There should be no other data, so result_rrset should be empty. + if (result_rrset) { + isc_throw(DataSourceError, "CNAME found but it is not " + "the only record for " + name.toText()); + } + addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, + columns[DatabaseConnection::RDATA_COLUMN]); + result_status = CNAME; + } else if (cur_type == isc::dns::RRType::RRSIG()) { + // If we get signatures before we get the actual data, we + // can't know which ones to keep and which to drop...
+ // So we keep a separate store of any signature that may be + // relevant and add them to the final RRset when we are done. + // A possible optimization here is to not store them for types + // we are certain we don't need + sig_store.addSig(isc::dns::rdata::createRdata(cur_type, + getClass(), + columns[DatabaseConnection::RDATA_COLUMN])); + } + } catch (const isc::dns::InvalidRRType& irt) { + isc_throw(DataSourceError, "Invalid RRType in database for " << + name << ": " << columns[DatabaseConnection::TYPE_COLUMN]); + } catch (const isc::dns::InvalidRRTTL& irttl) { + isc_throw(DataSourceError, "Invalid TTL in database for " << + name << ": " << columns[DatabaseConnection::TTL_COLUMN]); + } catch (const isc::dns::rdata::InvalidRdataText& ird) { + isc_throw(DataSourceError, "Invalid rdata in database for " << + name << ": " << columns[DatabaseConnection::RDATA_COLUMN]); } - } catch (const isc::dns::InvalidRRType& irt) { - isc_throw(DataSourceError, "Invalid RRType in database for " << - name << ": " << columns[0]); - } catch (const isc::dns::InvalidRRTTL& irttl) { - isc_throw(DataSourceError, "Invalid TTL in database for " << - name << ": " << columns[1]); } + } catch (const DataSourceError& dse) { + // call cleanup and rethrow + connection_->resetSearch(); + throw; + } catch (const isc::Exception& isce) { +// // cleanup, change it to a DataSourceError and rethrow + connection_->resetSearch(); + isc_throw(DataSourceError, isce.what()); + } catch (const std::exception& ex) { + connection_->resetSearch(); + throw; } if (!result_rrset) { diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index d82c86f771..0632f64321 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -92,14 +92,59 @@ public: * Returns a boolean specifying whether or not there was more data to read. * In the case of a database error, a DatasourceError is thrown. 
* + * The columns argument is an array of std::strings consisting of + * DatabaseConnection::RecordColumnCount elements, the elements of which + * are defined in DatabaseConnection::RecordColumns, in their basic + * string representation. + * + * If you are implementing a derived database connection class, you + * should have this method check the column_count value, and fill the + * array with strings conforming to their description in RecordColumns. + * * \exception DataSourceError if there was an error reading from the database * - * \param columns This vector will be cleared, and the fields of the record will - * be appended here as strings (in the order rdtype, ttl, sigtype, - * and rdata). If there was no data, the vector is untouched. + * \param columns The elements of this array will be filled with the data + * for one record as defined by RecordColumns. + * If there was no data, the array is untouched. + * \return true if there was a next record, false if there was not */ - virtual bool getNextRecord(std::vector& columns) = 0; + virtual bool getNextRecord(std::string columns[], size_t column_count) = 0; + + /** + * \brief Resets the current search initiated with searchForRecords() + * + * This method will be called when the caller of searchForRecords() and + * getNextRecord() finds bad data, and aborts the current search. + * It should clean up whatever handlers searchForRecords() created, and + * any other state modified or needed by getNextRecord(). + * + * Of course, the implementation of getNextRecord may also use it when + * it is done with a search. If it does, the implementation of this + * method should make sure it can handle being called multiple times. + * + * The implementation for this method should make sure it never throws.
+ */ + virtual void resetSearch() = 0; + + /** + * Definitions of the fields as they are required to be filled in + * by getNextRecord() + * + * When implementing getNextRecord(), the columns array should + * be filled with the values as described in this enumeration, + * in this order. + */ + enum RecordColumns { + TYPE_COLUMN = 0, ///< The RRType of the record (A/NS/TXT etc.) + TTL_COLUMN = 1, ///< The TTL of the record (a number) + SIGTYPE_COLUMN = 2, ///< For RRSIG records, this contains the RRTYPE + ///< the RRSIG covers. In the current implementation, + ///< this field is ignored. + RDATA_COLUMN = 3 ///< Full text representation of the record's RDATA + }; + + /// The number of fields the columns array passed to getNextRecord should have + static const size_t RecordColumnCount = 4; }; /** diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index 70adde4f6f..750a62cf4c 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -321,15 +321,30 @@ SQLite3Connection::getZone(const isc::dns::Name& name) const { void SQLite3Connection::searchForRecords(int zone_id, const std::string& name) { - sqlite3_reset(dbparameters_->q_any_); - sqlite3_clear_bindings(dbparameters_->q_any_); - sqlite3_bind_int(dbparameters_->q_any_, 1, zone_id); + resetSearch(); + int result; + result = sqlite3_bind_int(dbparameters_->q_any_, 1, zone_id); + if (result != SQLITE_OK) { + isc_throw(DataSourceError, + "Error in sqlite3_bind_int() for zone_id " << + zone_id << ", sqlite3 result code: " << result); + } // use transient since name is a ref and may disappear - sqlite3_bind_text(dbparameters_->q_any_, 2, name.c_str(), -1, - SQLITE_TRANSIENT); + result = sqlite3_bind_text(dbparameters_->q_any_, 2, name.c_str(), -1, + SQLITE_TRANSIENT); + if (result != SQLITE_OK) { + isc_throw(DataSourceError, + "Error in sqlite3_bind_text() for name " << + name << ", sqlite3 result code: " << result); + } }; namespace { +// This helper
function converts from the unsigned char* type (used by +// sqlite3) to char* (wanted by std::string). Technically these types +// might not be directly convertible. +// In case sqlite3_column_text() returns NULL, we just make it an +// empty string. const char* convertToPlainChar(const unsigned char* ucp) { if (ucp == NULL) { @@ -341,31 +356,44 @@ convertToPlainChar(const unsigned char* ucp) { } bool -SQLite3Connection::getNextRecord(std::vector& columns) { - sqlite3_stmt* current_stmt = dbparameters_->q_any_; - const int rc = sqlite3_step(current_stmt); +SQLite3Connection::getNextRecord(std::string columns[], size_t column_count) { + try { + sqlite3_stmt* current_stmt = dbparameters_->q_any_; + const int rc = sqlite3_step(current_stmt); - if (rc == SQLITE_ROW) { - columns.clear(); - for (int column = 0; column < 4; ++column) { - columns.push_back(convertToPlainChar(sqlite3_column_text( - current_stmt, column))); + if (column_count != RecordColumnCount) { + isc_throw(DataSourceError, + "Datasource backend caller did not pass a column array " + "of size " << RecordColumnCount << + " to getNextRecord()"); } - return (true); - } else if (rc == SQLITE_DONE) { - // reached the end of matching rows - sqlite3_reset(current_stmt); - sqlite3_clear_bindings(current_stmt); - return (false); - } - sqlite3_reset(current_stmt); - sqlite3_clear_bindings(current_stmt); - isc_throw(DataSourceError, - "Unexpected failure in sqlite3_step (sqlite result code " << rc << ")"); + if (rc == SQLITE_ROW) { + for (int column = 0; column < column_count; ++column) { + columns[column] = convertToPlainChar(sqlite3_column_text( + current_stmt, column)); + } + return (true); + } else if (rc == SQLITE_DONE) { + // reached the end of matching rows + resetSearch(); + return (false); + } + resetSearch(); + isc_throw(DataSourceError, + "Unexpected failure in sqlite3_step (sqlite result code " << rc << ")"); + } catch (std::bad_alloc) { + isc_throw(DataSourceError, "bad_alloc in
Sqlite3Connection::getNextRecord"); + } // Compilers might not realize isc_throw always throws return (false); } +void +SQLite3Connection::resetSearch() { + sqlite3_reset(dbparameters_->q_any_); + sqlite3_clear_bindings(dbparameters_->q_any_); +} + } } diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index ffb2470b8e..c1968c4a34 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_connection.h @@ -95,6 +95,9 @@ public: * This implements the searchForRecords from DatabaseConnection. * This particular implementation does not raise DataSourceError. * + * \exception DataSourceError when sqlite3_bind_int() or + * sqlite3_bind_text() fails + * * \param zone_id The zone to search in, as returned by getZone() * \param name The name to find records for */ @@ -107,12 +110,31 @@ public: * This implements the getNextRecord from DatabaseConnection. * See the documentation there for more information. * + * If this method raises an exception, the contents of columns are undefined. + * + * \exception DataSourceError if there is an error returned by sqlite3_step() + * When this exception is raised, the current + * search as initialized by searchForRecords() is + * NOT reset, and the caller is expected to take + * care of that. * \param columns This vector will be cleared, and the fields of the record will * be appended here as strings (in the order rdtype, ttl, sigtype, - * and rdata). If there was no data, the vector is untouched. + * and rdata). If there was no data (i.e. if this call returns + * false), the vector is untouched. + * \return true if there was a next record, false if there was not */ - virtual bool getNextRecord(std::vector& columns); + virtual bool getNextRecord(std::string columns[], size_t column_count); + + /** + * \brief Resets any state created by searchForRecords + * + * This implements the resetSearch from DatabaseConnection. + * See the documentation there for more information.
+ * + * This function never throws. + */ + virtual void resetSearch(); + private: /// \brief Private database data SQLite3Parameters* dbparameters_; diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index c31593217f..69678f0047 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -15,6 +15,7 @@ #include #include +#include #include #include @@ -36,7 +37,7 @@ namespace { */ class MockConnection : public DatabaseConnection { public: - MockConnection() { fillData(); } + MockConnection() : search_running_(false) { fillData(); } virtual std::pair getZone(const Name& name) const { if (name == Name("example.org")) { @@ -47,10 +48,23 @@ public: } virtual void searchForRecords(int zone_id, const std::string& name) { + search_running_ = true; + + // 'hardcoded' names to trigger exceptions (for testing + // the error handling of find(); the other set is below in + // getNextRecord()). These names raise an exception right here. + if (name == "dsexception.in.search.") { + isc_throw(DataSourceError, "datasource exception on search"); + } else if (name == "iscexception.in.search.") { + isc_throw(isc::Exception, "isc exception on search"); + } else if (name == "basicexception.in.search.") { + throw std::exception(); + } + searched_name_ = name; + + // we're not aiming for efficiency in this test, simply // copy the relevant vector from records cur_record = 0; - if (zone_id == 42) { if (records.count(name) > 0) { cur_name = records.find(name)->second; @@ -62,15 +76,38 @@ public: } }; - virtual bool getNextRecord(std::vector& columns) { + virtual bool getNextRecord(std::string columns[], size_t column_count) { + if (searched_name_ == "dsexception.in.getnext.") { + isc_throw(DataSourceError, "datasource exception on getnextrecord"); + } else if (searched_name_ == "iscexception.in.getnext.") { + isc_throw(isc::Exception, "isc exception on getnextrecord"); + } else if
(searched_name_ == "basicexception.in.getnext.") { + throw std::exception(); + } + + if (column_count != DatabaseConnection::RecordColumnCount) { + isc_throw(DataSourceError, "Wrong column count in getNextRecord"); + } if (cur_record < cur_name.size()) { - columns = cur_name[cur_record++]; + for (size_t i = 0; i < column_count; ++i) { + columns[i] = cur_name[cur_record][i]; + } + cur_record++; return (true); } else { + resetSearch(); return (false); } }; + virtual void resetSearch() { + search_running_ = false; + }; + + bool searchRunning() const { + return (search_running_); + } + private: std::map > > records; // used as internal index for getNextRecord() @@ -80,6 +117,14 @@ private: // fake data std::vector< std::vector > cur_name; + // This boolean is used to make sure find() calls resetSearch + // when it encounters an error + bool search_running_; + + // We store the name passed to searchForRecords, so we can + // hardcode some exceptions into getNextRecord + std::string searched_name_; + // Adds one record to the current name in the database // The actual data will not be added to 'records' until // addCurName() is called @@ -121,6 +166,11 @@ private: addRecord("AAAA", "3600", "", "2001:db8::2"); addCurName("www.example.org."); + addRecord("A", "3600", "", "192.0.2.1"); + addRecord("AAAA", "3600", "", "2001:db8::1"); + addRecord("A", "3600", "", "192.0.2.2"); + addCurName("www2.example.org."); + addRecord("CNAME", "3600", "", "www.example.org."); addCurName("cname.example.org."); @@ -165,18 +215,42 @@ private: addRecord("A", "3600", "", "192.0.2.1"); addCurName("acnamesig3.example.org."); + addRecord("A", "3600", "", "192.0.2.1"); + addRecord("A", "360", "", "192.0.2.2"); + addCurName("ttldiff1.example.org."); + addRecord("A", "360", "", "192.0.2.1"); + addRecord("A", "3600", "", "192.0.2.2"); + addCurName("ttldiff2.example.org."); + // also add some intentionally bad data - cur_name.push_back(std::vector()); - addCurName("emptyvector.example.org."); 
addRecord("A", "3600", "", "192.0.2.1"); addRecord("CNAME", "3600", "", "www.example.org."); - addCurName("badcname.example.org."); + addCurName("badcname1.example.org."); + + addRecord("CNAME", "3600", "", "www.example.org."); + addRecord("A", "3600", "", "192.0.2.1"); + addCurName("badcname2.example.org."); + + addRecord("CNAME", "3600", "", "www.example.org."); + addRecord("CNAME", "3600", "", "www.example2.org."); + addCurName("badcname3.example.org."); + addRecord("A", "3600", "", "bad"); addCurName("badrdata.example.org."); + addRecord("BAD_TYPE", "3600", "", "192.0.2.1"); addCurName("badtype.example.org."); + addRecord("A", "badttl", "", "192.0.2.1"); addCurName("badttl.example.org."); + + addRecord("A", "badttl", "", "192.0.2.1"); + addRecord("RRSIG", "3600", "", "A 5 3 3600 somebaddata 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + addCurName("badsig.example.org."); + + addRecord("A", "3600", "", "192.0.2.1"); + addRecord("RRSIG", "3600", "TXT", "A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + addCurName("badsigtype.example.org."); } }; @@ -241,18 +315,21 @@ doFindTest(shared_ptr finder, const isc::dns::Name& name, const isc::dns::RRType& type, const isc::dns::RRType& expected_type, + const isc::dns::RRTTL expected_ttl, ZoneFinder::Result expected_result, unsigned int expected_rdata_count, unsigned int expected_signature_count) { - ZoneFinder::FindResult result = finder->find(name, type, - NULL, ZoneFinder::FIND_DEFAULT); - ASSERT_EQ(expected_result, result.code) << name.toText() << " " << type.toText(); + ZoneFinder::FindResult result = + finder->find(name, type, NULL, ZoneFinder::FIND_DEFAULT); + ASSERT_EQ(expected_result, result.code) << name << " " << type; if (expected_rdata_count > 0) { EXPECT_EQ(expected_rdata_count, result.rrset->getRdataCount()); + EXPECT_EQ(expected_ttl, result.rrset->getTTL()); EXPECT_EQ(expected_type, result.rrset->getType()); if (expected_signature_count > 0) { - EXPECT_EQ(expected_signature_count, result.rrset->getRRsig()->getRdataCount()); + EXPECT_EQ(expected_signature_count, + result.rrset->getRRsig()->getRdataCount()); } else { EXPECT_EQ(isc::dns::RRsetPtr(), result.rrset->getRRsig()); } @@ -268,79 +345,189 @@ TEST_F(DatabaseClientTest, find) { shared_ptr finder( dynamic_pointer_cast(zone.zone_finder)); EXPECT_EQ(42, finder->zone_id()); - const isc::dns::Name name("www.example.org."); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("www.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, 1, 0); + EXPECT_FALSE(current_connection_->searchRunning()); + doFindTest(finder, isc::dns::Name("www2.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, 2, 0); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("www.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, 
2, 0); doFindTest(finder, isc::dns::Name("www.example.org."), isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), + isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, 0, 0); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("cname.example.org."), isc::dns::RRType::A(), isc::dns::RRType::CNAME(), + isc::dns::RRTTL(3600), ZoneFinder::CNAME, 1, 0); + EXPECT_FALSE(current_connection_->searchRunning()); + doFindTest(finder, isc::dns::Name("cname.example.org."), + isc::dns::RRType::CNAME(), isc::dns::RRType::CNAME(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, 1, 0); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("doesnotexist.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), ZoneFinder::NXDOMAIN, 0, 0); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, 1, 2); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, 2, 1); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), + isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, 0, 0); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("signedcname1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::CNAME(), + isc::dns::RRTTL(3600), ZoneFinder::CNAME, 1, 1); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("signed2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, 1, 2); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, 
isc::dns::Name("signed2.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, 2, 1); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("signed2.example.org."), isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), + isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, 0, 0); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("signedcname2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::CNAME(), + isc::dns::RRTTL(3600), ZoneFinder::CNAME, 1, 1); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("acnamesig1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, 1, 1); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("acnamesig2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, 1, 1); + EXPECT_FALSE(current_connection_->searchRunning()); doFindTest(finder, isc::dns::Name("acnamesig3.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, 1, 1); + EXPECT_FALSE(current_connection_->searchRunning()); - EXPECT_THROW(finder->find(isc::dns::Name("emptyvector.example.org."), + doFindTest(finder, isc::dns::Name("ttldiff1.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(360), + ZoneFinder::SUCCESS, 2, 0); + EXPECT_FALSE(current_connection_->searchRunning()); + doFindTest(finder, isc::dns::Name("ttldiff2.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(360), + ZoneFinder::SUCCESS, 2, 0); + EXPECT_FALSE(current_connection_->searchRunning()); + + EXPECT_THROW(finder->find(isc::dns::Name("badcname1.example.org."), isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), DataSourceError); - 
EXPECT_THROW(finder->find(isc::dns::Name("badcname.example.org."), + EXPECT_FALSE(current_connection_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("badcname2.example.org."), isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), DataSourceError); + EXPECT_FALSE(current_connection_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("badcname3.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_connection_->searchRunning()); EXPECT_THROW(finder->find(isc::dns::Name("badrdata.example.org."), isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), DataSourceError); + EXPECT_FALSE(current_connection_->searchRunning()); EXPECT_THROW(finder->find(isc::dns::Name("badtype.example.org."), isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), DataSourceError); + EXPECT_FALSE(current_connection_->searchRunning()); EXPECT_THROW(finder->find(isc::dns::Name("badttl.example.org."), isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), DataSourceError); + EXPECT_FALSE(current_connection_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("badsig.example.org."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_connection_->searchRunning()); + + // Trigger the hardcoded exceptions and see if find() has cleaned up + /* + EXPECT_THROW(finder->find(isc::dns::Name("dsexception.in.search."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_connection_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("iscexception.in.search."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_connection_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("basicexception.in.search."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + std::exception); + EXPECT_FALSE(current_connection_->searchRunning()); 
+ */ + EXPECT_THROW(finder->find(isc::dns::Name("dsexception.in.getnext."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_connection_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("iscexception.in.getnext."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + DataSourceError); + EXPECT_FALSE(current_connection_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("basicexception.in.getnext."), + isc::dns::RRType::A(), + NULL, ZoneFinder::FIND_DEFAULT), + std::exception); + EXPECT_FALSE(current_connection_->searchRunning()); + + // This RRSIG has the wrong sigtype field, which should be + // an error if we decide to keep using that field + // Right now the field is ignored, so it does not error + doFindTest(finder, isc::dns::Name("badsigtype.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), + ZoneFinder::SUCCESS, 1, 1); + EXPECT_FALSE(current_connection_->searchRunning()); } diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc index 0d0b8c35f6..7f7032238d 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -11,8 +11,6 @@ // LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE // OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR // PERFORMANCE OF THIS SOFTWARE. 
-#include - #include #include @@ -20,6 +18,7 @@ #include #include +#include using namespace isc::datasrc; using isc::data::ConstElementPtr; @@ -76,7 +75,7 @@ public: conn.reset(new SQLite3Connection(filename, rrclass)); } // The tested connection - boost::shared_ptr conn; + boost::scoped_ptr conn; }; // This zone exists in the data, so it should be found @@ -102,22 +101,19 @@ TEST_F(SQLite3Conn, noClass) { EXPECT_FALSE(conn->getZone(Name("example.com")).first); } -namespace { - // Simple function to count the number of records for - // any name - size_t countRecords(boost::shared_ptr& conn, - int zone_id, const std::string& name) - { - conn->searchForRecords(zone_id, name); - size_t count = 0; - std::vector columns; - while (conn->getNextRecord(columns)) { - EXPECT_EQ(4, columns.size()); - ++count; - } - return (count); - } -} +// Simple helper function to check the values of a single +// record row +void +checkRecordRow(const std::string columns[], + const std::string& field0, + const std::string& field1, + const std::string& field2, + const std::string& field3) +{ + EXPECT_EQ(field0, columns[0]); + EXPECT_EQ(field1, columns[1]); + EXPECT_EQ(field2, columns[2]); + EXPECT_EQ(field3, columns[3]); }
"foo.bar."); + EXPECT_FALSE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "", "", "", ""); - EXPECT_FALSE(conn->getNextRecord(columns)); - EXPECT_EQ(0, columns.size()); + conn->searchForRecords(zone_id, ""); + EXPECT_FALSE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "", "", "", ""); + + // Should error on a bad number of columns + EXPECT_THROW(conn->getNextRecord(columns, 3), DataSourceError); + EXPECT_THROW(conn->getNextRecord(columns, 5), DataSourceError); + + // now try some real searches + conn->searchForRecords(zone_id, "foo.example.com."); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "CNAME", "3600", "", + "cnametest.example.org."); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "CNAME", + "CNAME 5 3 3600 20100322084538 20100220084538 33495 " + "example.com. FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "NSEC", "7200", "", + "mail.example.com. CNAME RRSIG NSEC"); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "7200", "NSEC", + "NSEC 5 3 7200 20100322084538 20100220084538 33495 " + "example.com. FAKEFAKEFAKEFAKE"); + EXPECT_FALSE(conn->getNextRecord(columns, column_count)); + // with no more records, the array should not have been modified + checkRecordRow(columns, "RRSIG", "7200", "NSEC", + "NSEC 5 3 7200 20100322084538 20100220084538 33495 " + "example.com. FAKEFAKEFAKEFAKE"); + + conn->searchForRecords(zone_id, "example.com."); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "SOA", "3600", "", + "master.example.com. admin.example.com. " + "1234 3600 1800 2419200 7200"); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "SOA", + "SOA 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. 
FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "NS", "1200", "", "dns01.example.com."); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "NS", "3600", "", "dns02.example.com."); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "NS", "1800", "", "dns03.example.com."); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "NS", + "NS 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "MX", "3600", "", "10 mail.example.com."); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "MX", "3600", "", + "20 mail.subzone.example.com."); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "MX", + "MX 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "NSEC", "7200", "", + "cname-ext.example.com. NS SOA MX RRSIG NSEC DNSKEY"); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "7200", "NSEC", + "NSEC 5 2 7200 20100322084538 20100220084538 " + "33495 example.com. 
FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "DNSKEY", "3600", "", + "256 3 5 AwEAAcOUBllYc1hf7ND9uDy+Yz1BF3sI0m4q NGV7W" + "cTD0WEiuV7IjXgHE36fCmS9QsUxSSOV o1I/FMxI2PJVqTYHkX" + "FBS7AzLGsQYMU7UjBZ SotBJ6Imt5pXMu+lEDNy8TOUzG3xm7g" + "0qcbW YF6qCEfvZoBtAqi5Rk7Mlrqs8agxYyMx"); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "DNSKEY", "3600", "", + "257 3 5 AwEAAe5WFbxdCPq2jZrZhlMj7oJdff3W7syJ tbvzg" + "62tRx0gkoCDoBI9DPjlOQG0UAbj+xUV 4HQZJStJaZ+fHU5AwV" + "NT+bBZdtV+NujSikhd THb4FYLg2b3Cx9NyJvAVukHp/91HnWu" + "G4T36 CzAFrfPwsHIrBz9BsaIQ21VRkcmj7DswfI/i DGd8j6b" + "qiODyNZYQ+ZrLmF0KIJ2yPN3iO6Zq 23TaOrVTjB7d1a/h31OD" + "fiHAxFHrkY3t3D5J R9Nsl/7fdRmSznwtcSDgLXBoFEYmw6p86" + "Acv RyoYNcL1SXjaKVLG5jyU3UR+LcGZT5t/0xGf oIK/aKwEN" + "rsjcKZZj660b1M="); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "DNSKEY", + "DNSKEY 5 2 3600 20100322084538 20100220084538 " + "4456 example.com. FAKEFAKEFAKEFAKE"); + ASSERT_TRUE(conn->getNextRecord(columns, column_count)); + checkRecordRow(columns, "RRSIG", "3600", "DNSKEY", + "DNSKEY 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. FAKEFAKEFAKEFAKE"); + EXPECT_FALSE(conn->getNextRecord(columns, column_count)); + // getnextrecord returning false should mean array is not altered + checkRecordRow(columns, "RRSIG", "3600", "DNSKEY", + "DNSKEY 5 2 3600 20100322084538 20100220084538 " + "33495 example.com. 
FAKEFAKEFAKEFAKE"); } + +} // end anonymous namespace From 5951ef6faaffcff62d9a9963260a932666e3decb Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Wed, 10 Aug 2011 13:36:01 +0200 Subject: [PATCH 059/175] [1062] logging in database.cc --- src/lib/datasrc/database.cc | 28 +++++++++++++++------- src/lib/datasrc/database.h | 8 ++++--- src/lib/datasrc/datasrc_messages.mes | 35 ++++++++++++++++++++++++++++ src/lib/datasrc/sqlite3_connection.h | 2 +- 4 files changed, 61 insertions(+), 12 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 8f13f525a4..b13f3e9935 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -23,6 +23,7 @@ #include #include +#include #include @@ -95,12 +96,8 @@ void addOrCreate(isc::dns::RRsetPtr& rrset, if (ttl < rrset->getTTL()) { rrset->setTTL(ttl); } - // make sure the type is correct - // TODO Assert? - if (type != rrset->getType()) { - isc_throw(DataSourceError, - "attempt to add multiple types to RRset in find()"); - } + // This is a check to make sure find() is not messing things up + assert(type == rrset->getType()); } try { rrset->addRdata(isc::dns::rdata::createRdata(type, cls, rdata_str)); @@ -167,6 +164,7 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, isc::dns::RRsetPtr result_rrset; ZoneFinder::Result result_status = SUCCESS; RRsigStore sig_store; + logger.debug(DBG_TRACE_DETAILED, DATASRC_DATABASE_FIND_RECORDS).arg(name).arg(type); try { connection_->searchForRecords(zone_id_, name.toText()); @@ -193,13 +191,18 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, //cur_sigtype(columns[SIGTYPE_COLUMN]); if (cur_type == type) { + if (result_rrset && + result_rrset->getType() == isc::dns::RRType::CNAME()) { + isc_throw(DataSourceError, "CNAME found but it is not " + "the only record for " + name.toText()); + } addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[DatabaseConnection::RDATA_COLUMN]); } else if (cur_type == 
isc::dns::RRType::CNAME()) { // There should be no other data, so result_rrset should be empty. if (result_rrset) { isc_throw(DataSourceError, "CNAME found but it is not " - "the only record for " + name.toText()); + "the only record for " + name.toText()); } addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[DatabaseConnection::RDATA_COLUMN]); @@ -227,26 +230,35 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, } } } catch (const DataSourceError& dse) { + logger.error(DATASRC_DATABASE_FIND_ERROR).arg(dse.what()); // call cleanup and rethrow connection_->resetSearch(); throw; } catch (const isc::Exception& isce) { -// // cleanup, change it to a DataSourceError and rethrow + logger.error(DATASRC_DATABASE_FIND_UNCAUGHT_ISC_ERROR).arg(isce.what()); + // cleanup, change it to a DataSourceError and rethrow connection_->resetSearch(); isc_throw(DataSourceError, isce.what()); } catch (const std::exception& ex) { + logger.error(DATASRC_DATABASE_FIND_UNCAUGHT_ERROR).arg(ex.what()); connection_->resetSearch(); throw; } if (!result_rrset) { if (records_found) { + logger.debug(DBG_TRACE_DETAILED, DATASRC_DATABASE_FOUND_NXRRSET) + .arg(name).arg(getClass()).arg(type); result_status = NXRRSET; } else { + logger.debug(DBG_TRACE_DETAILED, DATASRC_DATABASE_FOUND_NXDOMAIN) + .arg(name).arg(getClass()).arg(type); result_status = NXDOMAIN; } } else { sig_store.appendSignatures(result_rrset); + logger.debug(DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_RRSET).arg(*result_rrset); } return (FindResult(result_status, result_rrset)); } diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index 0632f64321..4ad3f498af 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -96,7 +96,7 @@ public: * DatabaseConnection::RecordColumnCount elements, the elements of which * are defined in DatabaseConnection::RecordColumns, in their basic * string representation. 
- * + * If you are implementing a derived database connection class, you * should have this method check the column_count value, and fill the * array with strings conforming to their description in RecordColumn. @@ -129,10 +129,12 @@ public: /** * Definitions of the fields as they are required to be filled in * by getNextRecord() - * + * * When implementing getNextRecord(), the columns array should * be filled with the values as described in this enumeration, - * in this order. + * in this order, i.e. TYPE_COLUMN should be the first element + * (index 0) of the array, TTL_COLUMN should be the second element + * (index 1), etc. */ enum RecordColumns { TYPE_COLUMN = 0, ///< The RRType of the record (A/NS/TXT etc.) diff --git a/src/lib/datasrc/datasrc_messages.mes b/src/lib/datasrc/datasrc_messages.mes index 3fbb24d05d..af704d938e 100644 --- a/src/lib/datasrc/datasrc_messages.mes +++ b/src/lib/datasrc/datasrc_messages.mes @@ -63,6 +63,41 @@ The maximum allowed number of items of the hotspot cache is set to the given number. If there are too many, some of them will be dropped. The size of 0 means no limit. +% DATASRC_DATABASE_FIND_ERROR error retrieving data from database datasource: %1 +There was an internal error while reading data from a datasource. This can either +mean the specific data source implementation is not behaving correctly, or the +data it provides is invalid. The current search is aborted. +The error message contains specific information about the error. + +% DATASRC_DATABASE_FIND_RECORDS looking for record %1/%2 +Debug information. The database data source is looking up records with the given +name and type in the database. + +% DATASRC_DATABASE_FIND_UNCAUGHT_ERROR uncaught general error retrieving data from database datasource: %1 +There was an uncaught general exception while reading data from a datasource. +This most likely points to a logic error in the code, and can be considered a +bug. The current search is aborted. 
Specific information about the exception is +printed in this error message. + +% DATASRC_DATABASE_FIND_UNCAUGHT_ISC_ERROR uncaught error retrieving data from database datasource: %1 +There was an uncaught ISC exception while reading data from a datasource. This +most likely points to a logic error in the code, and can be considered a bug. +The current search is aborted. Specific information about the exception is +printed in this error message. + +% DATASRC_DATABASE_FOUND_NXDOMAIN search in database resulted in NXDOMAIN for %1/%2/%3 +The data returned by the database backend did not contain any data for the given +domain name, class and type. + +% DATASRC_DATABASE_FOUND_NXRRSET search in database resulted in NXRRSET for %1/%2/%3 +The data returned by the database backend contained data for the given domain +name and class, but not for the given type. + +% DATASRC_DATABASE_FOUND_RRSET search in database resulted in RRset %1 +The data returned by the database backend contained data for the given domain +name, and it either matches the type or has a relevant type. The RRset that is +returned is printed. + % DATASRC_DO_QUERY handling query for '%1/%2' A debug message indicating that a query for the given name and RR type is being processed. diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index c1968c4a34..d41b814605 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_connection.h @@ -130,7 +130,7 @@ public: * * This implements the resetSearch from DatabaseConnection. * See the documentation there for more information. - * + * * This function never throws. */ virtual void resetSearch(); From cfd1d9e142fa2fd8b21f74de0e4a0109e0a04439 Mon Sep 17 00:00:00 2001 From: JINMEI Tatuya Date: Thu, 11 Aug 2011 02:07:39 -0700 Subject: [PATCH 060/175] [1062] some minor editorial changes, mostly just folding long lines. 
--- src/lib/datasrc/database.cc | 29 ++++++++++--------- src/lib/datasrc/sqlite3_connection.cc | 8 +++-- src/lib/datasrc/tests/database_unittest.cc | 2 +- .../tests/sqlite3_connection_unittest.cc | 3 +- 4 files changed, 23 insertions(+), 19 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index b13f3e9935..abd979c464 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -179,15 +179,16 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, try { const isc::dns::RRType cur_type(columns[DatabaseConnection::TYPE_COLUMN]); const isc::dns::RRTTL cur_ttl(columns[DatabaseConnection::TTL_COLUMN]); - // Ths sigtype column was an optimization for finding the relevant - // RRSIG RRs for a lookup. Currently this column is not used in this - // revised datasource implementation. We should either start using it - // again, or remove it from use completely (i.e. also remove it from - // the schema and the backend implementation). - // Note that because we don't use it now, we also won't notice it if - // the value is wrong (i.e. if the sigtype column contains an rrtype - // that is different from the actual value of the 'type covered' field - // in the RRSIG Rdata). + // The sigtype column was an optimization for finding the + // relevant RRSIG RRs for a lookup. Currently this column is + // not used in this revised datasource implementation. We + // should either start using it again, or remove it from use + // completely (i.e. also remove it from the schema and the + // backend implementation). + // Note that because we don't use it now, we also won't notice + // it if the value is wrong (i.e. if the sigtype column + // contains an rrtype that is different from the actual value + // of the 'type covered' field in the RRSIG Rdata). 
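The columns-array contract documented in database.h above (TYPE_COLUMN as element 0, then TTL, SIGTYPE, RDATA) can be sketched roughly as follows. The enum values mirror the ones in the patch, but fillExampleRecord() is a hypothetical stand-in for a derived backend's getNextRecord(), and the row values are invented for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Column indices as described in the RecordColumns documentation above.
enum RecordColumns {
    TYPE_COLUMN = 0,     // The RRType of the record (A/NS/TXT etc.)
    TTL_COLUMN = 1,      // The TTL of the record
    SIGTYPE_COLUMN = 2,  // For RRSIG records, the type covered
    RDATA_COLUMN = 3,    // The full string representation of the rdata
    COLUMN_COUNT = 4
};

// Sketch of how an implementation might fill the array, in this order.
// A real backend would copy the current database row instead of constants.
bool fillExampleRecord(std::string columns[], size_t column_count) {
    if (column_count < COLUMN_COUNT) {
        return (false);            // caller passed too small an array
    }
    columns[TYPE_COLUMN] = "A";
    columns[TTL_COLUMN] = "3600";
    columns[SIGTYPE_COLUMN] = ""; // empty for non-RRSIG rows
    columns[RDATA_COLUMN] = "192.0.2.1";
    return (true);
}
```

A real getNextRecord() would additionally return false (leaving the array untouched) once the current search's result set is exhausted, as the unit tests earlier in this series check.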
//cur_sigtype(columns[SIGTYPE_COLUMN]); if (cur_type == type) { @@ -199,7 +200,8 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, addOrCreate(result_rrset, name, getClass(), cur_type, cur_ttl, columns[DatabaseConnection::RDATA_COLUMN]); } else if (cur_type == isc::dns::RRType::CNAME()) { - // There should be no other data, so result_rrset should be empty. + // There should be no other data, so result_rrset should + // be empty. if (result_rrset) { isc_throw(DataSourceError, "CNAME found but it is not " "the only record for " + name.toText()); @@ -211,9 +213,10 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, // If we get signatures before we get the actual data, we // can't know which ones to keep and which to drop... // So we keep a separate store of any signature that may be - // relevant and add them to the final RRset when we are done. - // A possible optimization here is to not store them for types - // we are certain we don't need + // relevant and add them to the final RRset when we are + // done. 
+ // A possible optimization here is to not store them for + // types we are certain we don't need sig_store.addSig(isc::dns::rdata::createRdata(cur_type, getClass(), columns[DatabaseConnection::RDATA_COLUMN])); diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index 750a62cf4c..acba0e6227 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -329,6 +329,7 @@ SQLite3Connection::searchForRecords(int zone_id, const std::string& name) { "Error in sqlite3_bind_int() for zone_id " << zone_id << ", sqlite3 result code: " << result); } + // use transient since name is a ref and may disappear result = sqlite3_bind_text(dbparameters_->q_any_, 2, name.c_str(), -1, SQLITE_TRANSIENT); @@ -337,7 +338,7 @@ SQLite3Connection::searchForRecords(int zone_id, const std::string& name) { "Error in sqlite3_bind_text() for name " << name << ", sqlite3 result code: " << result); } -}; +} namespace { // This helper function converts from the unsigned char* type (used by @@ -382,8 +383,9 @@ SQLite3Connection::getNextRecord(std::string columns[], size_t column_count) { resetSearch(); isc_throw(DataSourceError, "Unexpected failure in sqlite3_step (sqlite result code " << rc << ")"); - } catch (std::bad_alloc) { - isc_throw(DataSourceError, "bad_alloc in Sqlite3Connection::getNextRecord"); + } catch (const std::bad_alloc&) { + isc_throw(DataSourceError, + "bad_alloc in Sqlite3Connection::getNextRecord"); } // Compilers might not realize isc_throw always throws return (false); diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index 69678f0047..8cfbb08402 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -51,7 +51,7 @@ public: search_running_ = true; // 'hardcoded' name to trigger exceptions (for testing - // the error handling of find() (the other on is below in + // the error handling of find() (the 
other one is below in // if the name is "exceptiononsearch" it'll raise an exception here if (name == "dsexception.in.search.") { isc_throw(DataSourceError, "datasource exception on search"); diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc index 7f7032238d..8fdbf9f10b 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -127,8 +127,7 @@ TEST_F(SQLite3Conn, getRecords) { std::string columns[column_count]; // without search, getNext() should return false - EXPECT_FALSE(conn->getNextRecord(columns, - column_count)); + EXPECT_FALSE(conn->getNextRecord(columns, column_count)); checkRecordRow(columns, "", "", "", ""); conn->searchForRecords(zone_id, "foo.bar."); From 12b3473393fb7a471fc7d928476b0ba66da145e9 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 11 Aug 2011 11:39:28 +0200 Subject: [PATCH 061/175] [1062] unconst find() --- src/lib/datasrc/database.cc | 2 +- src/lib/datasrc/database.h | 3 +-- src/lib/datasrc/memory_datasrc.cc | 2 +- src/lib/datasrc/memory_datasrc.h | 2 +- src/lib/datasrc/zone.h | 2 +- 5 files changed, 5 insertions(+), 6 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index abd979c464..6a5f2d3f54 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -156,7 +156,7 @@ ZoneFinder::FindResult DatabaseClient::Finder::find(const isc::dns::Name& name, const isc::dns::RRType& type, isc::dns::RRsetList*, - const FindOptions) const + const FindOptions) { // This variable is used to determine the difference between // NXDOMAIN and NXRRSET diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index 4ad3f498af..000c813d7b 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -236,8 +236,7 @@ public: virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, 
- const FindOptions options = FIND_DEFAULT) - const; + const FindOptions options = FIND_DEFAULT); /** * \brief The zone ID diff --git a/src/lib/datasrc/memory_datasrc.cc b/src/lib/datasrc/memory_datasrc.cc index 26223dad90..d06cd9ba43 100644 --- a/src/lib/datasrc/memory_datasrc.cc +++ b/src/lib/datasrc/memory_datasrc.cc @@ -618,7 +618,7 @@ InMemoryZoneFinder::getClass() const { ZoneFinder::FindResult InMemoryZoneFinder::find(const Name& name, const RRType& type, - RRsetList* target, const FindOptions options) const + RRsetList* target, const FindOptions options) { return (impl_->find(name, type, target, options)); } diff --git a/src/lib/datasrc/memory_datasrc.h b/src/lib/datasrc/memory_datasrc.h index 9707797299..0234a916f8 100644 --- a/src/lib/datasrc/memory_datasrc.h +++ b/src/lib/datasrc/memory_datasrc.h @@ -73,7 +73,7 @@ public: virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, - const FindOptions options = FIND_DEFAULT) const; + const FindOptions options = FIND_DEFAULT); /// \brief Inserts an rrset into the zone. 
/// diff --git a/src/lib/datasrc/zone.h b/src/lib/datasrc/zone.h index f67ed4be24..0dacc5da55 100644 --- a/src/lib/datasrc/zone.h +++ b/src/lib/datasrc/zone.h @@ -197,7 +197,7 @@ public: const isc::dns::RRType& type, isc::dns::RRsetList* target = NULL, const FindOptions options - = FIND_DEFAULT) const = 0; + = FIND_DEFAULT) = 0; //@} }; From b19a36e30d0d3829c68f2e0300ea1487da242af8 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 11 Aug 2011 13:09:12 +0200 Subject: [PATCH 062/175] [1062] added getDBName to DatabaseConnection and add database name to logging output --- src/lib/datasrc/database.cc | 12 ++++++++---- src/lib/datasrc/database.h | 12 ++++++++++++ src/lib/datasrc/datasrc_messages.mes | 8 ++++---- src/lib/datasrc/sqlite3_connection.cc | 5 ++++- src/lib/datasrc/sqlite3_connection.h | 3 +++ src/lib/datasrc/tests/database_unittest.cc | 13 ++++++++++++- .../datasrc/tests/sqlite3_connection_unittest.cc | 12 ++++++++++++ src/lib/util/filename.h | 5 +++++ src/lib/util/tests/filename_unittest.cc | 15 +++++++++++++++ 9 files changed, 75 insertions(+), 10 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 6a5f2d3f54..eecaa123e5 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -164,7 +164,8 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, isc::dns::RRsetPtr result_rrset; ZoneFinder::Result result_status = SUCCESS; RRsigStore sig_store; - logger.debug(DBG_TRACE_DETAILED, DATASRC_DATABASE_FIND_RECORDS).arg(name).arg(type); + logger.debug(DBG_TRACE_DETAILED, DATASRC_DATABASE_FIND_RECORDS) + .arg(connection_->getDBName()).arg(name).arg(type); try { connection_->searchForRecords(zone_id_, name.toText()); @@ -233,17 +234,20 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, } } } catch (const DataSourceError& dse) { - logger.error(DATASRC_DATABASE_FIND_ERROR).arg(dse.what()); + logger.error(DATASRC_DATABASE_FIND_ERROR) + .arg(connection_->getDBName()).arg(dse.what()); // call 
cleanup and rethrow connection_->resetSearch(); throw; } catch (const isc::Exception& isce) { - logger.error(DATASRC_DATABASE_FIND_UNCAUGHT_ISC_ERROR).arg(isce.what()); + logger.error(DATASRC_DATABASE_FIND_UNCAUGHT_ISC_ERROR) + .arg(connection_->getDBName()).arg(isce.what()); // cleanup, change it to a DataSourceError and rethrow connection_->resetSearch(); isc_throw(DataSourceError, isce.what()); } catch (const std::exception& ex) { - logger.error(DATASRC_DATABASE_FIND_UNCAUGHT_ERROR).arg(ex.what()); + logger.error(DATASRC_DATABASE_FIND_UNCAUGHT_ERROR) + .arg(connection_->getDBName()).arg(ex.what()); connection_->resetSearch(); throw; } diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index 000c813d7b..e0ff3d5910 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -147,6 +147,17 @@ public: /// The number of fields the columns array passed to getNextRecord should have static const size_t RecordColumnCount = 4; + + /** + * \brief Returns a string identifying this database backend + * + * Any implementation is free to choose the exact string content, + * but it is advisable to make it a name that is distinguishable + * from the others. + * + * \return the name of the database + */ + virtual const std::string& getDBName() const = 0; }; /** @@ -273,6 +284,7 @@ public: * returned, though. */ virtual FindResult findZone(const isc::dns::Name& name) const; + private: /// \brief Our connection. const boost::shared_ptr connection_; diff --git a/src/lib/datasrc/datasrc_messages.mes b/src/lib/datasrc/datasrc_messages.mes index af704d938e..12a5050c10 100644 --- a/src/lib/datasrc/datasrc_messages.mes +++ b/src/lib/datasrc/datasrc_messages.mes @@ -63,23 +63,23 @@ The maximum allowed number of items of the hotspot cache is set to the given number. If there are too many, some of them will be dropped. The size of 0 means no limit. 
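The cleanup-and-rethrow pattern in the catch blocks above (reset the running search so the connection stays usable, then propagate the error, possibly converted to another exception type) can be sketched as below. The Search struct and lookup() function are illustrative stand-ins, not part of the actual datasource API:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for the connection's search state.
struct Search {
    bool running = false;
    void reset() { running = false; }   // analogous to resetSearch()
};

// On any failure, reset the search before propagating the error,
// converting it to a single exception type on the way out.
std::string lookup(Search& search, bool fail) {
    search.running = true;
    try {
        if (fail) {
            throw std::logic_error("backend error");
        }
        search.reset();                 // normal completion also cleans up
        return ("result");
    } catch (const std::exception& ex) {
        search.reset();                 // cleanup, then rethrow converted
        throw std::runtime_error(std::string("lookup failed: ") +
                                 ex.what());
    }
}
```

The point of the pattern is that the cleanup happens on every exit path, so a failed find() never leaves a half-finished search behind on the shared connection.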
-% DATASRC_DATABASE_FIND_ERROR error retrieving data from database datasource: %1 +% DATASRC_DATABASE_FIND_ERROR error retrieving data from datasource %1: %2 There was an internal error while reading data from a datasource. This can either mean the specific data source implementation is not behaving correctly, or the data it provides is invalid. The current search is aborted. The error message contains specific information about the error. -% DATASRC_DATABASE_FIND_RECORDS looking for record %1/%2 +% DATASRC_DATABASE_FIND_RECORDS looking in datasource %1 for record %2/%3 Debug information. The database data source is looking up records with the given name and type in the database. -% DATASRC_DATABASE_FIND_UNCAUGHT_ERROR uncaught general error retrieving data from database datasource: %1 +% DATASRC_DATABASE_FIND_UNCAUGHT_ERROR uncaught general error retrieving data from datasource %1: %2 There was an uncaught general exception while reading data from a datasource. This most likely points to a logic error in the code, and can be considered a bug. The current search is aborted. Specific information about the exception is printed in this error message. -% DATASRC_DATABASE_FIND_UNCAUGHT_ISC_ERROR uncaught error retrieving data from database datasource: %1 +% DATASRC_DATABASE_FIND_UNCAUGHT_ISC_ERROR uncaught error retrieving data from datasource %1: %2 There was an uncaught ISC exception while reading data from a datasource. This most likely points to a logic error in the code, and can be considered a bug. The current search is aborted. 
Specific information about the exception is diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index acba0e6227..af133a4220 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -17,6 +17,7 @@ #include #include #include +#include namespace isc { namespace datasrc { @@ -48,7 +49,9 @@ struct SQLite3Parameters { SQLite3Connection::SQLite3Connection(const std::string& filename, const isc::dns::RRClass& rrclass) : dbparameters_(new SQLite3Parameters), - class_(rrclass.toText()) + class_(rrclass.toText()), + database_name_("sqlite3_" + + isc::util::Filename(filename).nameAndExtension()) { LOG_DEBUG(logger, DBG_TRACE_BASIC, DATASRC_SQLITE_NEWCONN); diff --git a/src/lib/datasrc/sqlite3_connection.h b/src/lib/datasrc/sqlite3_connection.h index d41b814605..8c38b8ab4a 100644 --- a/src/lib/datasrc/sqlite3_connection.h +++ b/src/lib/datasrc/sqlite3_connection.h @@ -135,6 +135,8 @@ public: */ virtual void resetSearch(); + virtual const std::string& getDBName() const { return database_name_; } + private: /// \brief Private database data SQLite3Parameters* dbparameters_; @@ -144,6 +146,7 @@ private: void open(const std::string& filename); /// \brief Closes the database void close(); + const std::string database_name_; }; } diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index 8cfbb08402..5f11d7289c 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -37,7 +37,11 @@ namespace { */ class MockConnection : public DatabaseConnection { public: - MockConnection() : search_running_(false) { fillData(); } + MockConnection() : search_running_(false), + database_name_("mock_database") + { + fillData(); + } virtual std::pair getZone(const Name& name) const { if (name == Name("example.org")) { @@ -108,6 +112,9 @@ public: return (search_running_); } + virtual const std::string& getDBName() const { + 
return database_name_; + } private: std::map > > records; // used as internal index for getNextRecord() @@ -125,6 +132,8 @@ private: // hardcode some exceptions into getNextRecord std::string searched_name_; + const std::string database_name_; + // Adds one record to the current name in the database // The actual data will not be added to 'records' until // addCurName() is called @@ -271,6 +280,8 @@ public: // Will be deleted by client_, just keep the current value for comparison. MockConnection* current_connection_; shared_ptr client_; + const std::string database_name_; + /** * Check the zone finder is a valid one and references the zone ID and * connection available here. diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc index 8fdbf9f10b..2a1d471258 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -30,7 +30,9 @@ namespace { // Some test data std::string SQLITE_DBFILE_EXAMPLE = TEST_DATA_DIR "/test.sqlite3"; std::string SQLITE_DBFILE_EXAMPLE2 = TEST_DATA_DIR "/example2.com.sqlite3"; +std::string SQLITE_DBNAME_EXAMPLE2 = "sqlite3_example2.com.sqlite3"; std::string SQLITE_DBFILE_EXAMPLE_ROOT = TEST_DATA_DIR "/test-root.sqlite3"; +std::string SQLITE_DBNAME_EXAMPLE_ROOT = "sqlite3_test-root.sqlite3"; std::string SQLITE_DBFILE_BROKENDB = TEST_DATA_DIR "/brokendb.sqlite3"; std::string SQLITE_DBFILE_MEMORY = ":memory:"; @@ -101,6 +103,16 @@ TEST_F(SQLite3Conn, noClass) { EXPECT_FALSE(conn->getZone(Name("example.com")).first); } +TEST(SQLite3Open, getDBNameExample2) { + SQLite3Connection conn(SQLITE_DBFILE_EXAMPLE2, RRClass::IN()); + EXPECT_EQ(SQLITE_DBNAME_EXAMPLE2, conn.getDBName()); +} + +TEST(SQLite3Open, getDBNameExampleROOT) { + SQLite3Connection conn(SQLITE_DBFILE_EXAMPLE_ROOT, RRClass::IN()); + EXPECT_EQ(SQLITE_DBNAME_EXAMPLE_ROOT, conn.getDBName()); +} + // Simple function to count the number of records for // any 
name void diff --git a/src/lib/util/filename.h b/src/lib/util/filename.h index c9874ce220..f6259386ef 100644 --- a/src/lib/util/filename.h +++ b/src/lib/util/filename.h @@ -103,6 +103,11 @@ public: return (extension_); } + /// \return Name + extension of Given File Name + std::string nameAndExtension() const { + return (name_ + extension_); + } + /// \brief Expand Name with Default /// /// A default file specified is supplied and used to fill in any missing diff --git a/src/lib/util/tests/filename_unittest.cc b/src/lib/util/tests/filename_unittest.cc index be29ff18ea..b17e374a51 100644 --- a/src/lib/util/tests/filename_unittest.cc +++ b/src/lib/util/tests/filename_unittest.cc @@ -51,42 +51,49 @@ TEST_F(FilenameTest, Components) { EXPECT_EQ("/alpha/beta/", fname.directory()); EXPECT_EQ("gamma", fname.name()); EXPECT_EQ(".delta", fname.extension()); + EXPECT_EQ("gamma.delta", fname.nameAndExtension()); // Directory only fname.setName("/gamma/delta/"); EXPECT_EQ("/gamma/delta/", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("", fname.nameAndExtension()); // Filename only fname.setName("epsilon"); EXPECT_EQ("", fname.directory()); EXPECT_EQ("epsilon", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("epsilon", fname.nameAndExtension()); // Extension only fname.setName(".zeta"); EXPECT_EQ("", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ(".zeta", fname.extension()); + EXPECT_EQ(".zeta", fname.nameAndExtension()); // Missing directory fname.setName("eta.theta"); EXPECT_EQ("", fname.directory()); EXPECT_EQ("eta", fname.name()); EXPECT_EQ(".theta", fname.extension()); + EXPECT_EQ("eta.theta", fname.nameAndExtension()); // Missing filename fname.setName("/iota/.kappa"); EXPECT_EQ("/iota/", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ(".kappa", fname.extension()); + EXPECT_EQ(".kappa", fname.nameAndExtension()); // Missing extension fname.setName("lambda/mu/nu"); EXPECT_EQ("lambda/mu/", 
fname.directory()); EXPECT_EQ("nu", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("nu", fname.nameAndExtension()); // Check that the decomposition can occur in the presence of leading and // trailing spaces @@ -94,18 +101,21 @@ TEST_F(FilenameTest, Components) { EXPECT_EQ("lambda/mu/", fname.directory()); EXPECT_EQ("nu", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("nu", fname.nameAndExtension()); // Empty string fname.setName(""); EXPECT_EQ("", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("", fname.nameAndExtension()); // ... and just spaces fname.setName(" "); EXPECT_EQ("", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("", fname.nameAndExtension()); // Check corner cases - where separators are present, but strings are // absent. @@ -113,16 +123,19 @@ TEST_F(FilenameTest, Components) { EXPECT_EQ("/", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ("", fname.extension()); + EXPECT_EQ("", fname.nameAndExtension()); fname.setName("."); EXPECT_EQ("", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(".", fname.nameAndExtension()); fname.setName("/."); EXPECT_EQ("/", fname.directory()); EXPECT_EQ("", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(".", fname.nameAndExtension()); // Note that the space is a valid filename here; only leading and trailing // spaces should be trimmed. @@ -130,11 +143,13 @@ TEST_F(FilenameTest, Components) { EXPECT_EQ("/", fname.directory()); EXPECT_EQ(" ", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(".", fname.nameAndExtension()); fname.setName(" / . "); EXPECT_EQ("/", fname.directory()); EXPECT_EQ(" ", fname.name()); EXPECT_EQ(".", fname.extension()); + EXPECT_EQ(".", fname.nameAndExtension()); } // Check that the expansion with a default works. 
From eb8ba927115b091bb407cbc29ad2d07dfed318f1 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Thu, 11 Aug 2011 07:25:37 -0500 Subject: [PATCH 063/175] [master] point to Year3Goals wikipage instead of Year2Milestones --- README | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README b/README index a6509da2d2..a9e05f1de9 100644 --- a/README +++ b/README @@ -8,10 +8,10 @@ for serving, maintaining, and developing DNS. BIND10-devel is new development leading up to the production BIND 10 release. It contains prototype code and experimental interfaces. Nevertheless it is ready to use now for testing the -new BIND 10 infrastructure ideas. The Year 2 milestones of the -five year plan are described here: +new BIND 10 infrastructure ideas. The Year 3 goals of the five +year plan are described here: - https://bind10.isc.org/wiki/Year2Milestones + http://bind10.isc.org/wiki/Year3Goals This release includes the bind10 master process, b10-msgq message bus, b10-auth authoritative DNS server (with SQLite3 and in-memory From f03688da19c21b4d46761cc4ed9da981cebe43c1 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Thu, 11 Aug 2011 07:48:21 -0500 Subject: [PATCH 064/175] [master] document about setproctitle --- doc/guide/bind10-guide.xml | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 6a4218207a..020fbbc6b7 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -742,6 +742,16 @@ Debian and Ubuntu: get additional debugging or diagnostic output. + + + + If the setproctitle Python module is detected at start up, + the process names for the Python-based daemons will be renamed + to better identify them instead of just python. + This is not needed on some operating systems. + + +
From 0081ce40b832f4c5abaeb0316736d772aec3f08d Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Thu, 11 Aug 2011 08:34:37 -0500 Subject: [PATCH 065/175] [master] fix output for CC_ESTABLISH socket file discussed via jabber --- src/lib/cc/session.cc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/lib/cc/session.cc b/src/lib/cc/session.cc index 97d5cf14d0..e0e24cf922 100644 --- a/src/lib/cc/session.cc +++ b/src/lib/cc/session.cc @@ -119,7 +119,7 @@ private: void SessionImpl::establish(const char& socket_file) { try { - LOG_DEBUG(logger, DBG_TRACE_BASIC, CC_ESTABLISH).arg(socket_file); + LOG_DEBUG(logger, DBG_TRACE_BASIC, CC_ESTABLISH).arg(&socket_file); socket_.connect(asio::local::stream_protocol::endpoint(&socket_file), error_); LOG_DEBUG(logger, DBG_TRACE_BASIC, CC_ESTABLISHED); From d00042b03e1f85cd1d8ea8340d5ac72222e5123e Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Thu, 11 Aug 2011 16:18:43 +0200 Subject: [PATCH 066/175] [1061] Rename missing names Some "Conn" variables were left out and forgotten in previous renames, fixing what could be found. --- src/lib/datasrc/datasrc_messages.mes | 4 ++-- src/lib/datasrc/tests/database_unittest.cc | 2 +- .../datasrc/tests/sqlite3_accessor_unittest.cc | 18 +++++++++--------- 3 files changed, 12 insertions(+), 12 deletions(-) diff --git a/src/lib/datasrc/datasrc_messages.mes b/src/lib/datasrc/datasrc_messages.mes index 3fbb24d05d..1a911c0adc 100644 --- a/src/lib/datasrc/datasrc_messages.mes +++ b/src/lib/datasrc/datasrc_messages.mes @@ -413,7 +413,7 @@ Debug information. An instance of SQLite data source is being created. % DATASRC_SQLITE_DESTROY SQLite data source destroyed Debug information. An instance of SQLite data source is being destroyed. -% DATASRC_SQLITE_DROPCONN SQLite3Connection is being deinitialized +% DATASRC_SQLITE_DROPCONN SQLite3Database is being deinitialized The object around a database connection is being destroyed. 
% DATASRC_SQLITE_ENCLOSURE looking for zone containing '%1' @@ -468,7 +468,7 @@ source. The SQLite data source was asked to provide a NSEC3 record for given zone. But it doesn't contain that zone. -% DATASRC_SQLITE_NEWCONN SQLite3Connection is being initialized +% DATASRC_SQLITE_NEWCONN SQLite3Database is being initialized A wrapper object to hold database connection is being initialized. % DATASRC_SQLITE_OPEN opening SQLite database '%1' diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index ab4423ec53..4144a5bf15 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -91,7 +91,7 @@ TEST_F(DatabaseClientTest, superZone) { checkZoneFinder(zone); } -TEST_F(DatabaseClientTest, noConnException) { +TEST_F(DatabaseClientTest, noAccessorException) { EXPECT_THROW(DatabaseClient(shared_ptr()), isc::InvalidParameter); } diff --git a/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc b/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc index 101c02b420..409201d8e8 100644 --- a/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_accessor_unittest.cc @@ -64,13 +64,13 @@ TEST(SQLite3Open, memoryDB) { } // Test fixture for querying the db -class SQLite3Conn : public ::testing::Test { +class SQLite3Access : public ::testing::Test { public: - SQLite3Conn() { - initConn(SQLITE_DBFILE_EXAMPLE, RRClass::IN()); + SQLite3Access() { + initAccessor(SQLITE_DBFILE_EXAMPLE, RRClass::IN()); } // So it can be re-created with different data - void initConn(const std::string& filename, const RRClass& rrclass) { + void initAccessor(const std::string& filename, const RRClass& rrclass) { db.reset(new SQLite3Database(filename, rrclass)); } // The tested db @@ -78,25 +78,25 @@ public: }; // This zone exists in the data, so it should be found -TEST_F(SQLite3Conn, getZone) { +TEST_F(SQLite3Access, getZone) { std::pair result(db->getZone(Name("example.com"))); 
EXPECT_TRUE(result.first); EXPECT_EQ(1, result.second); } // But it should find only the zone, nothing below it -TEST_F(SQLite3Conn, subZone) { +TEST_F(SQLite3Access, subZone) { EXPECT_FALSE(db->getZone(Name("sub.example.com")).first); } // This zone is not there at all -TEST_F(SQLite3Conn, noZone) { +TEST_F(SQLite3Access, noZone) { EXPECT_FALSE(db->getZone(Name("example.org")).first); } // This zone is there, but in different class -TEST_F(SQLite3Conn, noClass) { - initConn(SQLITE_DBFILE_EXAMPLE, RRClass::CH()); +TEST_F(SQLite3Access, noClass) { + initAccessor(SQLITE_DBFILE_EXAMPLE, RRClass::CH()); EXPECT_FALSE(db->getZone(Name("example.com")).first); } From 0af72968bfd192fa418551ae75def455adcfbb4b Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Thu, 11 Aug 2011 17:19:44 +0200 Subject: [PATCH 067/175] [801] Some more notes about API --- src/bin/bind10/creatorapi.txt | 41 ++++++++++++++++++++++++++++++++++- 1 file changed, 40 insertions(+), 1 deletion(-) diff --git a/src/bin/bind10/creatorapi.txt b/src/bin/bind10/creatorapi.txt index 6100f39a1d..fd6be31a2d 100644 --- a/src/bin/bind10/creatorapi.txt +++ b/src/bin/bind10/creatorapi.txt @@ -33,6 +33,19 @@ token over the connection (so Boss will know which socket to send there, in case multiple applications ask for sockets simultaneously) and Boss sends the socket in return. +In theory, we could send the requests directly over the unix-domain +socket, but it has two disadvantages: +* The msgq handles serializing/deserializing of structured + information (like the parameters to be used), we would have to do it + manually on the socket. +* We could place some kind of security in front of msgq (in case file + permissions are not enough, for example if they are not honored on + socket files, as indicated in the first paragraph of: + http://lkml.indiana.edu/hypermail/linux/kernel/0505.2/0008.html). + The socket would have to be secured separately. 
With the tokens, + there's some level of security already - someone not having the + token can't request a privileged socket. + Caching of sockets ------------------ To allow sending the same socket to multiple applications, the Boss process will @@ -64,7 +77,10 @@ The commands * Command to release a socket. This one would have a single parameter, the token used to get the socket. After this, boss would decrease its reference count and if it drops to zero, close its own copy of the socket. This should be used - when the module stops using the socket (and after closing it). + when the module stops using the socket (and after closing it). The + library could remember the file-descriptor to token mapping (for + common applications that don't request the same socket multiple + times in parallel). * Command to request a socket. It would have parameters to specify which socket (IP address, address family, port) and how to allow sharing. Sharing would be one of: @@ -78,3 +94,26 @@ The commands It would return either error (the socket can't be created or sharing is not possible) or the token. Then there would be some time for the application to pick up the requested socket. + +Examples +-------- +We probably would have a library with blocking calls to request the +sockets, so the code could look like: + +(socket_fd, token) = request_socket(address, port, 'UDP', SHARE_SAMENAME, 'test-application') +sock = socket.fromfd(socket_fd) + +# Some sock.send and sock.recv stuff here + +sock.close() +release_socket(socket_fd) # or release_socket(token) + +Known limitations +----------------- +Currently the socket creator doesn't support specifying any socket +options. If it turns out there are any options that need to be set +before bind(), we'll need to extend it (and extend the protocol as +well). + +The current socket creator doesn't support raw sockets, but if they are +needed, it should be easy to add. 
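The reference-counted caching this design note describes (Boss keeps one copy of each socket, hands out tokens, and closes its copy only when every requester has released it) can be sketched as follows. All names here are illustrative, not the real BIND 10 API, and a plain object stands in for the actual file descriptor.

```python
# Hedged sketch of the socket cache described in creatorapi.txt: the
# Boss creates each requested socket once, shares it on later requests,
# and closes its own copy when the reference count drops to zero.
class SocketCache:
    def __init__(self):
        self._sockets = {}    # (address, family, port) -> socket object
        self._refcount = {}   # same key -> number of outstanding tokens

    def request(self, key):
        # Create the socket only once; later requests share the copy.
        if key not in self._sockets:
            self._sockets[key] = object()  # stand-in for a real fd
            self._refcount[key] = 0
        self._refcount[key] += 1
        return self._sockets[key]

    def release(self, key):
        self._refcount[key] -= 1
        if self._refcount[key] == 0:
            # Last user is gone: Boss would close its own copy here.
            del self._sockets[key]
            del self._refcount[key]
```

Two requests for the same (address, family, port) key return the same object, and the entry disappears only after a matching number of releases, which is exactly the behavior the release command in the protocol relies on.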
From ac15a86eb62832cc22533bc33b802ea297666ad5 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Thu, 11 Aug 2011 17:31:06 +0200 Subject: [PATCH 068/175] [1062] rest of the review comments --- src/lib/datasrc/database.cc | 14 +- src/lib/datasrc/database.h | 4 +- src/lib/datasrc/datasrc_messages.mes | 5 + src/lib/datasrc/sqlite3_connection.cc | 87 ++++---- src/lib/datasrc/tests/database_unittest.cc | 195 +++++++++++++++--- .../tests/sqlite3_connection_unittest.cc | 3 +- 6 files changed, 229 insertions(+), 79 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index eecaa123e5..7bbf28512b 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -93,11 +93,15 @@ void addOrCreate(isc::dns::RRsetPtr& rrset, if (!rrset) { rrset.reset(new isc::dns::RRset(name, cls, type, ttl)); } else { - if (ttl < rrset->getTTL()) { - rrset->setTTL(ttl); - } // This is a check to make sure find() is not messing things up assert(type == rrset->getType()); + if (ttl != rrset->getTTL()) { + if (ttl < rrset->getTTL()) { + rrset->setTTL(ttl); + } + logger.info(DATASRC_DATABASE_FIND_TTL_MISMATCH) + .arg(name).arg(cls).arg(type).arg(rrset->getTTL()); + } } try { rrset->addRdata(isc::dns::rdata::createRdata(type, cls, rdata_str)); @@ -170,9 +174,9 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, try { connection_->searchForRecords(zone_id_, name.toText()); - std::string columns[DatabaseConnection::RecordColumnCount]; + std::string columns[DatabaseConnection::RECORDCOLUMNCOUNT]; while (connection_->getNextRecord(columns, - DatabaseConnection::RecordColumnCount)) { + DatabaseConnection::RECORDCOLUMNCOUNT)) { if (!records_found) { records_found = true; } diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index e0ff3d5910..4a28b7c64b 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -93,7 +93,7 @@ public: * In the case of a database error, a DatasourceError is thrown. 
* * The columns passed is an array of std::strings consisting of - * DatabaseConnection::RecordColumnCount elements, the elements of which + * DatabaseConnection::RECORDCOLUMNCOUNT elements, the elements of which * are defined in DatabaseConnection::RecordColumns, in their basic * string representation. * @@ -146,7 +146,7 @@ public: }; /// The number of fields the columns array passed to getNextRecord should have - static const size_t RecordColumnCount = 4; + static const size_t RECORDCOLUMNCOUNT = 4; /** * \brief Returns a string identifying this database backend diff --git a/src/lib/datasrc/datasrc_messages.mes b/src/lib/datasrc/datasrc_messages.mes index 12a5050c10..5a63b0dea6 100644 --- a/src/lib/datasrc/datasrc_messages.mes +++ b/src/lib/datasrc/datasrc_messages.mes @@ -73,6 +73,11 @@ The error message contains specific information about the error. Debug information. The database data source is looking up records with the given name and type in the database. +% DATASRC_DATABASE_FIND_TTL_MISMATCH TTL values differ for elements of %1/%2/%3, setting to %4 +The datasource backend provided resource records for the given RRset with +different TTL values. The TTL of the RRset is set to the lowest value, which +is printed in the log message. + % DATASRC_DATABASE_FIND_UNCAUGHT_ERROR uncaught general error retrieving data from datasource %1: %2 There was an uncaught general exception while reading data from a datasource. 
This most likely points to a logic error in the code, and can be considered a diff --git a/src/lib/datasrc/sqlite3_connection.cc b/src/lib/datasrc/sqlite3_connection.cc index af133a4220..4fd48006f7 100644 --- a/src/lib/datasrc/sqlite3_connection.cc +++ b/src/lib/datasrc/sqlite3_connection.cc @@ -325,21 +325,17 @@ SQLite3Connection::getZone(const isc::dns::Name& name) const { void SQLite3Connection::searchForRecords(int zone_id, const std::string& name) { resetSearch(); - int result; - result = sqlite3_bind_int(dbparameters_->q_any_, 1, zone_id); - if (result != SQLITE_OK) { + if (sqlite3_bind_int(dbparameters_->q_any_, 1, zone_id) != SQLITE_OK) { isc_throw(DataSourceError, "Error in sqlite3_bind_int() for zone_id " << - zone_id << ", sqlite3 result code: " << result); + zone_id << ": " << sqlite3_errmsg(dbparameters_->db_)); } - // use transient since name is a ref and may disappear - result = sqlite3_bind_text(dbparameters_->q_any_, 2, name.c_str(), -1, - SQLITE_TRANSIENT); - if (result != SQLITE_OK) { + if (sqlite3_bind_text(dbparameters_->q_any_, 2, name.c_str(), -1, + SQLITE_TRANSIENT) != SQLITE_OK) { isc_throw(DataSourceError, "Error in sqlite3_bind_text() for name " << - name << ", sqlite3 result code: " << result); + name << ": " << sqlite3_errmsg(dbparameters_->db_)); } } @@ -349,10 +345,23 @@ namespace { // might not be directly convertable // In case sqlite3_column_text() returns NULL, we just make it an // empty string. 
+// The sqlite3parameters value is only used to check the error code if +// ucp == NULL const char* -convertToPlainChar(const unsigned char* ucp) { +convertToPlainChar(const unsigned char* ucp, + SQLite3Parameters* dbparameters) { if (ucp == NULL) { - return (""); + // The field can really be NULL, in which case we return an + // empty string, or sqlite may have run out of memory, in + // which case we raise an error + if (dbparameters && + sqlite3_errcode(dbparameters->db_) == SQLITE_NOMEM) { + isc_throw(DataSourceError, + "Sqlite3 backend encountered a memory allocation " + "error in sqlite3_column_text()"); + } else { + return (""); + } } const void* p = ucp; return (static_cast(p)); @@ -361,35 +370,35 @@ convertToPlainChar(const unsigned char* ucp) { bool SQLite3Connection::getNextRecord(std::string columns[], size_t column_count) { - try { - sqlite3_stmt* current_stmt = dbparameters_->q_any_; - const int rc = sqlite3_step(current_stmt); - - if (column_count != RecordColumnCount) { - isc_throw(DataSourceError, - "Datasource backend caller did not pass a column array " - "of size " << RecordColumnCount << - " to getNextRecord()"); - } - - if (rc == SQLITE_ROW) { - for (int column = 0; column < column_count; ++column) { - columns[column] = convertToPlainChar(sqlite3_column_text( - current_stmt, column)); - } - return (true); - } else if (rc == SQLITE_DONE) { - // reached the end of matching rows - resetSearch(); - return (false); - } - resetSearch(); - isc_throw(DataSourceError, - "Unexpected failure in sqlite3_step (sqlite result code " << rc << ")"); - } catch (const std::bad_alloc&) { - isc_throw(DataSourceError, - "bad_alloc in Sqlite3Connection::getNextRecord"); + if (column_count != RECORDCOLUMNCOUNT) { + isc_throw(DataSourceError, + "Datasource backend caller did not pass a column array " + "of size " << RECORDCOLUMNCOUNT << + " to getNextRecord()"); } + + sqlite3_stmt* current_stmt = dbparameters_->q_any_; + const int rc = sqlite3_step(current_stmt); + + 
if (rc == SQLITE_ROW) { + for (int column = 0; column < column_count; ++column) { + try { + columns[column] = convertToPlainChar(sqlite3_column_text( + current_stmt, column), + dbparameters_); + } catch (const std::bad_alloc&) { + isc_throw(DataSourceError, + "bad_alloc in Sqlite3Connection::getNextRecord"); + } + } + return (true); + } else if (rc == SQLITE_DONE) { + // reached the end of matching rows + resetSearch(); + return (false); + } + isc_throw(DataSourceError, "Unexpected failure in sqlite3_step: " << + sqlite3_errmsg(dbparameters_->db_)); // Compilers might not realize isc_throw always throws return (false); } diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index 5f11d7289c..609ab6ac3b 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -16,12 +16,15 @@ #include #include +#include #include #include #include #include +#include + #include using namespace isc::datasrc; @@ -89,7 +92,7 @@ public: throw std::exception(); } - if (column_count != DatabaseConnection::RecordColumnCount) { + if (column_count != DatabaseConnection::RECORDCOLUMNCOUNT) { isc_throw(DataSourceError, "Wrong column count in getNextRecord"); } if (cur_record < cur_name.size()) { @@ -328,18 +331,31 @@ doFindTest(shared_ptr finder, const isc::dns::RRType& expected_type, const isc::dns::RRTTL expected_ttl, ZoneFinder::Result expected_result, - unsigned int expected_rdata_count, - unsigned int expected_signature_count) + const std::vector& expected_rdatas, + const std::vector& expected_sig_rdatas) { ZoneFinder::FindResult result = finder->find(name, type, NULL, ZoneFinder::FIND_DEFAULT); ASSERT_EQ(expected_result, result.code) << name << " " << type; - if (expected_rdata_count > 0) { - EXPECT_EQ(expected_rdata_count, result.rrset->getRdataCount()); + if (expected_rdatas.size() > 0) { + EXPECT_EQ(expected_rdatas.size(), result.rrset->getRdataCount()); EXPECT_EQ(expected_ttl, 
result.rrset->getTTL()); EXPECT_EQ(expected_type, result.rrset->getType()); - if (expected_signature_count > 0) { - EXPECT_EQ(expected_signature_count, + + isc::dns::RRsetPtr expected_rrset( + new isc::dns::RRset(name, finder->getClass(), + expected_type, expected_ttl)); + for (unsigned int i = 0; i < expected_rdatas.size(); ++i) { + expected_rrset->addRdata( + isc::dns::rdata::createRdata(expected_type, + finder->getClass(), + expected_rdatas[i])); + } + isc::testutils::rrsetCheck(expected_rrset, result.rrset); + + if (expected_sig_rdatas.size() > 0) { + // TODO same for sigrrset + EXPECT_EQ(expected_sig_rdatas.size(), result.rrset->getRRsig()->getRdataCount()); } else { EXPECT_EQ(isc::dns::RRsetPtr(), result.rrset->getRRsig()); @@ -357,110 +373,224 @@ TEST_F(DatabaseClientTest, find) { dynamic_pointer_cast(zone.zone_finder)); EXPECT_EQ(42, finder->zone_id()); EXPECT_FALSE(current_connection_->searchRunning()); + std::vector expected_rdatas; + std::vector expected_sig_rdatas; + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("192.0.2.1"); doFindTest(finder, isc::dns::Name("www.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 1, 0); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("192.0.2.1"); + expected_rdatas.push_back("192.0.2.2"); doFindTest(finder, isc::dns::Name("www2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 2, 0); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("2001:db8::1"); + expected_rdatas.push_back("2001:db8::2"); doFindTest(finder, isc::dns::Name("www.example.org."), 
isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 2, 0); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); + EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); doFindTest(finder, isc::dns::Name("www.example.org."), isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), isc::dns::RRTTL(3600), - ZoneFinder::NXRRSET, 0, 0); + ZoneFinder::NXRRSET, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("www.example.org."); doFindTest(finder, isc::dns::Name("cname.example.org."), isc::dns::RRType::A(), isc::dns::RRType::CNAME(), isc::dns::RRTTL(3600), - ZoneFinder::CNAME, 1, 0); + ZoneFinder::CNAME, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("www.example.org."); doFindTest(finder, isc::dns::Name("cname.example.org."), isc::dns::RRType::CNAME(), isc::dns::RRType::CNAME(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 1, 0); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); doFindTest(finder, isc::dns::Name("doesnotexist.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), - ZoneFinder::NXDOMAIN, 0, 0); + ZoneFinder::NXDOMAIN, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("192.0.2.1"); + expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12346 example.org. 
FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 1, 2); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("2001:db8::1"); + expected_rdatas.push_back("2001:db8::2"); + expected_sig_rdatas.push_back("AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 2, 1); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), isc::dns::RRTTL(3600), - ZoneFinder::NXRRSET, 0, 0); + ZoneFinder::NXRRSET, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("www.example.org."); + expected_sig_rdatas.push_back("CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signedcname1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::CNAME(), isc::dns::RRTTL(3600), - ZoneFinder::CNAME, 1, 1); + ZoneFinder::CNAME, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("192.0.2.1"); + expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12346 example.org. 
FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signed2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 1, 2); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("2001:db8::2"); + expected_rdatas.push_back("2001:db8::1"); + expected_sig_rdatas.push_back("AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signed2.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 2, 1); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); doFindTest(finder, isc::dns::Name("signed2.example.org."), isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), isc::dns::RRTTL(3600), - ZoneFinder::NXRRSET, 0, 0); + ZoneFinder::NXRRSET, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("www.example.org."); + expected_sig_rdatas.push_back("CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signedcname2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::CNAME(), isc::dns::RRTTL(3600), - ZoneFinder::CNAME, 1, 1); + ZoneFinder::CNAME, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("192.0.2.1"); + expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("acnamesig1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 1, 1); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("192.0.2.1"); + expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("acnamesig2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 1, 1); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("192.0.2.1"); + expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("acnamesig3.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 1, 1); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("192.0.2.1"); + expected_rdatas.push_back("192.0.2.2"); doFindTest(finder, isc::dns::Name("ttldiff1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(360), - ZoneFinder::SUCCESS, 2, 0); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("192.0.2.1"); + expected_rdatas.push_back("192.0.2.2"); doFindTest(finder, isc::dns::Name("ttldiff2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(360), - ZoneFinder::SUCCESS, 2, 0); + 
ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); + EXPECT_THROW(finder->find(isc::dns::Name("badcname1.example.org."), isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), @@ -498,7 +628,6 @@ TEST_F(DatabaseClientTest, find) { EXPECT_FALSE(current_connection_->searchRunning()); // Trigger the hardcoded exceptions and see if find() has cleaned up - /* EXPECT_THROW(finder->find(isc::dns::Name("dsexception.in.search."), isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), @@ -514,7 +643,7 @@ TEST_F(DatabaseClientTest, find) { NULL, ZoneFinder::FIND_DEFAULT), std::exception); EXPECT_FALSE(current_connection_->searchRunning()); - */ + EXPECT_THROW(finder->find(isc::dns::Name("dsexception.in.getnext."), isc::dns::RRType::A(), NULL, ZoneFinder::FIND_DEFAULT), @@ -534,12 +663,16 @@ TEST_F(DatabaseClientTest, find) { // This RRSIG has the wrong sigtype field, which should be // an error if we decide to keep using that field // Right now the field is ignored, so it does not error + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + expected_rdatas.push_back("192.0.2.1"); + expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("badsigtype.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), - ZoneFinder::SUCCESS, 1, 1); + ZoneFinder::SUCCESS, + expected_rdatas, expected_sig_rdatas); EXPECT_FALSE(current_connection_->searchRunning()); - } } diff --git a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc index 2a1d471258..e864e2532c 100644 --- a/src/lib/datasrc/tests/sqlite3_connection_unittest.cc +++ b/src/lib/datasrc/tests/sqlite3_connection_unittest.cc @@ -17,7 +17,6 @@ #include #include -#include #include using namespace isc::datasrc; @@ -135,7 +134,7 @@ TEST_F(SQLite3Conn, getRecords) { const int zone_id = zone_info.second; ASSERT_EQ(1, zone_id); - const size_t column_count = DatabaseConnection::RecordColumnCount; + const size_t column_count = DatabaseConnection::RECORDCOLUMNCOUNT; std::string columns[column_count]; // without search, getNext() should return false From c7db1351d3b1c25bfc31ed9e7b6b491e6bcb1555 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Thu, 11 Aug 2011 11:02:54 -0500 Subject: [PATCH 069/175] [jreed-docs-2] begin documenting stats commands and builtin in stats items --- src/bin/stats/b10-stats.xml | 93 ++++++++++++++++++++++++++++++++++++- 1 file changed, 91 insertions(+), 2 deletions(-) diff --git a/src/bin/stats/b10-stats.xml b/src/bin/stats/b10-stats.xml index f0c472dd29..13e568df63 100644 --- a/src/bin/stats/b10-stats.xml +++ b/src/bin/stats/b10-stats.xml @@ -20,7 +20,7 @@ - Oct 15, 2010 + August 11, 2011 @@ -67,6 +67,7 @@ it. b10-stats invokes "sendstats" command for bind10 after its initial starting because it's sure to collect statistics data from bind10. +
@@ -86,6 +87,94 @@ + + DEFAULT STATISTICS + + + The b10-stats daemon contains + built-in statistics: + + + + + + report_time + + The latest report date and time in + ISO 8601 format. + + + + stats.timestamp + The current date and time represented in + seconds since UNIX epoch (1970-01-01T0 0:00:00Z) with + precision (delimited with a period) up to + one hundred thousandth of second. + + + + + + + + + + + + CONFIGURATION AND COMMANDS + + + The b10-stats command does not have any + configurable settings. + + + + + The configuration commands are: + + + + remove removes the named statistics data. + + + + reset + + + + set + + + + show will send the statistics data + in JSON format. + By default, it outputs all the statistics data it has collected. + An optional item name may be specified to receive individual output. + + + + shutdown will shutdown the + b10-stats process. + (Note that the bind10 parent may restart it.) + + + + status simply indicates that the daemon is + running. + + + + + FILES /usr/local/share/bind10-devel/stats.spec @@ -126,7 +215,7 @@ HISTORY The b10-stats daemon was initially designed - and implemented by Naoki Kambe of JPRS in Oct 2010. + and implemented by Naoki Kambe of JPRS in October 2010. + + + STATISTICS DATA + + + The statistics data collected by the b10-stats + daemon include: + + + + + + bind10.boot_time + + The date and time that the bind10 + process started. + This is represented in ISO 8601 format. + + + + + + + + If you want to specify logging for one specific library - within the module, you set the name to 'module.library'. - For example, the logger used by the nameserver address - store component has the full name of 'Resolver.nsas'. If + within the module, you set the name to + module.library. For example, the + logger used by the nameserver address store component + has the full name of Resolver.nsas. If there is no entry in Logging for a particular library, it will use the configuration given for the module. 
- + + + To illustrate this, suppose you want the cache library to log messages of severity DEBUG, and the rest of the resolver code to log messages of severity INFO. To achieve - this you specify two loggers, one with the name 'Resolver' - and severity INFO, and one with the name 'Resolver.cache' - with severity DEBUG. As there are no entries for other - libraries (e.g. the nsas), they will use the configuration - for the module ('Resolver'), so giving the desired - behavior. + this you specify two loggers, one with the name + Resolver and severity INFO, and one with + the name Resolver.cache with severity + DEBUG. As there are no entries for other libraries (e.g. + the nsas), they will use the configuration for the module + (Resolver), so giving the desired behavior. - One special case is that of a module name of '*', which - is interpreted as 'any module'. You can set global logging - options by using this, including setting the logging - configuration for a library that is used by multiple - modules (e.g. '*.config" specifies the configuration - library code in whatever module is using it). + One special case is that of a module name of * + (asterisks), which is interpreted as any + module. You can set global logging options by using this, + including setting the logging configuration for a library + that is used by multiple modules (e.g. *.config + specifies the configuration library code in whatever + module is using it). @@ -1661,13 +1676,15 @@ then change those defaults with config set Resolver/forward_addresses[0]/address configuration that might match a particular logger, the specification with the more specific logger name takes precedence. For example, if there are entries for for - both '*' and 'Resolver', the resolver module - and all - libraries it uses - will log messages according to the - configuration in the second entry ('Resolver'). All other - modules will use the configuration of the first entry - ('*'). 
If there was also a configuration entry for - 'Resolver.cache', the cache library within the resolver - would use that in preference to the entry for 'Resolver'. + both * and Resolver, the + resolver module — and all libraries it uses — + will log messages according to the configuration in the + second entry (Resolver). All other modules + will use the configuration of the first entry + (*). If there was also a configuration + entry for Resolver.cache, the cache library + within the resolver would use that in preference to the + entry for Resolver. @@ -1675,14 +1692,15 @@ then change those defaults with config set Resolver/forward_addresses[0]/address One final note about the naming. When specifying the module name within a logger, use the name of the module - as specified in bindctl, e.g. 'Resolver' for the resolver - module, 'Xfrout' for the xfrout module etc. When the - message is logged, the message will include the name of - the logger generating the message, but with the module + as specified in bindctl, e.g. + Resolver for the resolver module, + Xfrout for the xfrout module, etc. When + the message is logged, the message will include the name + of the logger generating the message, but with the module name replaced by the name of the process implementing the module (so for example, a message generated by the - 'Auth.cache' logger will appear in the output with a - logger name of 'b10-auth.cache'). + Auth.cache logger will appear in the output + with a logger name of b10-auth.cache). @@ -1694,11 +1712,6 @@ then change those defaults with config set Resolver/forward_addresses[0]/address This specifies the category of messages logged. 
- - - - - Each message is logged with an associated severity which may be one of the following (in descending order of severity): @@ -1730,7 +1743,7 @@ then change those defaults with config set Resolver/forward_addresses[0]/address When the severity of a logger is set to one of these values, it will only log messages of that severity, and
- the severities below it. The severity may also be set to
+ the severities above it. The severity may also be set to
NONE, in which case all messages from that logger are inhibited. @@ -1745,9 +1758,9 @@ then change those defaults with config set Resolver/forward_addresses[0]/address
- Each logger can have zero or more output_options. These
- specify where log messages are sent to. These are explained
- in detail below.
+ Each logger can have zero or more
+ output_options. These specify where log
+ messages are sent to. These are explained in detail below. @@ -1766,14 +1779,17 @@ then change those defaults with config set Resolver/forward_addresses[0]/address When a logger's severity is set to DEBUG, this value specifies what debug messages should be printed. It ranges
- from 0 (least verbose) to 99 (most verbose). The general
- classification of debug message types is
- - -
+ from 0 (least verbose) to 99 (most verbose).
- + + @@ -1788,13 +1804,15 @@ then change those defaults with config set Resolver/forward_addresses[0]/address
- If this is true, the output_options from the parent will
- be used. For example, if there are two loggers configured;
- 'Resolver' and 'Resolver.cache', and additive is true in
- the second, it will write the log messages not only to
- the destinations specified for 'Resolver.cache', but also
- to the destinations as specified in the output_options
- in the logger named Resolver'.
+ If this is true, the output_options from
+ the parent will be used.
For example, if there are two
+ loggers configured; Resolver and
+ Resolver.cache, and additive
+ is true in the second, it will write the log messages
+ not only to the destinations specified for
+ Resolver.cache, but also to the destinations
+ as specified in the output_options in
+ the logger named Resolver. @@ -1809,9 +1827,10 @@ then change those defaults with config set Resolver/forward_addresses[0]/address
- The main settings for an output option are the 'destination'
- and a value called 'output', the meaning of which depends
- on the destination that is set.
+ The main settings for an output option are the
+ destination and a value called
+ output, the meaning of which depends on
+ the destination that is set. @@ -1855,18 +1874,19 @@ then change those defaults with config set Resolver/forward_addresses[0]/address
- destination is 'console'
+ destination is console
- The value of output must be one of 'stdout'
- (messages printed to standard output) or 'stderr'
- (messages printed to standard error).
+ The value of output must be one of stdout
+ (messages printed to standard output) or
+ stderr (messages printed to standard
+ error).
- destination is 'file'
+ destination is file
The value of output is interpreted as a file name; @@ -1876,12 +1896,13 @@ then change those defaults with config set Resolver/forward_addresses[0]/address
- destination is 'syslog'
+ destination is syslog
- The value of output is interpreted as the syslog
- facility (e.g. 'local0') that should be used for
- log messages.
+ The value of output is interpreted as the
+ syslog facility (e.g.
+ local0) that should be used
+ for log messages. @@ -1890,7 +1911,7 @@ then change those defaults with config set Resolver/forward_addresses[0]/address
- The other options for output_options are:
+ The other options for output_options are: @@ -1912,9 +1933,10 @@ then change those defaults with config set Resolver/forward_addresses[0]/address Only relevant when destination is file, this is maximum file size of output files in bytes.
When the maximum
+ size is reached, the file is renamed and a new file opened.
+ (For example, a ".1" is appended to the name —
+ if a ".1" file exists, it is renamed ".2",
+ etc.) @@ -1928,8 +1950,8 @@ then change those defaults with config set Resolver/forward_addresses[0]/address Maximum number of old log files to keep around when
- rolling the output file. Only relevant when destination
- if 'file'.
+ rolling the output file. Only relevant when
+ destination is file. @@ -1944,15 +1966,16 @@ then change those defaults with config set Resolver/forward_addresses[0]/address In this example we want to set the global logging to
- write to the file /var/log/my_bind10.log, at severity
- WARN. We want the authoritative server to log at DEBUG
- with debuglevel 40, to a different file (/tmp/debug_messages).
+ write to the file /var/log/my_bind10.log,
+ at severity WARN. We want the authoritative server to
+ log at DEBUG with debuglevel 40, to a different file
+ (/tmp/debug_messages).
-Start bindctl
+ Start bindctl. @@ -2144,7 +2167,7 @@ Logging/loggers[0]/output_options[0]/maxver 8 integer (modified) And every module will now be using the values from the
- logger named '*'.
+ logger named *. From c46b0bc28c22f2ae4b46c592f450e745774846d4 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Fri, 12 Aug 2011 08:21:22 -0500 Subject: [PATCH 091/175] [1011] move Logging Message Format section after Logging configuration section --- doc/guide/bind10-guide.xml | 178 ++++++++++++++++++------------------- 1 file changed, 88 insertions(+), 90 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 48593e5e07..9f3ee80d68 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1470,96 +1470,6 @@ then change those defaults with config set Resolver/forward_addresses[0]/address Logging - -
- Logging Message Format - - - Each message written by BIND 10 to the configured logging - destinations comprises a number of components that identify - the origin of the message and, if the message indicates - a problem, information about the problem that may be - useful in fixing it. - - - - Consider the message below logged to a file: - 2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink] - ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53) - - - - Note: the layout of messages written to the system logging - file (syslog) may be slightly different. This message has - been split across two lines here for display reasons; in the - logging file, it will appear on one line.) - - - - The log message comprises a number of components: - - - - 2011-06-15 13:48:22.034 - - - The date and time at which the message was generated. - - - - - ERROR - - The severity of the message. - - - - - [b10-resolver.asiolink] - - The source of the message. This comprises two components: - the BIND 10 process generating the message (in this - case, b10-resolver) and the module - within the program from which the message originated - (which in the example is the asynchronous I/O link - module, asiolink). - - - - - ASIODNS_OPENSOCK - - The message identification. Every message in BIND 10 - has a unique identification, which can be used as an - index into the BIND 10 Messages - Manual () from which more information can be obtained. - - - - - error 111 opening TCP socket to 127.0.0.1(53) - - A brief description of the cause of the problem. - Within this text, information relating to the condition - that caused the message to be logged will be included. - In this example, error number 111 (an operating - system-specific error number) was encountered when - trying to open a TCP connection to port 53 on the - local system (address 127.0.0.1). 
The next step - would be to find out the reason for the failure by - consulting your system's documentation to identify - what error number 111 means. - - - - - -
-
Logging configuration @@ -2175,6 +2085,94 @@ Logging/loggers[0]/output_options[0]/maxver 8 integer (modified)
+
+ Logging Message Format + + + Each message written by BIND 10 to the configured logging
+ destinations comprises a number of components that identify
+ the origin of the message and, if the message indicates
+ a problem, information about the problem that may be
+ useful in fixing it.
+ + + + Consider the message below logged to a file:
+ 2011-06-15 13:48:22.034 ERROR [b10-resolver.asiolink]
+ ASIODNS_OPENSOCK error 111 opening TCP socket to 127.0.0.1(53)
+ + + + Note: the layout of messages written to the system logging
+ file (syslog) may be slightly different. This message has
+ been split across two lines here for display reasons; in the
+ logging file, it will appear on one line.
+ + + + The log message comprises a number of components:
+ + + + 2011-06-15 13:48:22.034 + + +
The date and time at which the message was generated.
+ + + + + ERROR + + The severity of the message.
+ + + + + [b10-resolver.asiolink] + +
The source of the message. This comprises two components:
+ the BIND 10 process generating the message (in this
+ case, b10-resolver) and the module
+ within the program from which the message originated
+ (which in the example is the asynchronous I/O link
+ module, asiolink).
+ + + + + ASIODNS_OPENSOCK + + The message identification. Every message in BIND 10
+ has a unique identification, which can be used as an
+ index into the BIND 10 Messages
+ Manual () from which more information can be obtained.
+ + + + + error 111 opening TCP socket to 127.0.0.1(53) + +
A brief description of the cause of the problem.
+ Within this text, information relating to the condition
+ that caused the message to be logged will be included.
+ In this example, error number 111 (an operating
+ system-specific error number) was encountered when
+ trying to open a TCP connection to port 53 on the
+ local system (address 127.0.0.1).
The next step + would be to find out the reason for the failure by + consulting your system's documentation to identify + what error number 111 means. + + + + + +
+
From e021b84f7fc20b3e3927093ed87e9c873d33a443 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Fri, 12 Aug 2011 15:23:04 +0200 Subject: [PATCH 092/175] [1063] Check signatures They work out of the box thanks to the getRRset method, but it needs to be tested explicitly. The tests test only the NS and DNAME, the rest is tested above. --- src/lib/datasrc/tests/database_unittest.cc | 43 +++++++++++++--------- 1 file changed, 26 insertions(+), 17 deletions(-) diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index e3d1393154..f4b5d0948c 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -267,12 +267,16 @@ private: // Data for testing delegation (with NS and DNAME) addRecord("NS", "3600", "", "ns.example.com."); addRecord("NS", "3600", "", "ns.delegation.example.org."); + addRecord("RRSIG", "3600", "", "NS 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. FAKEFAKEFAKE"); addCurName("delegation.example.org."); addRecord("A", "3600", "", "192.0.2.1"); addCurName("ns.delegation.example.org."); addRecord("A", "3600", "", "192.0.2.1"); addRecord("DNAME", "3600", "", "dname.example.com."); + addRecord("RRSIG", "3600", "", "DNAME 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. FAKEFAKEFAKE"); addCurName("dname.example.org."); addRecord("A", "3600", "", "192.0.2.1"); addCurName("below.dname.example.org."); @@ -294,6 +298,8 @@ private: // doesn't break anything addRecord("NS", "3600", "", "ns.example.com."); addRecord("A", "3600", "", "192.0.2.1"); + addRecord("RRSIG", "3600", "", "NS 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. 
FAKEFAKEFAKE"); addCurName("example.org."); } }; @@ -396,9 +402,10 @@ doFindTest(shared_ptr finder, expected_rdatas); if (expected_sig_rdatas.size() > 0) { - checkRRset(result.rrset->getRRsig(), name, - finder->getClass(), isc::dns::RRType::RRSIG(), - expected_ttl, expected_sig_rdatas); + checkRRset(result.rrset->getRRsig(), expected_name != Name(".") ? + expected_name : name, finder->getClass(), + isc::dns::RRType::RRSIG(), expected_ttl, + expected_sig_rdatas); } else { EXPECT_EQ(isc::dns::RRsetPtr(), result.rrset->getRRsig()); } @@ -729,6 +736,8 @@ TEST_F(DatabaseClientTest, find) { expected_rdatas.clear(); expected_rdatas.push_back("ns.example.com."); + expected_sig_rdatas.push_back("NS 5 3 3600 20000101000000 20000201000000 " + "12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("example.org."), isc::dns::RRType::NS(), isc::dns::RRType::NS(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas, @@ -741,6 +750,8 @@ TEST_F(DatabaseClientTest, find) { expected_sig_rdatas.clear(); expected_rdatas.push_back("ns.example.com."); expected_rdatas.push_back("ns.delegation.example.org."); + expected_sig_rdatas.push_back("NS 5 3 3600 20000101000000 20000201000000 " + "12345 example.org. 
FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("ns.delegation.example.org."), isc::dns::RRType::A(), isc::dns::RRType::NS(), isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas, @@ -753,12 +764,7 @@ TEST_F(DatabaseClientTest, find) { EXPECT_FALSE(current_database_->searchRunning()); // Even when we check directly at the delegation point, we should get - // the NS (both when the RRset does and doesn't exist in data) - doFindTest(finder, isc::dns::Name("delegation.example.org."), - isc::dns::RRType::A(), isc::dns::RRType::NS(), - isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas, - expected_sig_rdatas); - EXPECT_FALSE(current_database_->searchRunning()); + // the NS doFindTest(finder, isc::dns::Name("delegation.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::NS(), isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas, @@ -777,6 +783,10 @@ TEST_F(DatabaseClientTest, find) { // the behaviour anyway just to make sure) expected_rdatas.clear(); expected_rdatas.push_back("dname.example.com."); + expected_sig_rdatas.clear(); + expected_sig_rdatas.push_back("DNAME 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. 
" + "FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("below.dname.example.org."), isc::dns::RRType::A(), isc::dns::RRType::DNAME(), isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas, @@ -788,9 +798,16 @@ TEST_F(DatabaseClientTest, find) { expected_sig_rdatas, isc::dns::Name("dname.example.org.")); EXPECT_FALSE(current_database_->searchRunning()); + // Asking direcly for DNAME should give SUCCESS + doFindTest(finder, isc::dns::Name("dname.example.org."), + isc::dns::RRType::DNAME(), isc::dns::RRType::DNAME(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas, + expected_sig_rdatas); + // But we don't delegate at DNAME point expected_rdatas.clear(); expected_rdatas.push_back("192.0.2.1"); + expected_sig_rdatas.clear(); doFindTest(finder, isc::dns::Name("dname.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas, @@ -803,14 +820,6 @@ TEST_F(DatabaseClientTest, find) { expected_sig_rdatas); EXPECT_FALSE(current_database_->searchRunning()); - // Asking direcly for DNAME should give SUCCESS - expected_rdatas.clear(); - expected_rdatas.push_back("dname.example.com."); - doFindTest(finder, isc::dns::Name("dname.example.org."), - isc::dns::RRType::DNAME(), isc::dns::RRType::DNAME(), - isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas, - expected_sig_rdatas); - // This is broken dname, it contains two targets EXPECT_THROW(finder->find(isc::dns::Name("below.baddname.example.org."), isc::dns::RRType::A(), NULL, From 17a87c6bb9d16e992fadd47b11b3eb26af54ac69 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Fri, 12 Aug 2011 08:27:48 -0500 Subject: [PATCH 093/175] [1011] changelog entry for trac #1011 --- ChangeLog | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/ChangeLog b/ChangeLog index 5a145584ce..56bf8e97d7 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,3 +1,7 @@ +278. [doc] jelte + Add logging configuration documentation to the guide. + (Trac #1011, git TODO) + 277. 
[func] jerry Implement the SRV rrtype according to RFC2782. (Trac #1128, git 5fd94aa027828c50e63ae1073d9d6708e0a9c223) From 2f49e3eb0ddf31d601184b516b7f44ab4ea6eece Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Sat, 13 Aug 2011 13:56:28 +0200 Subject: [PATCH 094/175] [801] Notes about limitations --- src/bin/bind10/creatorapi.txt | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/src/bin/bind10/creatorapi.txt b/src/bin/bind10/creatorapi.txt index fd6be31a2d..c23d907f9c 100644 --- a/src/bin/bind10/creatorapi.txt +++ b/src/bin/bind10/creatorapi.txt @@ -85,7 +85,9 @@ The commands (IP address, address family, port) and how to allow sharing. Sharing would be one of:
- None
- - Same kind of application
+ - Same kind of application (however, it is not entirely clear what
+ this means, in case it won't work out intuitively, we'll need to
+ define it somehow)
- Any kind of application And a kind of application would be provided, to decide if the sharing is possible (eg. if auth allows sharing with the same kind and something else @@ -113,7 +115,9 @@ Known limitations Currently the socket creator doesn't support specifying any socket options. If it turns out there are any options that need to be set before bind(), we'll need to extend it (and extend the protocol as
-well).
+well). If we want to support them, we'll have to solve a possible
+conflict (what to do when two applications request the same socket and
+want to share it, but want different options).
The current socket creator doesn't know raw sockets, but if they are needed, it should be easy to add. From 978ae99ac4aa211ba4ba960f56bb6cdd84b648ae Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Sat, 13 Aug 2011 14:37:14 +0200 Subject: [PATCH 095/175] [1064] Tests for GLUE_OK mode It should just go through NS delegation points (but not through DNAME ones).
--- src/lib/datasrc/tests/database_unittest.cc | 64 +++++++++++++++++++++- 1 file changed, 62 insertions(+), 2 deletions(-) diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index f4b5d0948c..2a1cb3c362 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -390,11 +390,12 @@ doFindTest(shared_ptr finder, ZoneFinder::Result expected_result, const std::vector& expected_rdatas, const std::vector& expected_sig_rdatas,
- const isc::dns::Name& expected_name = isc::dns::Name::ROOT_NAME())
+ const isc::dns::Name& expected_name = isc::dns::Name::ROOT_NAME(),
+ const ZoneFinder::FindOptions options = ZoneFinder::FIND_DEFAULT)
{ SCOPED_TRACE("doFindTest " + name.toText() + " " + type.toText()); ZoneFinder::FindResult result =
- finder->find(name, type, NULL, ZoneFinder::FIND_DEFAULT);
+ finder->find(name, type, NULL, options);
ASSERT_EQ(expected_result, result.code) << name << " " << type; if (expected_rdatas.size() > 0) { checkRRset(result.rrset, expected_name != Name(".") ? expected_name : @@ -839,6 +840,65 @@ TEST_F(DatabaseClientTest, find) { ZoneFinder::FIND_DEFAULT), DataSourceError); EXPECT_FALSE(current_database_->searchRunning());
+
+ // Glue-OK mode. Just go through NS delegations down (but not through
+ // DNAME) and pretend it is not there.
+ { + SCOPED_TRACE("Glue OK"); + expected_rdatas.clear(); + expected_sig_rdatas.clear(); + doFindTest(finder, isc::dns::Name("ns.delegation.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, expected_rdatas, + expected_sig_rdatas, + isc::dns::Name("ns.delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + doFindTest(finder, isc::dns::Name("nothere.delegation.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), + isc::dns::RRTTL(3600), ZoneFinder::NXDOMAIN, + expected_rdatas, expected_sig_rdatas, + isc::dns::Name("nothere.delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + expected_rdatas.push_back("192.0.2.1"); + doFindTest(finder, isc::dns::Name("ns.delegation.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::A(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas, + expected_sig_rdatas, + isc::dns::Name("ns.delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + expected_rdatas.clear(); + expected_rdatas.push_back("ns.example.com."); + expected_rdatas.push_back("ns.delegation.example.org."); + expected_sig_rdatas.clear(); + expected_sig_rdatas.push_back("NS 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. " + "FAKEFAKEFAKE"); + // When we request the NS, it should be SUCCESS, not DELEGATION + // (different in GLUE_OK) + doFindTest(finder, isc::dns::Name("delegation.example.org."), + isc::dns::RRType::NS(), isc::dns::RRType::NS(), + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas, + expected_sig_rdatas, isc::dns::Name("delegation.example.org."), + ZoneFinder::FIND_GLUE_OK); + expected_rdatas.clear(); + expected_rdatas.push_back("dname.example.com."); + expected_sig_rdatas.clear(); + expected_sig_rdatas.push_back("DNAME 5 3 3600 20000101000000 " + "20000201000000 12345 example.org. 
" + "FAKEFAKEFAKE"); + doFindTest(finder, isc::dns::Name("below.dname.example.org."), + isc::dns::RRType::A(), isc::dns::RRType::DNAME(), + isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas, + expected_sig_rdatas, isc::dns::Name("dname.example.org."), + ZoneFinder::FIND_GLUE_OK); + EXPECT_FALSE(current_database_->searchRunning()); + doFindTest(finder, isc::dns::Name("below.dname.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::DNAME(), + isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas, + expected_sig_rdatas, isc::dns::Name("dname.example.org."), + ZoneFinder::FIND_GLUE_OK); + EXPECT_FALSE(current_database_->searchRunning()); + } // End of GLUE_OK } TEST_F(DatabaseClientTest, getOrigin) { From b9f87e9332895be6915e2f2960a2e921375e8e7f Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Sat, 13 Aug 2011 14:54:57 +0200 Subject: [PATCH 096/175] [1064] Implement GLUE_OK mode It just turns off the flag for the getRRset method when the mode is on, to ignore it on the way. 
--- src/lib/datasrc/database.cc | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 6afd3dce85..8ed49768a7 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -284,11 +284,12 @@ ZoneFinder::FindResult DatabaseClient::Finder::find(const isc::dns::Name& name, const isc::dns::RRType& type, isc::dns::RRsetList*, - const FindOptions) + const FindOptions options) { // This variable is used to determine the difference between // NXDOMAIN and NXRRSET bool records_found = false; + bool glue_ok(options & FIND_GLUE_OK); isc::dns::RRsetPtr result_rrset; ZoneFinder::Result result_status = SUCCESS; std::pair found; @@ -307,7 +308,7 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, Name superdomain(name.split(i)); // Look if there's NS or DNAME (but ignore the NS in origin) found = getRRset(superdomain, NULL, false, true, - i != removeLabels); + i != removeLabels && !glue_ok); if (found.second) { // We found something redirecting somewhere else // (it can be only NS or DNAME here) @@ -326,10 +327,11 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, // Try getting the final result and extract it // It is special if there's a CNAME or NS, DNAME is ignored here // And we don't consider the NS in origin - found = getRRset(name, &type, true, false, name != origin); + found = getRRset(name, &type, true, false, + name != origin && !glue_ok); records_found = found.first; result_rrset = found.second; - if (result_rrset && name != origin && + if (result_rrset && name != origin && !glue_ok && result_rrset->getType() == isc::dns::RRType::NS()) { result_status = DELEGATION; } else if (result_rrset && type != isc::dns::RRType::CNAME() && From e9e36557849ba6b650e503841596bd31034c1936 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Wed, 20 Jul 2011 09:00:53 +0900 Subject: [PATCH 097/175] [trac928] add statistics category and statistics items into some spec files 
(bob.spec, auth.spec, stats.spec) --- src/bin/auth/auth.spec.pre.in | 18 ++++++++++++++ src/bin/bind10/bob.spec | 11 +++++++++ src/bin/stats/stats.spec | 45 +++++++++++++++++++++++++++++++++++ 3 files changed, 74 insertions(+) diff --git a/src/bin/auth/auth.spec.pre.in b/src/bin/auth/auth.spec.pre.in index d88ffb5e3e..2ce044e440 100644 --- a/src/bin/auth/auth.spec.pre.in +++ b/src/bin/auth/auth.spec.pre.in @@ -122,6 +122,24 @@ } ] } + ], + "statistics": [ + { + "item_name": "queries.tcp", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "Queries TCP ", + "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially" + }, + { + "item_name": "queries.udp", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "Queries UDP", + "item_description": "A number of total query counts which all auth servers receive over UDP since they started initially" + } ] } } diff --git a/src/bin/bind10/bob.spec b/src/bin/bind10/bob.spec index 1184fd1fc2..b4cfac6da6 100644 --- a/src/bin/bind10/bob.spec +++ b/src/bin/bind10/bob.spec @@ -37,6 +37,17 @@ "command_description": "List the running BIND 10 processes", "command_args": [] } + ], + "statistics": [ + { + "item_name": "boot_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Boot time", + "item_description": "A date time when bind10 process starts initially", + "item_format": "date-time" + } ] } } diff --git a/src/bin/stats/stats.spec b/src/bin/stats/stats.spec index 25f6b54827..635eb486a1 100644 --- a/src/bin/stats/stats.spec +++ b/src/bin/stats/stats.spec @@ -56,6 +56,51 @@ "command_description": "Shut down the stats module", "command_args": [] } + ], + "statistics": [ + { + "item_name": "report_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Report time", + 
"item_description": "A date time when stats module reports", + "item_format": "date-time" + }, + { + "item_name": "boot_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Boot time", + "item_description": "A date time when the stats module starts initially or when the stats module restarts", + "item_format": "date-time" + }, + { + "item_name": "last_update_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Last update time", + "item_description": "The latest date time when the stats module receives from other modules like auth server or boss process and so on", + "item_format": "date-time" + }, + { + "item_name": "timestamp", + "item_type": "real", + "item_optional": false, + "item_default": 0.0, + "item_title": "Timestamp", + "item_description": "A current time stamp since epoch time (1970-01-01T00:00:00Z)" + }, + { + "item_name": "lname", + "item_type": "string", + "item_optional": false, + "item_default": "", + "item_title": "Local Name", + "item_description": "A localname of stats module given via CC protocol" + } ] } } From 990247bfd2248be5ae4293928101eec87e1997e9 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 15:44:40 +0900 Subject: [PATCH 098/175] [trac929] add some spec files for unittest of statistics category --- src/lib/config/tests/testdata/Makefile.am | 8 ++++ src/lib/config/tests/testdata/data33_1.data | 7 +++ src/lib/config/tests/testdata/data33_2.data | 7 +++ src/lib/config/tests/testdata/spec33.spec | 50 +++++++++++++++++++++ src/lib/config/tests/testdata/spec34.spec | 14 ++++++ src/lib/config/tests/testdata/spec35.spec | 15 +++++++ src/lib/config/tests/testdata/spec36.spec | 17 +++++++ src/lib/config/tests/testdata/spec37.spec | 7 +++ src/lib/config/tests/testdata/spec38.spec | 17 +++++++ 9 files changed, 142 insertions(+) create mode 100644 src/lib/config/tests/testdata/data33_1.data create mode 
100644 src/lib/config/tests/testdata/data33_2.data create mode 100644 src/lib/config/tests/testdata/spec33.spec create mode 100644 src/lib/config/tests/testdata/spec34.spec create mode 100644 src/lib/config/tests/testdata/spec35.spec create mode 100644 src/lib/config/tests/testdata/spec36.spec create mode 100644 src/lib/config/tests/testdata/spec37.spec create mode 100644 src/lib/config/tests/testdata/spec38.spec diff --git a/src/lib/config/tests/testdata/Makefile.am b/src/lib/config/tests/testdata/Makefile.am index 91d7f04540..0d8b92ecb5 100644 --- a/src/lib/config/tests/testdata/Makefile.am +++ b/src/lib/config/tests/testdata/Makefile.am @@ -25,6 +25,8 @@ EXTRA_DIST += data22_10.data EXTRA_DIST += data32_1.data EXTRA_DIST += data32_2.data EXTRA_DIST += data32_3.data +EXTRA_DIST += data33_1.data +EXTRA_DIST += data33_2.data EXTRA_DIST += spec1.spec EXTRA_DIST += spec2.spec EXTRA_DIST += spec3.spec @@ -57,3 +59,9 @@ EXTRA_DIST += spec29.spec EXTRA_DIST += spec30.spec EXTRA_DIST += spec31.spec EXTRA_DIST += spec32.spec +EXTRA_DIST += spec33.spec +EXTRA_DIST += spec34.spec +EXTRA_DIST += spec35.spec +EXTRA_DIST += spec36.spec +EXTRA_DIST += spec37.spec +EXTRA_DIST += spec38.spec diff --git a/src/lib/config/tests/testdata/data33_1.data b/src/lib/config/tests/testdata/data33_1.data new file mode 100644 index 0000000000..429852c974 --- /dev/null +++ b/src/lib/config/tests/testdata/data33_1.data @@ -0,0 +1,7 @@ +{ + "dummy_str": "Dummy String", + "dummy_int": 118, + "dummy_datetime": "2011-05-27T19:42:57Z", + "dummy_date": "2011-05-27", + "dummy_time": "19:42:57" +} diff --git a/src/lib/config/tests/testdata/data33_2.data b/src/lib/config/tests/testdata/data33_2.data new file mode 100644 index 0000000000..eb0615c1c9 --- /dev/null +++ b/src/lib/config/tests/testdata/data33_2.data @@ -0,0 +1,7 @@ +{ + "dummy_str": "Dummy String", + "dummy_int": 118, + "dummy_datetime": "xxxx", + "dummy_date": "xxxx", + "dummy_time": "xxxx" +} diff --git 
a/src/lib/config/tests/testdata/spec33.spec b/src/lib/config/tests/testdata/spec33.spec new file mode 100644 index 0000000000..3002488b72 --- /dev/null +++ b/src/lib/config/tests/testdata/spec33.spec @@ -0,0 +1,50 @@ +{ + "module_spec": { + "module_name": "Spec33", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_title": "Dummy String", + "item_description": "A dummy string" + }, + { + "item_name": "dummy_int", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "Dummy Integer", + "item_description": "A dummy integer" + }, + { + "item_name": "dummy_datetime", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Dummy DateTime", + "item_description": "A dummy datetime", + "item_format": "date-time" + }, + { + "item_name": "dummy_date", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01", + "item_title": "Dummy Date", + "item_description": "A dummy date", + "item_format": "date" + }, + { + "item_name": "dummy_time", + "item_type": "string", + "item_optional": false, + "item_default": "00:00:00", + "item_title": "Dummy Time", + "item_description": "A dummy time", + "item_format": "time" + } + ] + } +} diff --git a/src/lib/config/tests/testdata/spec34.spec b/src/lib/config/tests/testdata/spec34.spec new file mode 100644 index 0000000000..dd1f3ca952 --- /dev/null +++ b/src/lib/config/tests/testdata/spec34.spec @@ -0,0 +1,14 @@ +{ + "module_spec": { + "module_name": "Spec34", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_description": "A dummy string" + } + ] + } +} diff --git a/src/lib/config/tests/testdata/spec35.spec b/src/lib/config/tests/testdata/spec35.spec new file mode 100644 index 0000000000..86aaf145a0 --- /dev/null +++ b/src/lib/config/tests/testdata/spec35.spec @@ 
-0,0 +1,15 @@ +{ + "module_spec": { + "module_name": "Spec35", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_title": "Dummy String" + } + ] + } +} + diff --git a/src/lib/config/tests/testdata/spec36.spec b/src/lib/config/tests/testdata/spec36.spec new file mode 100644 index 0000000000..fb9ce26084 --- /dev/null +++ b/src/lib/config/tests/testdata/spec36.spec @@ -0,0 +1,17 @@ +{ + "module_spec": { + "module_name": "Spec36", + "statistics": [ + { + "item_name": "dummy_str", + "item_type": "string", + "item_optional": false, + "item_default": "Dummy", + "item_title": "Dummy String", + "item_description": "A dummy string", + "item_format": "dummy" + } + ] + } +} + diff --git a/src/lib/config/tests/testdata/spec37.spec b/src/lib/config/tests/testdata/spec37.spec new file mode 100644 index 0000000000..bc444d107c --- /dev/null +++ b/src/lib/config/tests/testdata/spec37.spec @@ -0,0 +1,7 @@ +{ + "module_spec": { + "module_name": "Spec37", + "statistics": 8 + } +} + diff --git a/src/lib/config/tests/testdata/spec38.spec b/src/lib/config/tests/testdata/spec38.spec new file mode 100644 index 0000000000..1892e887fb --- /dev/null +++ b/src/lib/config/tests/testdata/spec38.spec @@ -0,0 +1,17 @@ +{ + "module_spec": { + "module_name": "Spec38", + "statistics": [ + { + "item_name": "dummy_datetime", + "item_type": "string", + "item_optional": false, + "item_default": "11", + "item_title": "Dummy DateTime", + "item_description": "A dummy datetime", + "item_format": "date-time" + } + ] + } +} + From 7d5c3d56743fb696405f509663b3e1558fa72e25 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 15:45:15 +0900 Subject: [PATCH 099/175] [trac929] add a statistics category into "spec2.spec" and modify message string to be compared with in EXPECT_EQ --- src/lib/config/tests/ccsession_unittests.cc | 4 ++-- src/lib/config/tests/testdata/spec2.spec | 11 +++++++++++ 2 files changed, 13 
insertions(+), 2 deletions(-) diff --git a/src/lib/config/tests/ccsession_unittests.cc b/src/lib/config/tests/ccsession_unittests.cc index 5ea4f32e3e..793fa30457 100644 --- a/src/lib/config/tests/ccsession_unittests.cc +++ b/src/lib/config/tests/ccsession_unittests.cc @@ -184,7 +184,7 @@ TEST_F(CCSessionTest, session2) { ConstElementPtr msg; std::string group, to; msg = session.getFirstMessage(group, to); - EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": \"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\" } ] }", msg->str()); + EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": 
\"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": \"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\", \"statistics\": [ { \"item_default\": \"1970-01-01T00:00:00Z\", \"item_description\": \"A dummy date time\", \"item_format\": \"date-time\", \"item_name\": \"dummy_time\", \"item_optional\": false, \"item_title\": \"Dummy Time\", \"item_type\": \"string\" } ] } ] }", msg->str()); EXPECT_EQ("ConfigManager", group); EXPECT_EQ("*", to); EXPECT_EQ(0, session.getMsgQueue()->size()); @@ -231,7 +231,7 @@ TEST_F(CCSessionTest, session3) { ConstElementPtr msg; std::string group, to; msg = session.getFirstMessage(group, to); - EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", 
\"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": \"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\" } ] }", msg->str()); + EXPECT_EQ("{ \"command\": [ \"module_spec\", { \"commands\": [ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ], \"config_data\": [ { \"item_default\": 1, \"item_name\": \"item1\", \"item_optional\": false, \"item_type\": \"integer\" }, { \"item_default\": 1.1, \"item_name\": \"item2\", \"item_optional\": false, \"item_type\": 
\"real\" }, { \"item_default\": true, \"item_name\": \"item3\", \"item_optional\": false, \"item_type\": \"boolean\" }, { \"item_default\": \"test\", \"item_name\": \"item4\", \"item_optional\": false, \"item_type\": \"string\" }, { \"item_default\": [ \"a\", \"b\" ], \"item_name\": \"item5\", \"item_optional\": false, \"item_type\": \"list\", \"list_item_spec\": { \"item_default\": \"\", \"item_name\": \"list_element\", \"item_optional\": false, \"item_type\": \"string\" } }, { \"item_default\": { }, \"item_name\": \"item6\", \"item_optional\": false, \"item_type\": \"map\", \"map_item_spec\": [ { \"item_default\": \"default\", \"item_name\": \"value1\", \"item_optional\": true, \"item_type\": \"string\" }, { \"item_name\": \"value2\", \"item_optional\": true, \"item_type\": \"integer\" } ] } ], \"module_name\": \"Spec2\", \"statistics\": [ { \"item_default\": \"1970-01-01T00:00:00Z\", \"item_description\": \"A dummy date time\", \"item_format\": \"date-time\", \"item_name\": \"dummy_time\", \"item_optional\": false, \"item_title\": \"Dummy Time\", \"item_type\": \"string\" } ] } ] }", msg->str()); EXPECT_EQ("ConfigManager", group); EXPECT_EQ("*", to); EXPECT_EQ(1, session.getMsgQueue()->size()); diff --git a/src/lib/config/tests/testdata/spec2.spec b/src/lib/config/tests/testdata/spec2.spec index 59b8ebcbbb..43524224a2 100644 --- a/src/lib/config/tests/testdata/spec2.spec +++ b/src/lib/config/tests/testdata/spec2.spec @@ -66,6 +66,17 @@ "command_description": "Shut down BIND 10", "command_args": [] } + ], + "statistics": [ + { + "item_name": "dummy_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Dummy Time", + "item_description": "A dummy date time", + "item_format": "date-time" + } ] } } From 5e621bce015d2847104303fba574989fdf0399e0 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 15:45:28 +0900 Subject: [PATCH 100/175] [trac929] add COMMAND_GET_STATISTICS_SPEC for 
"get_statistics_spec" --- src/lib/python/isc/config/ccsession.py | 1 + 1 file changed, 1 insertion(+) diff --git a/src/lib/python/isc/config/ccsession.py b/src/lib/python/isc/config/ccsession.py index 4fa9d58f9c..ba7724ce55 100644 --- a/src/lib/python/isc/config/ccsession.py +++ b/src/lib/python/isc/config/ccsession.py @@ -91,6 +91,7 @@ COMMAND_CONFIG_UPDATE = "config_update" COMMAND_MODULE_SPECIFICATION_UPDATE = "module_specification_update" COMMAND_GET_COMMANDS_SPEC = "get_commands_spec" +COMMAND_GET_STATISTICS_SPEC = "get_statistics_spec" COMMAND_GET_CONFIG = "get_config" COMMAND_SET_CONFIG = "set_config" COMMAND_GET_MODULE_SPEC = "get_module_spec" From ab5085e81007711f9d18ed77f3d78f51cf37545c Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 15:46:27 +0900 Subject: [PATCH 101/175] [trac929] add "get_statistics_spec" into cfgmgr.py it pushes contents in statistics category of each spec file. --- src/lib/python/isc/config/cfgmgr.py | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/src/lib/python/isc/config/cfgmgr.py b/src/lib/python/isc/config/cfgmgr.py index 18e001c306..1db9fd389f 100644 --- a/src/lib/python/isc/config/cfgmgr.py +++ b/src/lib/python/isc/config/cfgmgr.py @@ -267,6 +267,19 @@ class ConfigManager: commands[module_name] = self.module_specs[module_name].get_commands_spec() return commands + def get_statistics_spec(self, name = None): + """Returns a dict containing 'module_name': statistics_spec for + all modules. 
If name is specified, only that module will + be included""" + statistics = {} + if name: + if name in self.module_specs: + statistics[name] = self.module_specs[name].get_statistics_spec() + else: + for module_name in self.module_specs.keys(): + statistics[module_name] = self.module_specs[module_name].get_statistics_spec() + return statistics + def read_config(self): """Read the current configuration from the file specificied at init()""" try: @@ -457,6 +470,8 @@ class ConfigManager: if cmd: if cmd == ccsession.COMMAND_GET_COMMANDS_SPEC: answer = ccsession.create_answer(0, self.get_commands_spec()) + elif cmd == ccsession.COMMAND_GET_STATISTICS_SPEC: + answer = ccsession.create_answer(0, self.get_statistics_spec()) elif cmd == ccsession.COMMAND_GET_MODULE_SPEC: answer = self._handle_get_module_spec(arg) elif cmd == ccsession.COMMAND_GET_CONFIG: From cddcafd790288f5e666198effa142132b6fc43fa Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 15:46:46 +0900 Subject: [PATCH 102/175] [trac929] add "validate_statistics" which validates statistics specification in the spec file It checks data types and data format of statistics specification --- src/lib/python/isc/config/module_spec.py | 91 +++++++++++++++++++++++- 1 file changed, 89 insertions(+), 2 deletions(-) diff --git a/src/lib/python/isc/config/module_spec.py b/src/lib/python/isc/config/module_spec.py index 9aa49e03e7..976be79dec 100644 --- a/src/lib/python/isc/config/module_spec.py +++ b/src/lib/python/isc/config/module_spec.py @@ -23,6 +23,7 @@ import json import sys +import time import isc.cc.data @@ -117,6 +118,26 @@ class ModuleSpec: return False + def validate_statistics(self, full, stat, errors = None): + """Check whether the given piece of data conforms to this + data definition. If so, it returns True. If not, it will + return false. If errors is given, and is an array, a string + describing the error will be appended to it. 
The current + version stops as soon as there is one error so this list + will not be exhaustive. If 'full' is true, it also errors on + non-optional missing values. Set this to False if you want to + validate only a part of a statistics tree (like a list of + non-default values). It also checks 'item_format' for + date/time values.""" + stat_spec = self.get_statistics_spec() + if stat_spec: + return _validate_spec_list(stat_spec, full, stat, errors) + else: + # no spec, always bad + if errors != None: + errors.append("No statistics specification") + return False + def get_module_name(self): """Returns a string containing the name of the module as specified by the specification given at __init__()""" @@ -152,6 +173,14 @@ else: return None + def get_statistics_spec(self): + """Returns a dict representation of the statistics part of the + specification, or None if there is none.""" + if 'statistics' in self._module_spec: + return self._module_spec['statistics'] + else: + return None + def __str__(self): """Returns a string representation of the full specification""" return self._module_spec.__str__() @@ -160,8 +189,9 @@ def _check(module_spec): """Checks the full specification. This is a dict that contains the element "module_spec", which is in itself a dict that must contain at least a "module_name" (string) and optionally - a "config_data" and a "commands" element, both of which are lists - of dicts. Raises a ModuleSpecError if there is a problem.""" + a "config_data", a "commands" and a "statistics" element, all + of which are lists of dicts. 
Raises a ModuleSpecError if there + is a problem.""" if type(module_spec) != dict: raise ModuleSpecError("data specification not a dict") if "module_name" not in module_spec: @@ -173,6 +203,8 @@ def _check(module_spec): _check_config_spec(module_spec["config_data"]) if "commands" in module_spec: _check_command_spec(module_spec["commands"]) + if "statistics" in module_spec: + _check_statistics_spec(module_spec["statistics"]) def _check_config_spec(config_data): # config data is a list of items represented by dicts that contain @@ -263,7 +295,46 @@ def _check_item_spec(config_item): if type(map_item) != dict: raise ModuleSpecError("map_item_spec element is not a dict") _check_item_spec(map_item) + if 'item_format' in config_item and 'item_default' in config_item: + item_format = config_item["item_format"] + item_default = config_item["item_default"] + if not _check_format(item_default, item_format): + raise ModuleSpecError( + "Wrong format for " + str(item_default) + " in " + str(item_name)) +def _check_statistics_spec(statistics): + # statistics is a list of items represented by dicts that contain + # things like "item_name", depending on the type they can have + # specific subitems + """Checks a list that contains the statistics part of the + specification. Raises a ModuleSpecError if there is a + problem.""" + if type(statistics) != list: + raise ModuleSpecError("statistics is of type " + str(type(statistics)) + + ", not a list of items") + for stat_item in statistics: + _check_item_spec(stat_item) + # Additionally checks if there are 'item_title' and + # 'item_description' + for item in [ 'item_title', 'item_description' ]: + if item not in stat_item: + raise ModuleSpecError("no " + item + " in statistics item") + +def _check_format(value, format_name): + """Check if specified value and format are correct. 
Return True if + it is correct.""" + # TODO: should be added other format types if necessary + time_formats = { 'date-time' : "%Y-%m-%dT%H:%M:%SZ", + 'date' : "%Y-%m-%d", + 'time' : "%H:%M:%S" } + for fmt in time_formats: + if format_name == fmt: + try: + time.strptime(value, time_formats[fmt]) + return True + except (ValueError, TypeError): + break + return False def _validate_type(spec, value, errors): """Returns true if the value is of the correct type given the @@ -300,6 +371,18 @@ else: return True +def _validate_format(spec, value, errors): + """Returns true if the value is of the correct format given the + specification. Also returns True if there is no 'item_format'""" + if "item_format" in spec: + item_format = spec['item_format'] + if not _check_format(value, item_format): + if errors != None: + errors.append("format type of " + str(value) + + " should be " + item_format) + return False + return True + def _validate_item(spec, full, data, errors): if not _validate_type(spec, data, errors): return False @@ -308,6 +391,8 @@ for data_el in data: if not _validate_type(list_spec, data_el, errors): return False + if not _validate_format(list_spec, data_el, errors): + return False if list_spec['item_type'] == "map": if not _validate_item(list_spec, full, data_el, errors): return False @@ -322,6 +407,8 @@ return False if not _validate_item(named_set_spec, full, data_el, errors): return False + elif not _validate_format(spec, data, errors): + return False return True def _validate_spec(spec, full, data, errors): From e443a325b31edefe9cd4da71e10497db6544468c Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 15:46:57 +0900 Subject: [PATCH 103/175] [trac929] add unittests for the functions: - validate_format - check_format - validate_format --- .../isc/config/tests/module_spec_test.py | 103 ++++++++++++++++++ 1 file changed, 
103 insertions(+) diff --git a/src/lib/python/isc/config/tests/module_spec_test.py b/src/lib/python/isc/config/tests/module_spec_test.py index be862c5012..567cfd4945 100644 --- a/src/lib/python/isc/config/tests/module_spec_test.py +++ b/src/lib/python/isc/config/tests/module_spec_test.py @@ -81,6 +81,11 @@ class TestModuleSpec(unittest.TestCase): self.assertRaises(ModuleSpecError, self.read_spec_file, "spec20.spec") self.assertRaises(ModuleSpecError, self.read_spec_file, "spec21.spec") self.assertRaises(ModuleSpecError, self.read_spec_file, "spec26.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec34.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec35.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec36.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec37.spec") + self.assertRaises(ModuleSpecError, self.read_spec_file, "spec38.spec") def validate_data(self, specfile_name, datafile_name): dd = self.read_spec_file(specfile_name); @@ -123,6 +128,17 @@ class TestModuleSpec(unittest.TestCase): self.assertEqual(False, self.validate_command_params("spec27.spec", "data22_8.data", 'cmd1')) self.assertEqual(False, self.validate_command_params("spec27.spec", "data22_8.data", 'cmd2')) + def test_statistics_validation(self): + def _validate_stat(specfile_name, datafile_name): + dd = self.read_spec_file(specfile_name); + data_file = open(self.spec_file(datafile_name)) + data_str = data_file.read() + data = isc.cc.data.parse_value_str(data_str) + return dd.validate_statistics(True, data, []) + self.assertFalse(self.read_spec_file("spec1.spec").validate_statistics(True, None, None)); + self.assertTrue(_validate_stat("spec33.spec", "data33_1.data")) + self.assertFalse(_validate_stat("spec33.spec", "data33_2.data")) + def test_init(self): self.assertRaises(ModuleSpecError, ModuleSpec, 1) module_spec = isc.config.module_spec_from_file(self.spec_file("spec1.spec"), False) @@ -269,6 +285,74 @@ class 
TestModuleSpec(unittest.TestCase): } ) + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_datetime", + 'item_type': "string", + 'item_optional': False, + 'item_default': 1, + 'item_format': "date-time" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_date", + 'item_type': "string", + 'item_optional': False, + 'item_default': 1, + 'item_format': "date" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_time", + 'item_type': "string", + 'item_optional': False, + 'item_default': 1, + 'item_format': "time" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_datetime", + 'item_type': "string", + 'item_optional': False, + 'item_default': "2011-05-27T19:42:57Z", + 'item_format': "dummy-format" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_date", + 'item_type': "string", + 'item_optional': False, + 'item_default': "2011-05-27", + 'item_format': "dummy-format" + } + ) + + self.assertRaises(ModuleSpecError, isc.config.module_spec._check_item_spec, + { 'item_name': "a_time", + 'item_type': "string", + 'item_optional': False, + 'item_default': "19:42:57Z", + 'item_format': "dummy-format" + } + ) + + def test_check_format(self): + self.assertTrue(isc.config.module_spec._check_format('2011-05-27T19:42:57Z', 'date-time')) + self.assertTrue(isc.config.module_spec._check_format('2011-05-27', 'date')) + self.assertTrue(isc.config.module_spec._check_format('19:42:57', 'time')) + self.assertFalse(isc.config.module_spec._check_format('2011-05-27T19:42:57Z', 'dummy')) + self.assertFalse(isc.config.module_spec._check_format('2011-05-27', 'dummy')) + self.assertFalse(isc.config.module_spec._check_format('19:42:57', 'dummy')) + self.assertFalse(isc.config.module_spec._check_format('2011-13-99T99:99:99Z', 
'date-time')) + self.assertFalse(isc.config.module_spec._check_format('2011-13-99', 'date')) + self.assertFalse(isc.config.module_spec._check_format('99:99:99', 'time')) + self.assertFalse(isc.config.module_spec._check_format('', 'date-time')) + self.assertFalse(isc.config.module_spec._check_format(None, 'date-time')) + self.assertFalse(isc.config.module_spec._check_format(None, None)) + def test_validate_type(self): errors = [] self.assertEqual(True, isc.config.module_spec._validate_type({ 'item_type': 'integer' }, 1, errors)) @@ -306,6 +390,25 @@ class TestModuleSpec(unittest.TestCase): self.assertEqual(False, isc.config.module_spec._validate_type({ 'item_type': 'map' }, 1, errors)) self.assertEqual(['1 should be a map'], errors) + def test_validate_format(self): + errors = [] + self.assertEqual(True, isc.config.module_spec._validate_format({ 'item_format': 'date-time' }, "2011-05-27T19:42:57Z", errors)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date-time' }, "a", None)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date-time' }, "a", errors)) + self.assertEqual(['format type of a should be date-time'], errors) + + errors = [] + self.assertEqual(True, isc.config.module_spec._validate_format({ 'item_format': 'date' }, "2011-05-27", errors)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date' }, "a", None)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'date' }, "a", errors)) + self.assertEqual(['format type of a should be date'], errors) + + errors = [] + self.assertEqual(True, isc.config.module_spec._validate_format({ 'item_format': 'time' }, "19:42:57", errors)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'time' }, "a", None)) + self.assertEqual(False, isc.config.module_spec._validate_format({ 'item_format': 'time' }, "a", errors)) + self.assertEqual(['format type of a 
should be time'], errors) + def test_validate_spec(self): spec = { 'item_name': "an_item", 'item_type': "string", From c4ef641d07c7ddfd6b86d6b5ae944ab9a30d6990 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 15:47:09 +0900 Subject: [PATCH 104/175] [trac929] add unittest of "get_statistics_spec" --- .../python/isc/config/tests/cfgmgr_test.py | 22 +++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/src/lib/python/isc/config/tests/cfgmgr_test.py b/src/lib/python/isc/config/tests/cfgmgr_test.py index 0a9e2d3e44..eacc425dd5 100644 --- a/src/lib/python/isc/config/tests/cfgmgr_test.py +++ b/src/lib/python/isc/config/tests/cfgmgr_test.py @@ -219,6 +219,25 @@ class TestConfigManager(unittest.TestCase): commands_spec = self.cm.get_commands_spec('Spec2') self.assertEqual(commands_spec['Spec2'], module_spec.get_commands_spec()) + def test_get_statistics_spec(self): + statistics_spec = self.cm.get_statistics_spec() + self.assertEqual(statistics_spec, {}) + module_spec = isc.config.module_spec.module_spec_from_file(self.data_path + os.sep + "spec1.spec") + self.assert_(module_spec.get_module_name() not in self.cm.module_specs) + self.cm.set_module_spec(module_spec) + self.assert_(module_spec.get_module_name() in self.cm.module_specs) + statistics_spec = self.cm.get_statistics_spec() + self.assertEqual(statistics_spec, { 'Spec1': None }) + self.cm.remove_module_spec('Spec1') + module_spec = isc.config.module_spec.module_spec_from_file(self.data_path + os.sep + "spec2.spec") + self.assert_(module_spec.get_module_name() not in self.cm.module_specs) + self.cm.set_module_spec(module_spec) + self.assert_(module_spec.get_module_name() in self.cm.module_specs) + statistics_spec = self.cm.get_statistics_spec() + self.assertEqual(statistics_spec['Spec2'], module_spec.get_statistics_spec()) + statistics_spec = self.cm.get_statistics_spec('Spec2') + self.assertEqual(statistics_spec['Spec2'], module_spec.get_statistics_spec()) + def test_read_config(self): 
self.assertEqual(self.cm.config.data, {'version': config_data.BIND10_CONFIG_DATA_VERSION}) self.cm.read_config() @@ -241,6 +260,7 @@ class TestConfigManager(unittest.TestCase): self._handle_msg_helper("", { 'result': [ 1, 'Unknown message format: ']}) self._handle_msg_helper({ "command": [ "badcommand" ] }, { 'result': [ 1, "Unknown command: badcommand"]}) self._handle_msg_helper({ "command": [ "get_commands_spec" ] }, { 'result': [ 0, {} ]}) + self._handle_msg_helper({ "command": [ "get_statistics_spec" ] }, { 'result': [ 0, {} ]}) self._handle_msg_helper({ "command": [ "get_module_spec" ] }, { 'result': [ 0, {} ]}) self._handle_msg_helper({ "command": [ "get_module_spec", { "module_name": "Spec2" } ] }, { 'result': [ 0, {} ]}) #self._handle_msg_helper({ "command": [ "get_module_spec", { "module_name": "nosuchmodule" } ] }, @@ -329,6 +349,7 @@ class TestConfigManager(unittest.TestCase): { "module_name" : "Spec2" } ] }, { 'result': [ 0, self.spec.get_full_spec() ] }) self._handle_msg_helper({ "command": [ "get_commands_spec" ] }, { 'result': [ 0, { self.spec.get_module_name(): self.spec.get_commands_spec() } ]}) + self._handle_msg_helper({ "command": [ "get_statistics_spec" ] }, { 'result': [ 0, { self.spec.get_module_name(): self.spec.get_statistics_spec() } ]}) # re-add this once we have new way to propagate spec changes (1 instead of the current 2 messages) #self.assertEqual(len(self.fake_session.message_queue), 2) # the name here is actually wrong (and hardcoded), but needed in the current version @@ -450,6 +471,7 @@ class TestConfigManager(unittest.TestCase): def test_run(self): self.fake_session.group_sendmsg({ "command": [ "get_commands_spec" ] }, "ConfigManager") + self.fake_session.group_sendmsg({ "command": [ "get_statistics_spec" ] }, "ConfigManager") self.fake_session.group_sendmsg({ "command": [ "shutdown" ] }, "ConfigManager") self.cm.run() pass From d5ded106a85afaf695e59941bd382bca4811fe46 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Tue, 19 Jul 
2011 20:55:39 +0900 Subject: [PATCH 105/175] [trac929] addition and modification as a variant of the statistics part of module_spec.py - add check_format which checks whether the given element is a valid statistics specification - modify check_data_specification to add check of statistics specification - add getStatisticsSpec() which returns statistics specification - add two of validateStatistics which check whether specified data is valid for statistics specification - modify validateItem to add check of item_format in specification update the year of copyright --- src/lib/config/module_spec.cc | 84 ++++++++++++++++++++++++++++++++++- 1 file changed, 83 insertions(+), 1 deletion(-) diff --git a/src/lib/config/module_spec.cc b/src/lib/config/module_spec.cc index 306c7954f4..eed6b72430 100644 --- a/src/lib/config/module_spec.cc +++ b/src/lib/config/module_spec.cc @@ -1,4 +1,4 @@ -// Copyright (C) 2010 Internet Systems Consortium. +// Copyright (C) 2010, 2011 Internet Systems Consortium. 
// // Permission to use, copy, modify, and distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -87,6 +87,54 @@ check_config_item_list(ConstElementPtr spec) { } } +// checks whether the given element is a valid statistics specification +// returns false if the specification is bad +bool +check_format(ConstElementPtr value, ConstElementPtr format_name) { + typedef std::map<std::string, std::string> format_types; + format_types time_formats; + // TODO: should be added other format types if necessary + time_formats.insert( + format_types::value_type("date-time", "%Y-%m-%dT%H:%M:%SZ") ); + time_formats.insert( + format_types::value_type("date", "%Y-%m-%d") ); + time_formats.insert( + format_types::value_type("time", "%H:%M:%S") ); + BOOST_FOREACH (const format_types::value_type& f, time_formats) { + if (format_name->stringValue() == f.first) { + struct tm tm; + return (strptime(value->stringValue().c_str(), + f.second.c_str(), &tm) != NULL); + } + } + return (false); +} + +void check_statistics_item_list(ConstElementPtr spec); + +void +check_statistics_item_list(ConstElementPtr spec) { + if (spec->getType() != Element::list) { + throw ModuleSpecError("statistics is not a list of elements"); + } + BOOST_FOREACH(ConstElementPtr item, spec->listValue()) { + check_config_item(item); + // additional checks for statistics + check_leaf_item(item, "item_title", Element::string, true); + check_leaf_item(item, "item_description", Element::string, true); + check_leaf_item(item, "item_format", Element::string, false); + // checks name of item_format and validation of item_default + if (item->contains("item_format") + && item->contains("item_default")) { + if(!check_format(item->get("item_default"), + item->get("item_format"))) { + throw ModuleSpecError( + "item_default not valid type of item_format"); + } + } + } +} + void check_command(ConstElementPtr spec) { check_leaf_item(spec, "command_name", Element::string, true); @@ -116,6 +164,9 @@ 
check_data_specification(ConstElementPtr spec) { if (spec->contains("commands")) { check_command_list(spec->get("commands")); } + if (spec->contains("statistics")) { + check_statistics_item_list(spec->get("statistics")); + } } // checks whether the given element is a valid module specification @@ -165,6 +216,15 @@ ModuleSpec::getConfigSpec() const { } } +ConstElementPtr +ModuleSpec::getStatisticsSpec() const { + if (module_specification->contains("statistics")) { + return (module_specification->get("statistics")); + } else { + return (ElementPtr()); + } +} + const std::string ModuleSpec::getModuleName() const { return (module_specification->get("module_name")->stringValue()); @@ -185,6 +245,12 @@ ModuleSpec::validateConfig(ConstElementPtr data, const bool full) const { return (validateSpecList(spec, data, full, ElementPtr())); } +bool +ModuleSpec::validateStatistics(ConstElementPtr data, const bool full) const { + ConstElementPtr spec = module_specification->find("statistics"); + return (validateSpecList(spec, data, full, ElementPtr())); +} + bool ModuleSpec::validateCommand(const std::string& command, ConstElementPtr args, @@ -223,6 +289,14 @@ ModuleSpec::validateConfig(ConstElementPtr data, const bool full, return (validateSpecList(spec, data, full, errors)); } +bool +ModuleSpec::validateStatistics(ConstElementPtr data, const bool full, + ElementPtr errors) const +{ + ConstElementPtr spec = module_specification->find("statistics"); + return (validateSpecList(spec, data, full, errors)); +} + ModuleSpec moduleSpecFromFile(const std::string& file_name, const bool check) throw(JSONError, ModuleSpecError) @@ -343,6 +417,14 @@ ModuleSpec::validateItem(ConstElementPtr spec, ConstElementPtr data, } } } + if (spec->contains("item_format")) { + if (!check_format(data, spec->get("item_format"))) { + if (errors) { + errors->add(Element::create("Format mismatch")); + } + return (false); + } + } return (true); } From 7cf7ec751e4f776dbb60cd290cea4fb217173cdb Mon Sep 17 00:00:00 
2001 From: Naoki Kambe Date: Tue, 19 Jul 2011 20:57:32 +0900 Subject: [PATCH 106/175] [trac929] add some methods for statistics specification - getStatisticsSpec() - validateStatistics() - validateStatistics() update the year of copyright --- src/lib/config/module_spec.h | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/src/lib/config/module_spec.h b/src/lib/config/module_spec.h index ab6e273edd..ce3762f203 100644 --- a/src/lib/config/module_spec.h +++ b/src/lib/config/module_spec.h @@ -1,4 +1,4 @@ -// Copyright (C) 2010 Internet Systems Consortium. +// Copyright (C) 2010, 2011 Internet Systems Consortium. // // Permission to use, copy, modify, and distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -71,6 +71,12 @@ namespace isc { namespace config { /// part of the specification isc::data::ConstElementPtr getConfigSpec() const; + /// Returns the statistics part of the specification as an + /// ElementPtr + /// \return ElementPtr Shared pointer to the statistics + /// part of the specification + isc::data::ConstElementPtr getStatisticsSpec() const; + /// Returns the full module specification as an ElementPtr /// \return ElementPtr Shared pointer to the specification isc::data::ConstElementPtr getFullSpec() const { @@ -95,6 +101,17 @@ namespace isc { namespace config { bool validateConfig(isc::data::ConstElementPtr data, const bool full = false) const; + // returns true if the given element conforms to this data + // statistics specification + /// Validates the given statistics data for this specification. + /// \param data The base \c Element of the data to check + /// \param full If true, all non-optional statistics parameters + /// must be specified. + /// \return true if the data conforms to the specification, + /// false otherwise. 
+ bool validateStatistics(isc::data::ConstElementPtr data, + const bool full = false) const; + /// Validates the arguments for the given command /// /// This checks the command and argument against the @@ -142,6 +159,10 @@ namespace isc { namespace config { bool validateConfig(isc::data::ConstElementPtr data, const bool full, isc::data::ElementPtr errors) const; + /// errors must be of type ListElement + bool validateStatistics(isc::data::ConstElementPtr data, const bool full, + isc::data::ElementPtr errors) const; + private: bool validateItem(isc::data::ConstElementPtr spec, isc::data::ConstElementPtr data, From 7bda7762ab9243404bbd0964908b3365cd052969 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Tue, 19 Jul 2011 21:02:12 +0900 Subject: [PATCH 107/175] [trac929] add unit tests of the methods using some dummy spec files with the dummy data of statistics specification - getStatisticsSpec() - validateStatistics() - validateStatistics() - check_format() add include of boost_foreach update the year of copyright --- src/lib/config/tests/module_spec_unittests.cc | 145 +++++++++++++++++- 1 file changed, 144 insertions(+), 1 deletion(-) diff --git a/src/lib/config/tests/module_spec_unittests.cc b/src/lib/config/tests/module_spec_unittests.cc index d642af8286..315a78df06 100644 --- a/src/lib/config/tests/module_spec_unittests.cc +++ b/src/lib/config/tests/module_spec_unittests.cc @@ -1,4 +1,4 @@ -// Copyright (C) 2009 Internet Systems Consortium, Inc. ("ISC") +// Copyright (C) 2009, 2011 Internet Systems Consortium, Inc. 
("ISC") // // Permission to use, copy, modify, and/or distribute this software for any // purpose with or without fee is hereby granted, provided that the above @@ -18,6 +18,8 @@ #include +#include + #include using namespace isc::data; @@ -57,6 +59,7 @@ TEST(ModuleSpec, ReadingSpecfiles) { dd = moduleSpecFromFile(specfile("spec2.spec")); EXPECT_EQ("[ { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\", \"command_name\": \"print_message\" }, { \"command_args\": [ ], \"command_description\": \"Shut down BIND 10\", \"command_name\": \"shutdown\" } ]", dd.getCommandsSpec()->str()); + EXPECT_EQ("[ { \"item_default\": \"1970-01-01T00:00:00Z\", \"item_description\": \"A dummy date time\", \"item_format\": \"date-time\", \"item_name\": \"dummy_time\", \"item_optional\": false, \"item_title\": \"Dummy Time\", \"item_type\": \"string\" } ]", dd.getStatisticsSpec()->str()); EXPECT_EQ("Spec2", dd.getModuleName()); EXPECT_EQ("", dd.getModuleDescription()); @@ -64,6 +67,11 @@ TEST(ModuleSpec, ReadingSpecfiles) { EXPECT_EQ("Spec25", dd.getModuleName()); EXPECT_EQ("Just an empty module", dd.getModuleDescription()); EXPECT_THROW(moduleSpecFromFile(specfile("spec26.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec34.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec35.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec36.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec37.spec")), ModuleSpecError); + EXPECT_THROW(moduleSpecFromFile(specfile("spec38.spec")), ModuleSpecError); std::ifstream file; file.open(specfile("spec1.spec").c_str()); @@ -71,6 +79,7 @@ TEST(ModuleSpec, ReadingSpecfiles) { EXPECT_EQ(dd.getFullSpec()->get("module_name") ->stringValue(), "Spec1"); EXPECT_TRUE(isNull(dd.getCommandsSpec())); + 
EXPECT_TRUE(isNull(dd.getStatisticsSpec())); std::ifstream file2; file2.open(specfile("spec8.spec").c_str()); @@ -114,6 +123,12 @@ TEST(ModuleSpec, SpecfileConfigData) { "commands is not a list of elements"); } +TEST(ModuleSpec, SpecfileStatistics) { + moduleSpecError("spec36.spec", "item_default not valid type of item_format"); + moduleSpecError("spec37.spec", "statistics is not a list of elements"); + moduleSpecError("spec38.spec", "item_default not valid type of item_format"); +} + TEST(ModuleSpec, SpecfileCommands) { moduleSpecError("spec17.spec", "command_name missing in { \"command_args\": [ { \"item_default\": \"\", \"item_name\": \"message\", \"item_optional\": false, \"item_type\": \"string\" } ], \"command_description\": \"Print the given message to stdout\" }"); @@ -136,6 +151,17 @@ dataTest(const ModuleSpec& dd, const std::string& data_file_name) { return (dd.validateConfig(data)); } +bool +statisticsTest(const ModuleSpec& dd, const std::string& data_file_name) { + std::ifstream data_file; + + data_file.open(specfile(data_file_name).c_str()); + ConstElementPtr data = Element::fromJSON(data_file, data_file_name); + data_file.close(); + + return (dd.validateStatistics(data)); +} + bool dataTestWithErrors(const ModuleSpec& dd, const std::string& data_file_name, ElementPtr errors) @@ -149,6 +175,19 @@ dataTestWithErrors(const ModuleSpec& dd, const std::string& data_file_name, return (dd.validateConfig(data, true, errors)); } +bool +statisticsTestWithErrors(const ModuleSpec& dd, const std::string& data_file_name, + ElementPtr errors) +{ + std::ifstream data_file; + + data_file.open(specfile(data_file_name).c_str()); + ConstElementPtr data = Element::fromJSON(data_file, data_file_name); + data_file.close(); + + return (dd.validateStatistics(data, true, errors)); +} + TEST(ModuleSpec, DataValidation) { ModuleSpec dd = moduleSpecFromFile(specfile("spec22.spec")); @@ -175,6 +214,17 @@ TEST(ModuleSpec, DataValidation) { EXPECT_EQ("[ \"Unknown item 
value_does_not_exist\" ]", errors->str()); } +TEST(ModuleSpec, StatisticsValidation) { + ModuleSpec dd = moduleSpecFromFile(specfile("spec33.spec")); + + EXPECT_TRUE(statisticsTest(dd, "data33_1.data")); + EXPECT_FALSE(statisticsTest(dd, "data33_2.data")); + + ElementPtr errors = Element::createList(); + EXPECT_FALSE(statisticsTestWithErrors(dd, "data33_2.data", errors)); + EXPECT_EQ("[ \"Format mismatch\", \"Format mismatch\", \"Format mismatch\" ]", errors->str()); +} + TEST(ModuleSpec, CommandValidation) { ModuleSpec dd = moduleSpecFromFile(specfile("spec2.spec")); ConstElementPtr arg = Element::fromJSON("{}"); @@ -220,3 +270,96 @@ TEST(ModuleSpec, NamedSetValidation) { EXPECT_FALSE(dataTest(dd, "data32_2.data")); EXPECT_FALSE(dataTest(dd, "data32_3.data")); } + +TEST(ModuleSpec, CheckFormat) { + + const std::string json_begin = "{ \"module_spec\": { \"module_name\": \"Foo\", \"statistics\": [ { \"item_name\": \"dummy_time\", \"item_type\": \"string\", \"item_optional\": true, \"item_title\": \"Dummy Time\", \"item_description\": \"A dummy date time\""; + const std::string json_end = " } ] } }"; + std::string item_default; + std::string item_format; + std::vector specs; + ConstElementPtr el; + + specs.clear(); + item_default = "\"item_default\": \"2011-05-27T19:42:57Z\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"2011-05-27\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"19:42:57Z\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_format); + item_default = ""; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_format); + item_default = ""; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + 
item_format); + + item_default = "\"item_default\": \"a\""; + specs.push_back("," + item_default); + item_default = "\"item_default\": \"b\""; + specs.push_back("," + item_default); + item_default = "\"item_default\": \"c\""; + specs.push_back("," + item_default); + + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_format); + + specs.push_back(""); + + BOOST_FOREACH(std::string s, specs) { + el = Element::fromJSON(json_begin + s + json_end)->get("module_spec"); + EXPECT_NO_THROW(ModuleSpec(el, true)); + } + + specs.clear(); + item_default = "\"item_default\": \"2011-05-27T19:42:57Z\","; + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"2011-05-27\","; + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"19:42:57Z\","; + item_format = "\"item_format\": \"dummy\""; + specs.push_back("," + item_default + item_format); + + item_default = "\"item_default\": \"2011-13-99T99:99:99Z\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"2011-13-99\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"99:99:99Z\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + item_default = "\"item_default\": \"1\","; + item_format = "\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"1\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"1\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + item_default = "\"item_default\": \"\","; + item_format = 
"\"item_format\": \"date-time\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + item_default = "\"item_default\": \"\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + + BOOST_FOREACH(std::string s, specs) { + el = Element::fromJSON(json_begin + s + json_end)->get("module_spec"); + EXPECT_THROW(ModuleSpec(el, true), ModuleSpecError); + } +} From 9b6993b6f6507fab1bc8956f727cca60c8c9243a Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Tue, 26 Jul 2011 18:04:43 +0900 Subject: [PATCH 108/175] [trac929] changed into the exact way of checking whether the value is "None" or not in the "if" branch as pointed out in the reviewing. --- src/lib/python/isc/config/module_spec.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/lib/python/isc/config/module_spec.py b/src/lib/python/isc/config/module_spec.py index 976be79dec..7f5aa00441 100644 --- a/src/lib/python/isc/config/module_spec.py +++ b/src/lib/python/isc/config/module_spec.py @@ -130,7 +130,7 @@ class ModuleSpec: non-default values). 
Also it checks 'item_format' in case of time""" stat_spec = self.get_statistics_spec() - if stat_spec: + if stat_spec != None: return _validate_spec_list(stat_spec, full, stat, errors) else: # no spec, always bad From 0f787178301c7cbf59fc7c516ebe920a33e22429 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Wed, 27 Jul 2011 18:32:06 +0200 Subject: [PATCH 109/175] [trac929] replace != None with is not None --- src/lib/python/isc/config/module_spec.py | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/src/lib/python/isc/config/module_spec.py b/src/lib/python/isc/config/module_spec.py index 7f5aa00441..d120080094 100644 --- a/src/lib/python/isc/config/module_spec.py +++ b/src/lib/python/isc/config/module_spec.py @@ -92,7 +92,7 @@ class ModuleSpec: return _validate_spec_list(data_def, full, data, errors) else: # no spec, always bad - if errors != None: + if errors is not None: errors.append("No config_data specification") return False @@ -130,11 +130,11 @@ class ModuleSpec: non-default values). 
Also it checks 'item_format' in case of time""" stat_spec = self.get_statistics_spec() - if stat_spec != None: + if stat_spec is not None: return _validate_spec_list(stat_spec, full, stat, errors) else: # no spec, always bad - if errors != None: + if errors is not None: errors.append("No statistics specification") return False @@ -341,27 +341,27 @@ def _validate_type(spec, value, errors): specification""" data_type = spec['item_type'] if data_type == "integer" and type(value) != int: - if errors != None: + if errors is not None: errors.append(str(value) + " should be an integer") return False elif data_type == "real" and type(value) != float: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a real") return False elif data_type == "boolean" and type(value) != bool: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a boolean") return False elif data_type == "string" and type(value) != str: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a string") return False elif data_type == "list" and type(value) != list: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a list") return False elif data_type == "map" and type(value) != dict: - if errors != None: + if errors is not None: errors.append(str(value) + " should be a map") return False elif data_type == "named_set" and type(value) != dict: @@ -377,7 +377,7 @@ def _validate_format(spec, value, errors): if "item_format" in spec: item_format = spec['item_format'] if not _check_format(value, item_format): - if errors != None: + if errors is not None: errors.append("format type of " + str(value) + " should be " + item_format) return False @@ -420,7 +420,7 @@ def _validate_spec(spec, full, data, errors): elif item_name in data: return _validate_item(spec, full, data[item_name], errors) elif full and not item_optional: - if errors != None: + if errors is not None: 
errors.append("non-optional item " + item_name + " missing") return False else: @@ -445,7 +445,7 @@ def _validate_spec_list(module_spec, full, data, errors): if spec_item["item_name"] == item_name: found = True if not found and item_name != "version": - if errors != None: + if errors is not None: errors.append("unknown item " + item_name) validated = False return validated From 1ddc6158f7544c95742757654863379fff847771 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 29 Jul 2011 19:59:08 +0900 Subject: [PATCH 110/175] [trac929] fix the invalid spec file for module_spec.py remove format_type undefined in module_spec.py from stats-schema.spec --- src/bin/stats/stats-schema.spec | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/src/bin/stats/stats-schema.spec b/src/bin/stats/stats-schema.spec index 37e9c1ae9a..52528657e8 100644 --- a/src/bin/stats/stats-schema.spec +++ b/src/bin/stats/stats-schema.spec @@ -54,8 +54,7 @@ "item_optional": false, "item_default": 0.0, "item_title": "stats.Timestamp", - "item_description": "A current time stamp since epoch time (1970-01-01T00:00:00Z)", - "item_format": "second" + "item_description": "A current time stamp since epoch time (1970-01-01T00:00:00Z)" }, { "item_name": "stats.lname", From 87e410c0061df72fe69fb47c7456ae54c609b219 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 29 Jul 2011 20:07:23 +0900 Subject: [PATCH 111/175] [trac929] implement some methods checking config_data into the mock of module_spec, but actually just copy such methods from the original module_spec into the mock. --- src/bin/stats/tests/isc/config/ccsession.py | 89 +++++++++++++++++++++ 1 file changed, 89 insertions(+) diff --git a/src/bin/stats/tests/isc/config/ccsession.py b/src/bin/stats/tests/isc/config/ccsession.py index a4e9c37c1f..50f7c1b163 100644 --- a/src/bin/stats/tests/isc/config/ccsession.py +++ b/src/bin/stats/tests/isc/config/ccsession.py @@ -23,6 +23,7 @@ external module. 
import json import os +import time from isc.cc.session import Session COMMAND_CONFIG_UPDATE = "config_update" @@ -72,6 +73,9 @@ class ModuleSpecError(Exception): class ModuleSpec: def __init__(self, module_spec, check = True): + # check only confi_data for testing + if check and "config_data" in module_spec: + _check_config_spec(module_spec["config_data"]) self._module_spec = module_spec def get_config_spec(self): @@ -83,6 +87,91 @@ class ModuleSpec: def get_module_name(self): return self._module_spec['module_name'] +def _check_config_spec(config_data): + # config data is a list of items represented by dicts that contain + # things like "item_name", depending on the type they can have + # specific subitems + """Checks a list that contains the configuration part of the + specification. Raises a ModuleSpecError if there is a + problem.""" + if type(config_data) != list: + raise ModuleSpecError("config_data is of type " + str(type(config_data)) + ", not a list of items") + for config_item in config_data: + _check_item_spec(config_item) + +def _check_item_spec(config_item): + """Checks the dict that defines one config item + (i.e. containing "item_name", "item_type", etc. 
+ Raises a ModuleSpecError if there is an error""" + if type(config_item) != dict: + raise ModuleSpecError("item spec not a dict") + if "item_name" not in config_item: + raise ModuleSpecError("no item_name in config item") + if type(config_item["item_name"]) != str: + raise ModuleSpecError("item_name is not a string: " + str(config_item["item_name"])) + item_name = config_item["item_name"] + if "item_type" not in config_item: + raise ModuleSpecError("no item_type in config item") + item_type = config_item["item_type"] + if type(item_type) != str: + raise ModuleSpecError("item_type in " + item_name + " is not a string: " + str(type(item_type))) + if item_type not in ["integer", "real", "boolean", "string", "list", "map", "any"]: + raise ModuleSpecError("unknown item_type in " + item_name + ": " + item_type) + if "item_optional" in config_item: + if type(config_item["item_optional"]) != bool: + raise ModuleSpecError("item_default in " + item_name + " is not a boolean") + if not config_item["item_optional"] and "item_default" not in config_item: + raise ModuleSpecError("no default value for non-optional item " + item_name) + else: + raise ModuleSpecError("item_optional not in item " + item_name) + if "item_default" in config_item: + item_default = config_item["item_default"] + if (item_type == "integer" and type(item_default) != int) or \ + (item_type == "real" and type(item_default) != float) or \ + (item_type == "boolean" and type(item_default) != bool) or \ + (item_type == "string" and type(item_default) != str) or \ + (item_type == "list" and type(item_default) != list) or \ + (item_type == "map" and type(item_default) != dict): + raise ModuleSpecError("Wrong type for item_default in " + item_name) + # TODO: once we have check_type, run the item default through that with the list|map_item_spec + if item_type == "list": + if "list_item_spec" not in config_item: + raise ModuleSpecError("no list_item_spec in list item " + item_name) + if 
type(config_item["list_item_spec"]) != dict: + raise ModuleSpecError("list_item_spec in " + item_name + " is not a dict") + _check_item_spec(config_item["list_item_spec"]) + if item_type == "map": + if "map_item_spec" not in config_item: + raise ModuleSpecError("no map_item_sepc in map item " + item_name) + if type(config_item["map_item_spec"]) != list: + raise ModuleSpecError("map_item_spec in " + item_name + " is not a list") + for map_item in config_item["map_item_spec"]: + if type(map_item) != dict: + raise ModuleSpecError("map_item_spec element is not a dict") + _check_item_spec(map_item) + if 'item_format' in config_item and 'item_default' in config_item: + item_format = config_item["item_format"] + item_default = config_item["item_default"] + if not _check_format(item_default, item_format): + raise ModuleSpecError( + "Wrong format for " + str(item_default) + " in " + str(item_name)) + +def _check_format(value, format_name): + """Check if specified value and format are correct. Return True if + is is correct.""" + # TODO: should be added other format types if necessary + time_formats = { 'date-time' : "%Y-%m-%dT%H:%M:%SZ", + 'date' : "%Y-%m-%d", + 'time' : "%H:%M:%S" } + for fmt in time_formats: + if format_name == fmt: + try: + time.strptime(value, time_formats[fmt]) + return True + except (ValueError, TypeError): + break + return False + class ModuleCCSessionError(Exception): pass From 93a7f7d1495795b731242e270b6dc76b1ad6b0dc Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 5 Aug 2011 14:22:28 +0900 Subject: [PATCH 112/175] [trac929] add more strict check for date and time format (add reverse check) --- src/lib/config/module_spec.cc | 9 ++++++++- src/lib/config/tests/module_spec_unittests.cc | 2 +- src/lib/python/isc/config/module_spec.py | 6 ++++-- 3 files changed, 13 insertions(+), 4 deletions(-) diff --git a/src/lib/config/module_spec.cc b/src/lib/config/module_spec.cc index eed6b72430..27cf993905 100644 --- a/src/lib/config/module_spec.cc +++ 
b/src/lib/config/module_spec.cc @@ -103,8 +103,15 @@ check_format(ConstElementPtr value, ConstElementPtr format_name) { BOOST_FOREACH (const format_types::value_type& f, time_formats) { if (format_name->stringValue() == f.first) { struct tm tm; + char buf[255] = ""; + memset(&tm, 0, sizeof(tm)); + // reverse check return (strptime(value->stringValue().c_str(), - f.second.c_str(), &tm) != NULL); + f.second.c_str(), &tm) != NULL + && strftime(buf, sizeof(buf), + f.second.c_str(), &tm) != 0 + && strcmp(value->stringValue().c_str(), + buf) == 0); } } return (false); diff --git a/src/lib/config/tests/module_spec_unittests.cc b/src/lib/config/tests/module_spec_unittests.cc index 315a78df06..cfd0ff5216 100644 --- a/src/lib/config/tests/module_spec_unittests.cc +++ b/src/lib/config/tests/module_spec_unittests.cc @@ -287,7 +287,7 @@ TEST(ModuleSpec, CheckFormat) { item_default = "\"item_default\": \"2011-05-27\","; item_format = "\"item_format\": \"date\""; specs.push_back("," + item_default + item_format); - item_default = "\"item_default\": \"19:42:57Z\","; + item_default = "\"item_default\": \"19:42:57\","; item_format = "\"item_format\": \"time\""; specs.push_back("," + item_default + item_format); diff --git a/src/lib/python/isc/config/module_spec.py b/src/lib/python/isc/config/module_spec.py index d120080094..b79f928237 100644 --- a/src/lib/python/isc/config/module_spec.py +++ b/src/lib/python/isc/config/module_spec.py @@ -330,8 +330,10 @@ def _check_format(value, format_name): for fmt in time_formats: if format_name == fmt: try: - time.strptime(value, time_formats[fmt]) - return True + # reverse check + return value == time.strftime( + time_formats[fmt], + time.strptime(value, time_formats[fmt])) except (ValueError, TypeError): break return False From 3e0a0e157bc2a1ca7ad9efb566755ec61eedd180 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Tue, 9 Aug 2011 15:53:56 +0900 Subject: [PATCH 113/175] [trac929] consideration for buffer overflow - use std::vector instead of 
char[] - use strncmp() instead of strcmp() - shorten length of char array for the buffer (not directly related to buffer overflow) add more unittests for some wrong type formats into both c++ and python codes (unittests for the previous change git e9620e0d9dd3d967bcfb99562f13848c70538a44) - date-time-type format not ending with "Z" - date-type format ending with "T" - time-type format ending with "Z" --- src/lib/config/module_spec.cc | 8 ++++---- src/lib/config/tests/module_spec_unittests.cc | 13 +++++++++++++ src/lib/python/isc/config/tests/module_spec_test.py | 6 ++++++ 3 files changed, 23 insertions(+), 4 deletions(-) diff --git a/src/lib/config/module_spec.cc b/src/lib/config/module_spec.cc index 27cf993905..bebe695023 100644 --- a/src/lib/config/module_spec.cc +++ b/src/lib/config/module_spec.cc @@ -103,15 +103,15 @@ check_format(ConstElementPtr value, ConstElementPtr format_name) { BOOST_FOREACH (const format_types::value_type& f, time_formats) { if (format_name->stringValue() == f.first) { struct tm tm; - char buf[255] = ""; + std::vector buf(32); memset(&tm, 0, sizeof(tm)); // reverse check return (strptime(value->stringValue().c_str(), f.second.c_str(), &tm) != NULL - && strftime(buf, sizeof(buf), + && strftime(&buf[0], buf.size(), f.second.c_str(), &tm) != 0 - && strcmp(value->stringValue().c_str(), - buf) == 0); + && strncmp(value->stringValue().c_str(), + &buf[0], buf.size()) == 0); } } return (false); diff --git a/src/lib/config/tests/module_spec_unittests.cc b/src/lib/config/tests/module_spec_unittests.cc index cfd0ff5216..b2ca7b45f4 100644 --- a/src/lib/config/tests/module_spec_unittests.cc +++ b/src/lib/config/tests/module_spec_unittests.cc @@ -358,6 +358,19 @@ TEST(ModuleSpec, CheckFormat) { item_format = "\"item_format\": \"time\""; specs.push_back("," + item_default + item_format); + // wrong date-time-type format not ending with "Z" + item_default = "\"item_default\": \"2011-05-27T19:42:57\","; + item_format = "\"item_format\": \"date-time\""; + 
specs.push_back("," + item_default + item_format); + // wrong date-type format ending with "T" + item_default = "\"item_default\": \"2011-05-27T\","; + item_format = "\"item_format\": \"date\""; + specs.push_back("," + item_default + item_format); + // wrong time-type format ending with "Z" + item_default = "\"item_default\": \"19:42:57Z\","; + item_format = "\"item_format\": \"time\""; + specs.push_back("," + item_default + item_format); + BOOST_FOREACH(std::string s, specs) { el = Element::fromJSON(json_begin + s + json_end)->get("module_spec"); EXPECT_THROW(ModuleSpec(el, true), ModuleSpecError); diff --git a/src/lib/python/isc/config/tests/module_spec_test.py b/src/lib/python/isc/config/tests/module_spec_test.py index 567cfd4945..fc53d23221 100644 --- a/src/lib/python/isc/config/tests/module_spec_test.py +++ b/src/lib/python/isc/config/tests/module_spec_test.py @@ -352,6 +352,12 @@ class TestModuleSpec(unittest.TestCase): self.assertFalse(isc.config.module_spec._check_format('', 'date-time')) self.assertFalse(isc.config.module_spec._check_format(None, 'date-time')) self.assertFalse(isc.config.module_spec._check_format(None, None)) + # wrong date-time-type format not ending with "Z" + self.assertFalse(isc.config.module_spec._check_format('2011-05-27T19:42:57', 'date-time')) + # wrong date-type format ending with "T" + self.assertFalse(isc.config.module_spec._check_format('2011-05-27T', 'date')) + # wrong time-type format ending with "Z" + self.assertFalse(isc.config.module_spec._check_format('19:42:57Z', 'time')) def test_validate_type(self): errors = [] From 326885a3f98c49a848a67dc48db693b8bcc7b508 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 15:55:55 +0900 Subject: [PATCH 114/175] [trac930] remove unneeded specfile "stats-schema.spec" --- src/bin/stats/Makefile.am | 4 +- src/bin/stats/stats-schema.spec | 86 --------------------------------- 2 files changed, 2 insertions(+), 88 deletions(-) delete mode 100644 src/bin/stats/stats-schema.spec 
diff --git a/src/bin/stats/Makefile.am b/src/bin/stats/Makefile.am index e830f65d60..49cadad4c9 100644 --- a/src/bin/stats/Makefile.am +++ b/src/bin/stats/Makefile.am @@ -5,7 +5,7 @@ pkglibexecdir = $(libexecdir)/@PACKAGE@ pkglibexec_SCRIPTS = b10-stats b10-stats-httpd b10_statsdir = $(pkgdatadir) -b10_stats_DATA = stats.spec stats-httpd.spec stats-schema.spec +b10_stats_DATA = stats.spec stats-httpd.spec b10_stats_DATA += stats-httpd-xml.tpl stats-httpd-xsd.tpl stats-httpd-xsl.tpl pyexec_DATA = stats_messages.py stats_httpd_messages.py @@ -16,7 +16,7 @@ CLEANFILES += stats_httpd_messages.py stats_httpd_messages.pyc man_MANS = b10-stats.8 b10-stats-httpd.8 EXTRA_DIST = $(man_MANS) b10-stats.xml b10-stats-httpd.xml -EXTRA_DIST += stats.spec stats-httpd.spec stats-schema.spec +EXTRA_DIST += stats.spec stats-httpd.spec EXTRA_DIST += stats-httpd-xml.tpl stats-httpd-xsd.tpl stats-httpd-xsl.tpl EXTRA_DIST += stats_messages.mes stats_httpd_messages.mes diff --git a/src/bin/stats/stats-schema.spec b/src/bin/stats/stats-schema.spec deleted file mode 100644 index 52528657e8..0000000000 --- a/src/bin/stats/stats-schema.spec +++ /dev/null @@ -1,86 +0,0 @@ -{ - "module_spec": { - "module_name": "Stats", - "module_description": "Statistics data schema", - "config_data": [ - { - "item_name": "report_time", - "item_type": "string", - "item_optional": false, - "item_default": "1970-01-01T00:00:00Z", - "item_title": "Report time", - "item_description": "A date time when stats module reports", - "item_format": "date-time" - }, - { - "item_name": "bind10.boot_time", - "item_type": "string", - "item_optional": false, - "item_default": "1970-01-01T00:00:00Z", - "item_title": "bind10.BootTime", - "item_description": "A date time when bind10 process starts initially", - "item_format": "date-time" - }, - { - "item_name": "stats.boot_time", - "item_type": "string", - "item_optional": false, - "item_default": "1970-01-01T00:00:00Z", - "item_title": "stats.BootTime", - "item_description": "A 
date time when the stats module starts initially or when the stats module restarts", - "item_format": "date-time" - }, - { - "item_name": "stats.start_time", - "item_type": "string", - "item_optional": false, - "item_default": "1970-01-01T00:00:00Z", - "item_title": "stats.StartTime", - "item_description": "A date time when the stats module starts collecting data or resetting values last time", - "item_format": "date-time" - }, - { - "item_name": "stats.last_update_time", - "item_type": "string", - "item_optional": false, - "item_default": "1970-01-01T00:00:00Z", - "item_title": "stats.LastUpdateTime", - "item_description": "The latest date time when the stats module receives from other modules like auth server or boss process and so on", - "item_format": "date-time" - }, - { - "item_name": "stats.timestamp", - "item_type": "real", - "item_optional": false, - "item_default": 0.0, - "item_title": "stats.Timestamp", - "item_description": "A current time stamp since epoch time (1970-01-01T00:00:00Z)" - }, - { - "item_name": "stats.lname", - "item_type": "string", - "item_optional": false, - "item_default": "", - "item_title": "stats.LocalName", - "item_description": "A localname of stats module given via CC protocol" - }, - { - "item_name": "auth.queries.tcp", - "item_type": "integer", - "item_optional": false, - "item_default": 0, - "item_title": "auth.queries.tcp", - "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially" - }, - { - "item_name": "auth.queries.udp", - "item_type": "integer", - "item_optional": false, - "item_default": 0, - "item_title": "auth.queries.udp", - "item_description": "A number of total query counts which all auth servers receive over UDP since they started initially" - } - ], - "commands": [] - } -} From 1768e822df82943f075ebed023b72d225b3b0216 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 15:57:41 +0900 Subject: [PATCH 115/175] [trac930] remove unneeded 
mockups, fake modules and dummy data --- configure.ac | 7 - src/bin/stats/tests/fake_select.py | 43 ---- src/bin/stats/tests/fake_socket.py | 70 ------ src/bin/stats/tests/fake_time.py | 47 ---- src/bin/stats/tests/http/Makefile.am | 6 - src/bin/stats/tests/http/__init__.py | 0 src/bin/stats/tests/http/server.py | 96 ------- src/bin/stats/tests/isc/Makefile.am | 8 - src/bin/stats/tests/isc/__init__.py | 0 src/bin/stats/tests/isc/cc/Makefile.am | 7 - src/bin/stats/tests/isc/cc/__init__.py | 1 - src/bin/stats/tests/isc/cc/session.py | 148 ----------- src/bin/stats/tests/isc/config/Makefile.am | 7 - src/bin/stats/tests/isc/config/__init__.py | 1 - src/bin/stats/tests/isc/config/ccsession.py | 249 ------------------- src/bin/stats/tests/isc/log/Makefile.am | 7 - src/bin/stats/tests/isc/log/__init__.py | 33 --- src/bin/stats/tests/isc/util/Makefile.am | 7 - src/bin/stats/tests/isc/util/__init__.py | 0 src/bin/stats/tests/isc/util/process.py | 21 -- src/bin/stats/tests/testdata/Makefile.am | 1 - src/bin/stats/tests/testdata/stats_test.spec | 19 -- 22 files changed, 778 deletions(-) delete mode 100644 src/bin/stats/tests/fake_select.py delete mode 100644 src/bin/stats/tests/fake_socket.py delete mode 100644 src/bin/stats/tests/fake_time.py delete mode 100644 src/bin/stats/tests/http/Makefile.am delete mode 100644 src/bin/stats/tests/http/__init__.py delete mode 100644 src/bin/stats/tests/http/server.py delete mode 100644 src/bin/stats/tests/isc/Makefile.am delete mode 100644 src/bin/stats/tests/isc/__init__.py delete mode 100644 src/bin/stats/tests/isc/cc/Makefile.am delete mode 100644 src/bin/stats/tests/isc/cc/__init__.py delete mode 100644 src/bin/stats/tests/isc/cc/session.py delete mode 100644 src/bin/stats/tests/isc/config/Makefile.am delete mode 100644 src/bin/stats/tests/isc/config/__init__.py delete mode 100644 src/bin/stats/tests/isc/config/ccsession.py delete mode 100644 src/bin/stats/tests/isc/log/Makefile.am delete mode 100644 
src/bin/stats/tests/isc/log/__init__.py delete mode 100644 src/bin/stats/tests/isc/util/Makefile.am delete mode 100644 src/bin/stats/tests/isc/util/__init__.py delete mode 100644 src/bin/stats/tests/isc/util/process.py delete mode 100644 src/bin/stats/tests/testdata/Makefile.am delete mode 100644 src/bin/stats/tests/testdata/stats_test.spec diff --git a/configure.ac b/configure.ac index 6e129b6093..ee990eb412 100644 --- a/configure.ac +++ b/configure.ac @@ -801,13 +801,6 @@ AC_CONFIG_FILES([Makefile src/bin/zonemgr/tests/Makefile src/bin/stats/Makefile src/bin/stats/tests/Makefile - src/bin/stats/tests/isc/Makefile - src/bin/stats/tests/isc/cc/Makefile - src/bin/stats/tests/isc/config/Makefile - src/bin/stats/tests/isc/util/Makefile - src/bin/stats/tests/isc/log/Makefile - src/bin/stats/tests/testdata/Makefile - src/bin/stats/tests/http/Makefile src/bin/usermgr/Makefile src/bin/tests/Makefile src/lib/Makefile diff --git a/src/bin/stats/tests/fake_select.py b/src/bin/stats/tests/fake_select.py deleted file mode 100644 index ca0ca82619..0000000000 --- a/src/bin/stats/tests/fake_select.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (C) 2011 Internet Systems Consortium. -# -# Permission to use, copy, modify, and distribute this software for any -# purpose with or without fee is hereby granted, provided that the above -# copyright notice and this permission notice appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM -# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL -# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, -# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING -# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, -# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION -# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -""" -A mock-up module of select - -*** NOTE *** -It is only for testing stats_httpd module and not reusable for -external module. -""" - -import fake_socket -import errno - -class error(Exception): - pass - -def select(rlst, wlst, xlst, timeout): - if type(timeout) != int and type(timeout) != float: - raise TypeError("Error: %s must be integer or float" - % timeout.__class__.__name__) - for s in rlst + wlst + xlst: - if type(s) != fake_socket.socket: - raise TypeError("Error: %s must be a dummy socket" - % s.__class__.__name__) - s._called = s._called + 1 - if s._called > 3: - raise error("Something is happened!") - elif s._called > 2: - raise error(errno.EINTR) - return (rlst, wlst, xlst) diff --git a/src/bin/stats/tests/fake_socket.py b/src/bin/stats/tests/fake_socket.py deleted file mode 100644 index 4e3a4581a5..0000000000 --- a/src/bin/stats/tests/fake_socket.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (C) 2011 Internet Systems Consortium. -# -# Permission to use, copy, modify, and distribute this software for any -# purpose with or without fee is hereby granted, provided that the above -# copyright notice and this permission notice appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM -# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL -# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, -# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING -# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, -# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION -# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -""" -A mock-up module of socket - -*** NOTE *** -It is only for testing stats_httpd module and not reusable for -external module. 
-""" - -import re - -AF_INET = 'AF_INET' -AF_INET6 = 'AF_INET6' -_ADDRFAMILY = AF_INET -has_ipv6 = True -_CLOSED = False - -class gaierror(Exception): - pass - -class error(Exception): - pass - -class socket: - - def __init__(self, family=None): - if family is None: - self.address_family = _ADDRFAMILY - else: - self.address_family = family - self._closed = _CLOSED - if self._closed: - raise error('socket is already closed!') - self._called = 0 - - def close(self): - self._closed = True - - def fileno(self): - return id(self) - - def bind(self, server_class): - (self.server_address, self.server_port) = server_class - if self.address_family not in set([AF_INET, AF_INET6]): - raise error("Address family not supported by protocol: %s" % self.address_family) - if self.address_family == AF_INET6 and not has_ipv6: - raise error("Address family not supported in this machine: %s has_ipv6: %s" - % (self.address_family, str(has_ipv6))) - if self.address_family == AF_INET and re.search(':', self.server_address) is not None: - raise gaierror("Address family for hostname not supported : %s %s" % (self.server_address, self.address_family)) - if self.address_family == AF_INET6 and re.search(':', self.server_address) is None: - raise error("Cannot assign requested address : %s" % str(self.server_address)) - if type(self.server_port) is not int: - raise TypeError("an integer is required: %s" % str(self.server_port)) - if self.server_port < 0 or self.server_port > 65535: - raise OverflowError("port number must be 0-65535.: %s" % str(self.server_port)) diff --git a/src/bin/stats/tests/fake_time.py b/src/bin/stats/tests/fake_time.py deleted file mode 100644 index 65e02371d6..0000000000 --- a/src/bin/stats/tests/fake_time.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (C) 2010 Internet Systems Consortium. 
-# -# Permission to use, copy, modify, and distribute this software for any -# purpose with or without fee is hereby granted, provided that the above -# copyright notice and this permission notice appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM -# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL -# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, -# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING -# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, -# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION -# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -__version__ = "$Revision$" - -# This is a dummy time class against a Python standard time class. -# It is just testing use only. -# Other methods which time class has is not implemented. -# (This class isn't orderloaded for time class.) - -# These variables are constant. These are example. 
-_TEST_TIME_SECS = 1283364938.229088 -_TEST_TIME_STRF = '2010-09-01T18:15:38Z' - -def time(): - """ - This is a dummy time() method against time.time() - """ - # return float constant value - return _TEST_TIME_SECS - -def gmtime(): - """ - This is a dummy gmtime() method against time.gmtime() - """ - # always return nothing - return None - -def strftime(*arg): - """ - This is a dummy gmtime() method against time.gmtime() - """ - return _TEST_TIME_STRF - - diff --git a/src/bin/stats/tests/http/Makefile.am b/src/bin/stats/tests/http/Makefile.am deleted file mode 100644 index 79263a98b4..0000000000 --- a/src/bin/stats/tests/http/Makefile.am +++ /dev/null @@ -1,6 +0,0 @@ -EXTRA_DIST = __init__.py server.py -CLEANFILES = __init__.pyc server.pyc -CLEANDIRS = __pycache__ - -clean-local: - rm -rf $(CLEANDIRS) diff --git a/src/bin/stats/tests/http/__init__.py b/src/bin/stats/tests/http/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/src/bin/stats/tests/http/server.py b/src/bin/stats/tests/http/server.py deleted file mode 100644 index 70ed6faa30..0000000000 --- a/src/bin/stats/tests/http/server.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (C) 2011 Internet Systems Consortium. -# -# Permission to use, copy, modify, and distribute this software for any -# purpose with or without fee is hereby granted, provided that the above -# copyright notice and this permission notice appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM -# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL -# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, -# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING -# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, -# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION -# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -""" -A mock-up module of http.server - -*** NOTE *** -It is only for testing stats_httpd module and not reusable for -external module. -""" - -import fake_socket - -class DummyHttpResponse: - def __init__(self, path): - self.path = path - self.headers={} - self.log = "" - - def _write_log(self, msg): - self.log = self.log + msg - -class HTTPServer: - """ - A mock-up class of http.server.HTTPServer - """ - address_family = fake_socket.AF_INET - def __init__(self, server_class, handler_class): - self.socket = fake_socket.socket(self.address_family) - self.server_class = server_class - self.socket.bind(self.server_class) - self._handler = handler_class(None, None, self) - - def handle_request(self): - pass - - def server_close(self): - self.socket.close() - -class BaseHTTPRequestHandler: - """ - A mock-up class of http.server.BaseHTTPRequestHandler - """ - - def __init__(self, request, client_address, server): - self.path = "/path/to" - self.headers = {} - self.server = server - self.response = DummyHttpResponse(path=self.path) - self.response.write = self._write - self.wfile = self.response - - def send_response(self, code=0): - if self.path != self.response.path: - self.response = DummyHttpResponse(path=self.path) - self.response.code = code - - def send_header(self, key, value): - if self.path != self.response.path: - self.response = DummyHttpResponse(path=self.path) - self.response.headers[key] = value - - def end_headers(self): - if self.path != self.response.path: - self.response = DummyHttpResponse(path=self.path) - self.response.wrote_headers = True - - def send_error(self, code, message=None): - if self.path != self.response.path: - self.response = DummyHttpResponse(path=self.path) - self.response.code = code - self.response.body = message - - def address_string(self): - return 'dummyhost' - - def log_date_time_string(self): - return '[DD/MM/YYYY HH:MI:SS]' - - def _write(self, obj): - if self.path != self.response.path: - self.response = 
DummyHttpResponse(path=self.path) - self.response.body = obj.decode() - diff --git a/src/bin/stats/tests/isc/Makefile.am b/src/bin/stats/tests/isc/Makefile.am deleted file mode 100644 index d31395d404..0000000000 --- a/src/bin/stats/tests/isc/Makefile.am +++ /dev/null @@ -1,8 +0,0 @@ -SUBDIRS = cc config util log -EXTRA_DIST = __init__.py -CLEANFILES = __init__.pyc - -CLEANDIRS = __pycache__ - -clean-local: - rm -rf $(CLEANDIRS) diff --git a/src/bin/stats/tests/isc/__init__.py b/src/bin/stats/tests/isc/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/src/bin/stats/tests/isc/cc/Makefile.am b/src/bin/stats/tests/isc/cc/Makefile.am deleted file mode 100644 index 67323b5f1b..0000000000 --- a/src/bin/stats/tests/isc/cc/Makefile.am +++ /dev/null @@ -1,7 +0,0 @@ -EXTRA_DIST = __init__.py session.py -CLEANFILES = __init__.pyc session.pyc - -CLEANDIRS = __pycache__ - -clean-local: - rm -rf $(CLEANDIRS) diff --git a/src/bin/stats/tests/isc/cc/__init__.py b/src/bin/stats/tests/isc/cc/__init__.py deleted file mode 100644 index 9a3eaf6185..0000000000 --- a/src/bin/stats/tests/isc/cc/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from isc.cc.session import * diff --git a/src/bin/stats/tests/isc/cc/session.py b/src/bin/stats/tests/isc/cc/session.py deleted file mode 100644 index e16d6a9abc..0000000000 --- a/src/bin/stats/tests/isc/cc/session.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (C) 2010,2011 Internet Systems Consortium. -# -# Permission to use, copy, modify, and distribute this software for any -# purpose with or without fee is hereby granted, provided that the above -# copyright notice and this permission notice appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM -# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL -# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, -# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING -# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, -# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION -# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -""" -A mock-up module of isc.cc.session - -*** NOTE *** -It is only for testing stats_httpd module and not reusable for -external module. -""" - -import sys -import fake_socket - -# set a dummy lname -_TEST_LNAME = '123abc@xxxx' - -class Queue(): - def __init__(self, msg=None, env={}): - self.msg = msg - self.env = env - - def dump(self): - return { 'msg': self.msg, 'env': self.env } - -class SessionError(Exception): - pass - -class SessionTimeout(Exception): - pass - -class Session: - def __init__(self, socket_file=None, verbose=False): - self._lname = _TEST_LNAME - self.message_queue = [] - self.old_message_queue = [] - try: - self._socket = fake_socket.socket() - except fake_socket.error as se: - raise SessionError(se) - self.verbose = verbose - - @property - def lname(self): - return self._lname - - def close(self): - self._socket.close() - - def _clear_queues(self): - while len(self.message_queue) > 0: - self.dequeue() - - def _next_sequence(self, que=None): - return len(self.message_queue) - - def enqueue(self, msg=None, env={}): - if self._socket._closed: - raise SessionError("Session has been closed.") - seq = self._next_sequence() - env.update({"seq": 0}) # fixed here - que = Queue(msg=msg, env=env) - self.message_queue.append(que) - if self.verbose: - sys.stdout.write("[Session] enqueue: " + str(que.dump()) + "\n") - return seq - - def dequeue(self): - if self._socket._closed: - raise SessionError("Session has been closed.") - que = None - try: - que = self.message_queue.pop(0) # always pop at index 0 - self.old_message_queue.append(que) - except IndexError: - que = Queue() - if self.verbose: - 
sys.stdout.write("[Session] dequeue: " + str(que.dump()) + "\n") - return que - - def get_queue(self, seq=None): - if self._socket._closed: - raise SessionError("Session has been closed.") - if seq is None: - seq = len(self.message_queue) - 1 - que = None - try: - que = self.message_queue[seq] - except IndexError: - raise IndexError - que = Queue() - if self.verbose: - sys.stdout.write("[Session] get_queue: " + str(que.dump()) + "\n") - return que - - def group_sendmsg(self, msg, group, instance="*", to="*"): - return self.enqueue(msg=msg, env={ - "type": "send", - "from": self._lname, - "to": to, - "group": group, - "instance": instance }) - - def group_recvmsg(self, nonblock=True, seq=0): - que = self.dequeue() - return que.msg, que.env - - def group_reply(self, routing, msg): - return self.enqueue(msg=msg, env={ - "type": "send", - "from": self._lname, - "to": routing["from"], - "group": routing["group"], - "instance": routing["instance"], - "reply": routing["seq"] }) - - def get_message(self, group, to='*'): - if self._socket._closed: - raise SessionError("Session has been closed.") - que = Queue() - for q in self.message_queue: - if q.env['group'] == group: - self.message_queue.remove(q) - self.old_message_queue.append(q) - que = q - if self.verbose: - sys.stdout.write("[Session] get_message: " + str(que.dump()) + "\n") - return q.msg - - def group_subscribe(self, group, instance = "*"): - if self._socket._closed: - raise SessionError("Session has been closed.") - - def group_unsubscribe(self, group, instance = "*"): - if self._socket._closed: - raise SessionError("Session has been closed.") diff --git a/src/bin/stats/tests/isc/config/Makefile.am b/src/bin/stats/tests/isc/config/Makefile.am deleted file mode 100644 index ffbecdae03..0000000000 --- a/src/bin/stats/tests/isc/config/Makefile.am +++ /dev/null @@ -1,7 +0,0 @@ -EXTRA_DIST = __init__.py ccsession.py -CLEANFILES = __init__.pyc ccsession.pyc - -CLEANDIRS = __pycache__ - -clean-local: - rm -rf 
$(CLEANDIRS) diff --git a/src/bin/stats/tests/isc/config/__init__.py b/src/bin/stats/tests/isc/config/__init__.py deleted file mode 100644 index 4c49e956aa..0000000000 --- a/src/bin/stats/tests/isc/config/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from isc.config.ccsession import * diff --git a/src/bin/stats/tests/isc/config/ccsession.py b/src/bin/stats/tests/isc/config/ccsession.py deleted file mode 100644 index 50f7c1b163..0000000000 --- a/src/bin/stats/tests/isc/config/ccsession.py +++ /dev/null @@ -1,249 +0,0 @@ -# Copyright (C) 2010,2011 Internet Systems Consortium. -# -# Permission to use, copy, modify, and distribute this software for any -# purpose with or without fee is hereby granted, provided that the above -# copyright notice and this permission notice appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM -# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL -# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, -# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING -# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, -# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION -# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -""" -A mock-up module of isc.cc.session - -*** NOTE *** -It is only for testing stats_httpd module and not reusable for -external module. 
-""" - -import json -import os -import time -from isc.cc.session import Session - -COMMAND_CONFIG_UPDATE = "config_update" - -def parse_answer(msg): - assert 'result' in msg - try: - return msg['result'][0], msg['result'][1] - except IndexError: - return msg['result'][0], None - -def create_answer(rcode, arg = None): - if arg is None: - return { 'result': [ rcode ] } - else: - return { 'result': [ rcode, arg ] } - -def parse_command(msg): - assert 'command' in msg - try: - return msg['command'][0], msg['command'][1] - except IndexError: - return msg['command'][0], None - -def create_command(command_name, params = None): - if params is None: - return {"command": [command_name]} - else: - return {"command": [command_name, params]} - -def module_spec_from_file(spec_file, check = True): - try: - file = open(spec_file) - json_str = file.read() - module_spec = json.loads(json_str) - file.close() - return ModuleSpec(module_spec['module_spec'], check) - except IOError as ioe: - raise ModuleSpecError("JSON read error: " + str(ioe)) - except ValueError as ve: - raise ModuleSpecError("JSON parse error: " + str(ve)) - except KeyError as err: - raise ModuleSpecError("Data definition has no module_spec element") - -class ModuleSpecError(Exception): - pass - -class ModuleSpec: - def __init__(self, module_spec, check = True): - # check only confi_data for testing - if check and "config_data" in module_spec: - _check_config_spec(module_spec["config_data"]) - self._module_spec = module_spec - - def get_config_spec(self): - return self._module_spec['config_data'] - - def get_commands_spec(self): - return self._module_spec['commands'] - - def get_module_name(self): - return self._module_spec['module_name'] - -def _check_config_spec(config_data): - # config data is a list of items represented by dicts that contain - # things like "item_name", depending on the type they can have - # specific subitems - """Checks a list that contains the configuration part of the - specification. 
Raises a ModuleSpecError if there is a - problem.""" - if type(config_data) != list: - raise ModuleSpecError("config_data is of type " + str(type(config_data)) + ", not a list of items") - for config_item in config_data: - _check_item_spec(config_item) - -def _check_item_spec(config_item): - """Checks the dict that defines one config item - (i.e. containing "item_name", "item_type", etc. - Raises a ModuleSpecError if there is an error""" - if type(config_item) != dict: - raise ModuleSpecError("item spec not a dict") - if "item_name" not in config_item: - raise ModuleSpecError("no item_name in config item") - if type(config_item["item_name"]) != str: - raise ModuleSpecError("item_name is not a string: " + str(config_item["item_name"])) - item_name = config_item["item_name"] - if "item_type" not in config_item: - raise ModuleSpecError("no item_type in config item") - item_type = config_item["item_type"] - if type(item_type) != str: - raise ModuleSpecError("item_type in " + item_name + " is not a string: " + str(type(item_type))) - if item_type not in ["integer", "real", "boolean", "string", "list", "map", "any"]: - raise ModuleSpecError("unknown item_type in " + item_name + ": " + item_type) - if "item_optional" in config_item: - if type(config_item["item_optional"]) != bool: - raise ModuleSpecError("item_default in " + item_name + " is not a boolean") - if not config_item["item_optional"] and "item_default" not in config_item: - raise ModuleSpecError("no default value for non-optional item " + item_name) - else: - raise ModuleSpecError("item_optional not in item " + item_name) - if "item_default" in config_item: - item_default = config_item["item_default"] - if (item_type == "integer" and type(item_default) != int) or \ - (item_type == "real" and type(item_default) != float) or \ - (item_type == "boolean" and type(item_default) != bool) or \ - (item_type == "string" and type(item_default) != str) or \ - (item_type == "list" and type(item_default) != list) or \ - 
(item_type == "map" and type(item_default) != dict): - raise ModuleSpecError("Wrong type for item_default in " + item_name) - # TODO: once we have check_type, run the item default through that with the list|map_item_spec - if item_type == "list": - if "list_item_spec" not in config_item: - raise ModuleSpecError("no list_item_spec in list item " + item_name) - if type(config_item["list_item_spec"]) != dict: - raise ModuleSpecError("list_item_spec in " + item_name + " is not a dict") - _check_item_spec(config_item["list_item_spec"]) - if item_type == "map": - if "map_item_spec" not in config_item: - raise ModuleSpecError("no map_item_sepc in map item " + item_name) - if type(config_item["map_item_spec"]) != list: - raise ModuleSpecError("map_item_spec in " + item_name + " is not a list") - for map_item in config_item["map_item_spec"]: - if type(map_item) != dict: - raise ModuleSpecError("map_item_spec element is not a dict") - _check_item_spec(map_item) - if 'item_format' in config_item and 'item_default' in config_item: - item_format = config_item["item_format"] - item_default = config_item["item_default"] - if not _check_format(item_default, item_format): - raise ModuleSpecError( - "Wrong format for " + str(item_default) + " in " + str(item_name)) - -def _check_format(value, format_name): - """Check if specified value and format are correct. 
Return True if - is is correct.""" - # TODO: should be added other format types if necessary - time_formats = { 'date-time' : "%Y-%m-%dT%H:%M:%SZ", - 'date' : "%Y-%m-%d", - 'time' : "%H:%M:%S" } - for fmt in time_formats: - if format_name == fmt: - try: - time.strptime(value, time_formats[fmt]) - return True - except (ValueError, TypeError): - break - return False - -class ModuleCCSessionError(Exception): - pass - -class DataNotFoundError(Exception): - pass - -class ConfigData: - def __init__(self, specification): - self.specification = specification - - def get_value(self, identifier): - """Returns a tuple where the first item is the value at the - given identifier, and the second item is absolutely False - even if the value is an unset default or not. Raises an - DataNotFoundError if the identifier is not found in the - specification file. - *** NOTE *** - There are some differences from the original method. This - method never handles local settings like the original - method. But these different behaviors aren't so big issues - for a mock-up method of stats_httpd because stats_httpd - calls this method at only first.""" - for config_map in self.get_module_spec().get_config_spec(): - if config_map['item_name'] == identifier: - if 'item_default' in config_map: - return config_map['item_default'], False - raise DataNotFoundError("item_name %s is not found in the specfile" % identifier) - - def get_module_spec(self): - return self.specification - -class ModuleCCSession(ConfigData): - def __init__(self, spec_file_name, config_handler, command_handler, cc_session = None): - module_spec = module_spec_from_file(spec_file_name) - ConfigData.__init__(self, module_spec) - self._module_name = module_spec.get_module_name() - self.set_config_handler(config_handler) - self.set_command_handler(command_handler) - if not cc_session: - self._session = Session(verbose=True) - else: - self._session = cc_session - - def start(self): - pass - - def close(self): - 
self._session.close() - - def check_command(self, nonblock=True): - msg, env = self._session.group_recvmsg(nonblock) - if not msg or 'result' in msg: - return - cmd, arg = parse_command(msg) - answer = None - if cmd == COMMAND_CONFIG_UPDATE and self._config_handler: - answer = self._config_handler(arg) - elif env['group'] == self._module_name and self._command_handler: - answer = self._command_handler(cmd, arg) - if answer: - self._session.group_reply(env, answer) - - def set_config_handler(self, config_handler): - self._config_handler = config_handler - # should we run this right now since we've changed the handler? - - def set_command_handler(self, command_handler): - self._command_handler = command_handler - - def get_module_spec(self): - return self.specification - - def get_socket(self): - return self._session._socket - diff --git a/src/bin/stats/tests/isc/log/Makefile.am b/src/bin/stats/tests/isc/log/Makefile.am deleted file mode 100644 index 457b9de1c2..0000000000 --- a/src/bin/stats/tests/isc/log/Makefile.am +++ /dev/null @@ -1,7 +0,0 @@ -EXTRA_DIST = __init__.py -CLEANFILES = __init__.pyc - -CLEANDIRS = __pycache__ - -clean-local: - rm -rf $(CLEANDIRS) diff --git a/src/bin/stats/tests/isc/log/__init__.py b/src/bin/stats/tests/isc/log/__init__.py deleted file mode 100644 index 641cf790c1..0000000000 --- a/src/bin/stats/tests/isc/log/__init__.py +++ /dev/null @@ -1,33 +0,0 @@ -# Copyright (C) 2011 Internet Systems Consortium. -# -# Permission to use, copy, modify, and distribute this software for any -# purpose with or without fee is hereby granted, provided that the above -# copyright notice and this permission notice appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM -# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL -# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, -# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING -# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, -# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION -# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -# This file is not installed. The log.so is installed into the right place. -# It is only to find it in the .libs directory when we run as a test or -# from the build directory. -# But as nobody gives us the builddir explicitly (and we can't use generation -# from .in file, as it would put us into the builddir and we wouldn't be found) -# we guess from current directory. Any idea for something better? This should -# be enough for the tests, but would it work for B10_FROM_SOURCE as well? -# Should we look there? Or define something in bind10_config? - -import os -import sys - -for base in sys.path[:]: - loglibdir = os.path.join(base, 'isc/log/.libs') - if os.path.exists(loglibdir): - sys.path.insert(0, loglibdir) - -from log import * diff --git a/src/bin/stats/tests/isc/util/Makefile.am b/src/bin/stats/tests/isc/util/Makefile.am deleted file mode 100644 index 9c74354ca3..0000000000 --- a/src/bin/stats/tests/isc/util/Makefile.am +++ /dev/null @@ -1,7 +0,0 @@ -EXTRA_DIST = __init__.py process.py -CLEANFILES = __init__.pyc process.pyc - -CLEANDIRS = __pycache__ - -clean-local: - rm -rf $(CLEANDIRS) diff --git a/src/bin/stats/tests/isc/util/__init__.py b/src/bin/stats/tests/isc/util/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/src/bin/stats/tests/isc/util/process.py b/src/bin/stats/tests/isc/util/process.py deleted file mode 100644 index 0f764c1872..0000000000 --- a/src/bin/stats/tests/isc/util/process.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (C) 2010 Internet Systems Consortium. 
-# -# Permission to use, copy, modify, and distribute this software for any -# purpose with or without fee is hereby granted, provided that the above -# copyright notice and this permission notice appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM -# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL -# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, -# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING -# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, -# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION -# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -""" -A dummy function of isc.util.process.rename() -""" - -def rename(name=None): - pass diff --git a/src/bin/stats/tests/testdata/Makefile.am b/src/bin/stats/tests/testdata/Makefile.am deleted file mode 100644 index 1b8df6d736..0000000000 --- a/src/bin/stats/tests/testdata/Makefile.am +++ /dev/null @@ -1 +0,0 @@ -EXTRA_DIST = stats_test.spec diff --git a/src/bin/stats/tests/testdata/stats_test.spec b/src/bin/stats/tests/testdata/stats_test.spec deleted file mode 100644 index 8136756440..0000000000 --- a/src/bin/stats/tests/testdata/stats_test.spec +++ /dev/null @@ -1,19 +0,0 @@ -{ - "module_spec": { - "module_name": "Stats", - "module_description": "Stats daemon", - "config_data": [], - "commands": [ - { - "command_name": "status", - "command_description": "identify whether stats module is alive or not", - "command_args": [] - }, - { - "command_name": "the_dummy", - "command_description": "this is for testing", - "command_args": [] - } - ] - } -} From 1aa728ddf691657611680385c920e3a7bd5fee12 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 16:00:30 +0900 Subject: [PATCH 116/175] [trac930] add utilities and mock-up modules for unittests of statistics modules and change some environ variables 
(PYTHONPATH, CONFIG_TESTDATA_PATH) in Makefile test_utils.py internally calls msgq, cfgmgr and some mock modules with threads, to match the real situation as closely as possible. --- src/bin/stats/tests/Makefile.am | 8 +- src/bin/stats/tests/test_utils.py | 293 ++++++++++++++++++++++++++++++ 2 files changed, 297 insertions(+), 4 deletions(-) create mode 100644 src/bin/stats/tests/test_utils.py diff --git a/src/bin/stats/tests/Makefile.am b/src/bin/stats/tests/Makefile.am index dad6c48bbc..19f6117334 100644 --- a/src/bin/stats/tests/Makefile.am +++ b/src/bin/stats/tests/Makefile.am @@ -1,8 +1,7 @@ -SUBDIRS = isc http testdata PYCOVERAGE_RUN = @PYCOVERAGE_RUN@ PYTESTS = b10-stats_test.py b10-stats-httpd_test.py -EXTRA_DIST = $(PYTESTS) fake_time.py fake_socket.py fake_select.py -CLEANFILES = fake_time.pyc fake_socket.pyc fake_select.pyc +EXTRA_DIST = $(PYTESTS) test_utils.py +CLEANFILES = test_utils.pyc # If necessary (rare cases), explicitly specify paths to dynamic libraries # required by loadable python modules. 
@@ -21,8 +20,9 @@ endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/bin/stats:$(abs_top_builddir)/src/bin/stats/tests \ + env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/bin/stats:$(abs_top_builddir)/src/bin/stats/tests:$(abs_top_builddir)/src/bin/msgq:$(abs_top_builddir)/src/lib/python/isc/config \ B10_FROM_SOURCE=$(abs_top_srcdir) \ + CONFIG_TESTDATA_PATH=$(abs_top_srcdir)/src/lib/config/tests/testdata \ $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ done diff --git a/src/bin/stats/tests/test_utils.py b/src/bin/stats/tests/test_utils.py new file mode 100644 index 0000000000..bd23182d2c --- /dev/null +++ b/src/bin/stats/tests/test_utils.py @@ -0,0 +1,293 @@ +""" +Utilities and mock modules for unittests of statistics modules + +""" +import os +import io +import time +import sys +import threading +import tempfile + +import msgq +import isc.config.cfgmgr +import stats +import stats_httpd + +# TODO: consider appropriate timeout seconds +TIMEOUT_SEC = 0.01 + +def send_command(command_name, module_name, params=None, session=None, nonblock=False, timeout=TIMEOUT_SEC*2): + if not session: + cc_session = isc.cc.Session() + else: + cc_session = session + orig_timeout = cc_session.get_timeout() + cc_session.set_timeout(timeout * 1000) + command = isc.config.ccsession.create_command(command_name, params) + seq = cc_session.group_sendmsg(command, module_name) + try: + (answer, env) = cc_session.group_recvmsg(nonblock, seq) + if answer: + return isc.config.ccsession.parse_answer(answer) + except isc.cc.SessionTimeout: + pass + finally: + if not session: + cc_session.close() + else: + cc_session.set_timeout(orig_timeout) + +def send_shutdown(module_name): + return send_command("shutdown", module_name) + +class ThreadingServerManager: + def __init__(self, 
server_class, verbose): + self.server_class = server_class + self.server_class_name = server_class.__name__ + self.verbose = verbose + self.server = self.server_class(self.verbose) + self.server._thread = threading.Thread( + name=self.server_class_name, target=self.server.run) + self.server._thread.daemon = True + + def run(self): + self.server._thread.start() + self.server._started.wait() + + def shutdown(self): + self.server.shutdown() + self.server._thread.join(TIMEOUT_SEC) + +class MockMsgq: + def __init__(self, verbose): + self.verbose = verbose + self._started = threading.Event() + self.msgq = msgq.MsgQ(None, verbose) + result = self.msgq.setup() + if result: + sys.exit("Error on Msgq startup: %s" % result) + + def run(self): + self._started.set() + try: + self.msgq.run() + except Exception: + pass + finally: + self.shutdown() + + def shutdown(self): + self.msgq.shutdown() + +class MockCfgmgr: + def __init__(self, verbose): + self._started = threading.Event() + self.cfgmgr = isc.config.cfgmgr.ConfigManager( + os.environ['CONFIG_TESTDATA_PATH'], "b10-config.db") + self.cfgmgr.read_config() + + def run(self): + self._started.set() + try: + self.cfgmgr.run() + finally: + self.shutdown() + + def shutdown(self): + self.cfgmgr.running = False + +class MockBoss: + spec_str = """\ +{ + "module_spec": { + "module_name": "Boss", + "module_description": "Mock Master process", + "config_data": [], + "commands": [ + { + "command_name": "sendstats", + "command_description": "Send data to a statistics module at once", + "command_args": [] + } + ], + "statistics": [ + { + "item_name": "boot_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Boot time", + "item_description": "A date time when bind10 process starts initially", + "item_format": "date-time" + } + ] + } +} +""" + _BASETIME = (2011, 6, 22, 8, 14, 8, 2, 173, 0) + + def __init__(self, verbose): + self.verbose = verbose + self._started = 
threading.Event() + self.running = False + self.spec_file = io.StringIO(self.spec_str) + # create ModuleCCSession object + self.mccs = isc.config.ModuleCCSession( + self.spec_file, + self.config_handler, + self.command_handler) + self.spec_file.close() + self.cc_session = self.mccs._session + self.got_command_name = '' + + def run(self): + self.mccs.start() + self.running = True + self._started.set() + while self.running: + self.mccs.check_command(False) + + def shutdown(self): + self.running = False + + def config_handler(self, new_config): + return isc.config.create_answer(0) + + def command_handler(self, command, *args, **kwargs): + self.got_command_name = command + if command == 'sendstats': + params = { "owner": "Boss", + "data": { + 'boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', self._BASETIME) + } + } + return send_command("set", "Stats", params=params, session=self.cc_session) + return isc.config.create_answer(1, "Unknown Command") + +class MockAuth: + spec_str = """\ +{ + "module_spec": { + "module_name": "Auth", + "module_description": "Mock Authoritative service", + "config_data": [], + "commands": [ + { + "command_name": "sendstats", + "command_description": "Send data to a statistics module at once", + "command_args": [] + } + ], + "statistics": [ + { + "item_name": "queries.tcp", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "Queries TCP ", + "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially" + }, + { + "item_name": "queries.udp", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "Queries UDP", + "item_description": "A number of total query counts which all auth servers receive over UDP since they started initially" + } + ] + } +} +""" + def __init__(self, verbose): + self.verbose = verbose + self._started = threading.Event() + self.running = False + self.spec_file = io.StringIO(self.spec_str) + # 
create ModuleCCSession object + self.mccs = isc.config.ModuleCCSession( + self.spec_file, + self.config_handler, + self.command_handler) + self.spec_file.close() + self.cc_session = self.mccs._session + self.got_command_name = '' + self.queries_tcp = 3 + self.queries_udp = 2 + + def run(self): + self.mccs.start() + self.running = True + self._started.set() + while self.running: + self.mccs.check_command(False) + + def shutdown(self): + self.running = False + + def config_handler(self, new_config): + return isc.config.create_answer(0) + + def command_handler(self, command, *args, **kwargs): + self.got_command_name = command + if command == 'sendstats': + params = { "owner": "Auth", + "data": { 'queries.tcp': self.queries_tcp, + 'queries.udp': self.queries_udp } } + return send_command("set", "Stats", params=params, session=self.cc_session) + return isc.config.create_answer(1, "Unknown Command") + +class MyStats(stats.Stats): + def __init__(self, verbose): + self._started = threading.Event() + stats.Stats.__init__(self, verbose) + + def run(self): + self._started.set() + stats.Stats.start(self) + + def shutdown(self): + send_shutdown("Stats") + +class MyStatsHttpd(stats_httpd.StatsHttpd): + def __init__(self, verbose): + self._started = threading.Event() + stats_httpd.StatsHttpd.__init__(self, verbose) + + def run(self): + self._started.set() + stats_httpd.StatsHttpd.start(self) + + def shutdown(self): + send_shutdown("StatsHttpd") + +class BaseModules: + def __init__(self, verbose): + self.verbose = verbose + self.class_name = BaseModules.__name__ + + # Change value of BIND10_MSGQ_SOCKET_FILE in environment variables + os.environ['BIND10_MSGQ_SOCKET_FILE'] = tempfile.mktemp(prefix='unix_socket.') + # MockMsgq + self.msgq = ThreadingServerManager(MockMsgq, self.verbose) + self.msgq.run() + # MockCfgmgr + self.cfgmgr = ThreadingServerManager(MockCfgmgr, self.verbose) + self.cfgmgr.run() + # MockBoss + self.boss = ThreadingServerManager(MockBoss, self.verbose) + 
self.boss.run() + # MockAuth + self.auth = ThreadingServerManager(MockAuth, self.verbose) + self.auth.run() + + def shutdown(self): + # MockAuth + self.auth.shutdown() + # MockBoss + self.boss.shutdown() + # MockCfgmgr + self.cfgmgr.shutdown() + # MockMsgq + self.msgq.shutdown() From d4078d52343247b07c47370b497927a3a47a4f9a Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 16:12:09 +0900 Subject: [PATCH 117/175] [trac930] remove descriptions about "stats-schema.spec" and add description about new features because stats module can be requested to show statistics data schema. --- src/bin/stats/b10-stats-httpd.8 | 6 +----- src/bin/stats/b10-stats-httpd.xml | 8 +------- src/bin/stats/b10-stats.8 | 4 ---- src/bin/stats/b10-stats.xml | 6 ------ 4 files changed, 2 insertions(+), 22 deletions(-) diff --git a/src/bin/stats/b10-stats-httpd.8 b/src/bin/stats/b10-stats-httpd.8 index ed4aafa6c6..1206e1d791 100644 --- a/src/bin/stats/b10-stats-httpd.8 +++ b/src/bin/stats/b10-stats-httpd.8 @@ -36,7 +36,7 @@ b10-stats-httpd \- BIND 10 HTTP server for HTTP/XML interface of statistics .PP \fBb10\-stats\-httpd\fR -is a standalone HTTP server\&. It is intended for HTTP/XML interface for statistics module\&. This server process runs as a process separated from the process of the BIND 10 Stats daemon (\fBb10\-stats\fR)\&. The server is initially executed by the BIND 10 boss process (\fBbind10\fR) and eventually exited by it\&. The server is intended to be server requests by HTTP clients like web browsers and third\-party modules\&. When the server is asked, it requests BIND 10 statistics data from +is a standalone HTTP server\&. It is intended for HTTP/XML interface for statistics module\&. This server process runs as a process separated from the process of the BIND 10 Stats daemon (\fBb10\-stats\fR)\&. The server is initially executed by the BIND 10 boss process (\fBbind10\fR) and eventually exited by it\&. 
The server is intended to serve requests from HTTP clients like web browsers and third\-party modules\&. When the server is asked, it requests BIND 10 statistics data or its schema from +\fBb10\-stats\fR, and it sends the data back in Python dictionary format and the server converts it into XML format\&. The server sends it to the HTTP client\&. The server can send three types of document, which are XML (Extensible Markup Language), XSD (XML Schema definition) and XSL (Extensible Stylesheet Language)\&. The XML document is the statistics data of BIND 10, the XSD document is the data schema of it, and the XSL document is the style sheet to be shown for the web browsers\&. There is a different URL for each document\&. But please note that you would be redirected to the URL of XML document if you request the URL of the root document\&. For example, you would be redirected to http://127\&.0\&.0\&.1:8000/bind10/statistics/xml if you request http://127\&.0\&.0\&.1:8000/\&. Please see the manual and the spec file of \fBb10\-stats\fR for more details about the items of BIND 10 statistics\&. The server uses CC session in communication with @@ -66,10 +66,6 @@ bindctl(1)\&. Please see the manual of bindctl(1) about how to configure the settings\&. .PP -/usr/local/share/bind10\-devel/stats\-schema\&.spec -\(em This is a spec file for data schema of of BIND 10 statistics\&. This schema cannot be configured via -bindctl(1)\&. -.PP /usr/local/share/bind10\-devel/stats\-httpd\-xml\&.tpl \(em the template file of XML document\&. diff --git a/src/bin/stats/b10-stats-httpd.xml b/src/bin/stats/b10-stats-httpd.xml index 34c704f509..3636d9d543 100644 --- a/src/bin/stats/b10-stats-httpd.xml +++ b/src/bin/stats/b10-stats-httpd.xml @@ -57,7 +57,7 @@ by the BIND 10 boss process (bind10) and eventually exited by it. The server is intended to serve requests from HTTP clients like web browsers and third-party modules. 
When the server is - asked, it requests BIND 10 statistics data from + asked, it requests BIND 10 statistics data or its schema from b10-stats, and it sends the data back in Python dictionary format and the server converts it into XML format. The server sends it to the HTTP client. The server can send three types of document, @@ -112,12 +112,6 @@ of bindctl1 about how to configure the settings.
- /usr/local/share/bind10-devel/stats-schema.spec - - — This is a spec file for data schema of - of BIND 10 statistics. This schema cannot be configured - via bindctl1. - /usr/local/share/bind10-devel/stats-httpd-xml.tpl diff --git a/src/bin/stats/b10-stats.8 b/src/bin/stats/b10-stats.8 index f69e4d37fa..2c75cbcc0e 100644 --- a/src/bin/stats/b10-stats.8 +++ b/src/bin/stats/b10-stats.8 @@ -66,10 +66,6 @@ switches to verbose mode\&. It sends verbose messages to STDOUT\&. \fBb10\-stats\fR\&. It contains commands for \fBb10\-stats\fR\&. They can be invoked via bindctl(1)\&. -.PP -/usr/local/share/bind10\-devel/stats\-schema\&.spec -\(em This is a spec file for data schema of of BIND 10 statistics\&. This schema cannot be configured via -bindctl(1)\&. .SH "SEE ALSO" .PP diff --git a/src/bin/stats/b10-stats.xml b/src/bin/stats/b10-stats.xml index f0c472dd29..bd2400a2d5 100644 --- a/src/bin/stats/b10-stats.xml +++ b/src/bin/stats/b10-stats.xml @@ -95,12 +95,6 @@ invoked via bindctl1. - /usr/local/share/bind10-devel/stats-schema.spec - - — This is a spec file for data schema of - of BIND 10 statistics. This schema cannot be configured - via bindctl1. - From 9261da8717a433cf20218af08d3642fbeffb7d4b Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 16:13:17 +0900 Subject: [PATCH 118/175] [trac930] add a column "Owner" in the table tag --- src/bin/stats/stats-httpd-xsl.tpl | 1 + 1 file changed, 1 insertion(+) diff --git a/src/bin/stats/stats-httpd-xsl.tpl b/src/bin/stats/stats-httpd-xsl.tpl index 01ffdc681b..a1f6406a5a 100644 --- a/src/bin/stats/stats-httpd-xsl.tpl +++ b/src/bin/stats/stats-httpd-xsl.tpl @@ -44,6 +44,7 @@ td.title {

BIND 10 Statistics

+ From c19a295eb4125b4d2a391de65972271002412258 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 16:18:38 +0900 Subject: [PATCH 119/175] [trac930] remove description about removing statistics data by stats module; update example format in bindctl when the show command of the stats module is invoked --- doc/guide/bind10-guide.html | 30 ++++++++++++++++++------------ doc/guide/bind10-guide.xml | 32 ++++++++++++++++++++------------ 2 files changed, 38 insertions(+), 24 deletions(-) diff --git a/doc/guide/bind10-guide.html b/doc/guide/bind10-guide.html index 5754cf001e..4415d42550 100644 --- a/doc/guide/bind10-guide.html +++ b/doc/guide/bind10-guide.html @@ -664,24 +664,30 @@ This may be a temporary setting until then.

- This stats daemon provides commands to identify if it is running, - show specified or all statistics data, set values, remove data, - and reset data. + This stats daemon provides commands to identify if it is + running, show specified or all statistics data, show specified + or all statistics data schema, and set specified statistics + data. For example, using bindctl:

 > Stats show
 {
-    "auth.queries.tcp": 1749,
-    "auth.queries.udp": 867868,
-    "bind10.boot_time": "2011-01-20T16:59:03Z",
-    "report_time": "2011-01-20T17:04:06Z",
-    "stats.boot_time": "2011-01-20T16:59:05Z",
-    "stats.last_update_time": "2011-01-20T17:04:05Z",
-    "stats.lname": "4d3869d9_a@jreed.example.net",
-    "stats.start_time": "2011-01-20T16:59:05Z",
-    "stats.timestamp": 1295543046.823504
+    "Auth": {
+        "queries.tcp": 1749,
+        "queries.udp": 867868
+    },
+    "Boss": {
+        "boot_time": "2011-01-20T16:59:03Z"
+    },
+    "Stats": {
+        "boot_time": "2011-01-20T16:59:05Z",
+        "last_update_time": "2011-01-20T17:04:05Z",
+        "lname": "4d3869d9_a@jreed.example.net",
+        "report_time": "2011-01-20T17:04:06Z",
+        "timestamp": 1295543046.823504
+    }
 }
        

Chapter 14. Logging

diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index ef66f3d3fb..4883bb0a29 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1453,24 +1453,32 @@ then change those defaults with config set Resolver/forward_addresses[0]/address - This stats daemon provides commands to identify if it is running, - show specified or all statistics data, set values, remove data, - and reset data. + + This stats daemon provides commands to identify if it is + running, show specified or all statistics data, show specified + or all statistics data schema, and set specified statistics + data. For example, using bindctl: + > Stats show { - "auth.queries.tcp": 1749, - "auth.queries.udp": 867868, - "bind10.boot_time": "2011-01-20T16:59:03Z", - "report_time": "2011-01-20T17:04:06Z", - "stats.boot_time": "2011-01-20T16:59:05Z", - "stats.last_update_time": "2011-01-20T17:04:05Z", - "stats.lname": "4d3869d9_a@jreed.example.net", - "stats.start_time": "2011-01-20T16:59:05Z", - "stats.timestamp": 1295543046.823504 + "Auth": { + "queries.tcp": 1749, + "queries.udp": 867868 + }, + "Boss": { + "boot_time": "2011-01-20T16:59:03Z" + }, + "Stats": { + "boot_time": "2011-01-20T16:59:05Z", + "last_update_time": "2011-01-20T17:04:05Z", + "lname": "4d3869d9_a@jreed.example.net", + "report_time": "2011-01-20T17:04:06Z", + "timestamp": 1295543046.823504 + } } From 0b235902f38d611606d44661506f32baf266fdda Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 16:21:49 +0900 Subject: [PATCH 120/175] [trac930] update argument name and argument format of set command in auth module and boss module and also update related unittests of their modules --- src/bin/auth/statistics.cc | 7 ++++--- src/bin/auth/tests/statistics_unittest.cc | 8 +++++--- src/bin/bind10/bind10_src.py.in | 7 ++++--- src/bin/bind10/tests/bind10_test.py.in | 5 +++-- 4 files changed, 16 insertions(+), 11 deletions(-) diff --git a/src/bin/auth/statistics.cc b/src/bin/auth/statistics.cc 
index 76e50074fc..444fb8b35b 100644 --- a/src/bin/auth/statistics.cc +++ b/src/bin/auth/statistics.cc @@ -67,10 +67,11 @@ AuthCountersImpl::submitStatistics() const { } std::stringstream statistics_string; statistics_string << "{\"command\": [\"set\"," - << "{ \"stats_data\": " - << "{ \"auth.queries.udp\": " + << "{ \"owner\": \"Auth\"," + << " \"data\":" + << "{ \"queries.udp\": " << counters_.at(AuthCounters::COUNTER_UDP_QUERY) - << ", \"auth.queries.tcp\": " + << ", \"queries.tcp\": " << counters_.at(AuthCounters::COUNTER_TCP_QUERY) << " }" << "}" diff --git a/src/bin/auth/tests/statistics_unittest.cc b/src/bin/auth/tests/statistics_unittest.cc index 9a3dded837..cd2755b110 100644 --- a/src/bin/auth/tests/statistics_unittest.cc +++ b/src/bin/auth/tests/statistics_unittest.cc @@ -201,12 +201,14 @@ TEST_F(AuthCountersTest, submitStatistics) { // Command is "set". EXPECT_EQ("set", statistics_session_.sent_msg->get("command") ->get(0)->stringValue()); + EXPECT_EQ("Auth", statistics_session_.sent_msg->get("command") + ->get(1)->get("owner")->stringValue()); ConstElementPtr statistics_data = statistics_session_.sent_msg ->get("command")->get(1) - ->get("stats_data"); + ->get("data"); // UDP query counter is 2 and TCP query counter is 1. - EXPECT_EQ(2, statistics_data->get("auth.queries.udp")->intValue()); - EXPECT_EQ(1, statistics_data->get("auth.queries.tcp")->intValue()); + EXPECT_EQ(2, statistics_data->get("queries.udp")->intValue()); + EXPECT_EQ(1, statistics_data->get("queries.tcp")->intValue()); } } diff --git a/src/bin/bind10/bind10_src.py.in b/src/bin/bind10/bind10_src.py.in index b497f7c922..5189802c27 100755 --- a/src/bin/bind10/bind10_src.py.in +++ b/src/bin/bind10/bind10_src.py.in @@ -85,7 +85,7 @@ isc.util.process.rename(sys.argv[0]) # number, and the overall BIND 10 version number (set in configure.ac). 
VERSION = "bind10 20110223 (BIND 10 @PACKAGE_VERSION@)" -# This is for bind10.boottime of stats module +# This is for boot_time of Boss _BASETIME = time.gmtime() class RestartSchedule: @@ -319,8 +319,9 @@ class BoB: elif command == "sendstats": # send statistics data to the stats daemon immediately cmd = isc.config.ccsession.create_command( - 'set', { "stats_data": { - 'bind10.boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', _BASETIME) + 'set', { "owner": "Boss", + "data": { + 'boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', _BASETIME) }}) seq = self.cc_session.group_sendmsg(cmd, 'Stats') self.cc_session.group_recvmsg(True, seq) diff --git a/src/bin/bind10/tests/bind10_test.py.in b/src/bin/bind10/tests/bind10_test.py.in index 077190c865..dc1d6603c4 100644 --- a/src/bin/bind10/tests/bind10_test.py.in +++ b/src/bin/bind10/tests/bind10_test.py.in @@ -153,8 +153,9 @@ class TestBoB(unittest.TestCase): self.assertEqual(bob.cc_session.group, "Stats") self.assertEqual(bob.cc_session.msg, isc.config.ccsession.create_command( - 'set', { "stats_data": { - 'bind10.boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', _BASETIME) + "set", { "owner": "Boss", + "data": { + "boot_time": time.strftime("%Y-%m-%dT%H:%M:%SZ", _BASETIME) }})) # "ping" command self.assertEqual(bob.command_handler("ping", None), From e7b4337aeaa760947e8e7906e64077ad7aaadc66 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 16:33:59 +0900 Subject: [PATCH 121/175] [trac930] update spec file of stats module - update description of status command, shutdown command and show command - change argument of show command (Owner module name of statistics data can be specified) - change argument of set command (Owner module name of statistics data is always required) - add showschema command which shows statistics data schema of each module specified) - disabled reset command and remove command --- src/bin/stats/stats.spec | 75 +++++++++++++++++++++++++--------------- 1 file changed, 47 insertions(+), 28 
deletions(-) diff --git a/src/bin/stats/stats.spec b/src/bin/stats/stats.spec index 635eb486a1..e716b62279 100644 --- a/src/bin/stats/stats.spec +++ b/src/bin/stats/stats.spec @@ -6,18 +6,51 @@ "commands": [ { "command_name": "status", - "command_description": "identify whether stats module is alive or not", + "command_description": "Show status of the stats daemon", + "command_args": [] + }, + { + "command_name": "shutdown", + "command_description": "Shut down the stats module", "command_args": [] }, { "command_name": "show", - "command_description": "show the specified/all statistics data", + "command_description": "Show the specified/all statistics data", "command_args": [ { - "item_name": "stats_item_name", + "item_name": "owner", "item_type": "string", "item_optional": true, - "item_default": "" + "item_default": "", + "item_description": "module name of the owner of the statistics data" + }, + { + "item_name": "name", + "item_type": "string", + "item_optional": true, + "item_default": "", + "item_description": "statistics item name of the owner" + } + ] + }, + { + "command_name": "showschema", + "command_description": "Show the specified/all statistics schema", + "command_args": [ + { + "item_name": "owner", + "item_type": "string", + "item_optional": true, + "item_default": "", + "item_description": "module name of the owner of the statistics data" + }, + { + "item_name": "name", + "item_type": "string", + "item_optional": true, + "item_default": "", + "item_description": "statistics item name of the owner" } ] }, @@ -26,35 +59,21 @@ "command_description": "set the value of specified name in statistics data", "command_args": [ { - "item_name": "stats_data", + "item_name": "owner", + "item_type": "string", + "item_optional": false, + "item_default": "", + "item_description": "module name of the owner of the statistics data" + }, + { + "item_name": "data", "item_type": "map", "item_optional": false, "item_default": {}, + "item_description": "statistics data set 
of the owner", "map_item_spec": [] } ] - }, - { - "command_name": "remove", - "command_description": "remove the specified name from statistics data", - "command_args": [ - { - "item_name": "stats_item_name", - "item_type": "string", - "item_optional": false, - "item_default": "" - } - ] - }, - { - "command_name": "reset", - "command_description": "reset all statistics data to default values except for several constant names", - "command_args": [] - }, - { - "command_name": "shutdown", - "command_description": "Shut down the stats module", - "command_args": [] } ], "statistics": [ @@ -100,7 +119,7 @@ "item_default": "", "item_title": "Local Name", "item_description": "A localname of stats module given via CC protocol" - } + } ] } } From daa1d6dd07292142d3dec5928583b0ab1da89adf Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 19:40:15 +0900 Subject: [PATCH 122/175] [trac930] - remove "stats-schema.spec" setting and getting statistics data schema via this spec file - add "version" item in DEFAULT_CONFIG - get the address family by socket.getaddrinfo function with specified server_address in advance, and create HttpServer object once, instead of creating double HttpServer objects for IPv6 and IPv4 in the prior code (This aims to avoid failing to close sockets that were already opened.) 
- open HTTP port in start method - avoid calling config_handler recursively in the except statement - create XML, XSD, XSL documents after getting statistics data and schema from remote stats module via CC session - always close the template file object once it has been opened --- src/bin/stats/stats_httpd.py.in | 227 +++++++++++++++++--------------- 1 file changed, 120 insertions(+), 107 deletions(-) diff --git a/src/bin/stats/stats_httpd.py.in b/src/bin/stats/stats_httpd.py.in index 74298cf288..cc9c6045c2 100755 --- a/src/bin/stats/stats_httpd.py.in +++ b/src/bin/stats/stats_httpd.py.in @@ -57,7 +57,6 @@ else: BASE_LOCATION = "@datadir@" + os.sep + "@PACKAGE@" BASE_LOCATION = BASE_LOCATION.replace("${datarootdir}", DATAROOTDIR).replace("${prefix}", PREFIX) SPECFILE_LOCATION = BASE_LOCATION + os.sep + "stats-httpd.spec" -SCHEMA_SPECFILE_LOCATION = BASE_LOCATION + os.sep + "stats-schema.spec" XML_TEMPLATE_LOCATION = BASE_LOCATION + os.sep + "stats-httpd-xml.tpl" XSD_TEMPLATE_LOCATION = BASE_LOCATION + os.sep + "stats-httpd-xsd.tpl" XSL_TEMPLATE_LOCATION = BASE_LOCATION + os.sep + "stats-httpd-xsl.tpl" @@ -69,7 +68,7 @@ XSD_URL_PATH = '/bind10/statistics/xsd' XSL_URL_PATH = '/bind10/statistics/xsl' # TODO: This should be considered later. 
XSD_NAMESPACE = 'http://bind10.isc.org' + XSD_URL_PATH -DEFAULT_CONFIG = dict(listen_on=[('127.0.0.1', 8000)]) +DEFAULT_CONFIG = dict(version=0, listen_on=[('127.0.0.1', 8000)]) # Assign this process name isc.util.process.rename() @@ -161,8 +160,6 @@ class StatsHttpd: self.httpd = [] self.open_mccs() self.load_config() - self.load_templates() - self.open_httpd() def open_mccs(self): """Opens a ModuleCCSession object""" @@ -171,10 +168,6 @@ class StatsHttpd: self.mccs = isc.config.ModuleCCSession( SPECFILE_LOCATION, self.config_handler, self.command_handler) self.cc_session = self.mccs._session - # read spec file of stats module and subscribe 'Stats' - self.stats_module_spec = isc.config.module_spec_from_file(SCHEMA_SPECFILE_LOCATION) - self.stats_config_spec = self.stats_module_spec.get_config_spec() - self.stats_module_name = self.stats_module_spec.get_module_name() def close_mccs(self): """Closes a ModuleCCSession object""" @@ -208,45 +201,41 @@ class StatsHttpd: for addr in self.http_addrs: self.httpd.append(self._open_httpd(addr)) - def _open_httpd(self, server_address, address_family=None): + def _open_httpd(self, server_address): + httpd = None try: - # try IPv6 at first - if address_family is not None: - HttpServer.address_family = address_family - elif socket.has_ipv6: - HttpServer.address_family = socket.AF_INET6 + # get address family for the server_address before + # creating HttpServer object + address_family = socket.getaddrinfo(*server_address)[0][0] + HttpServer.address_family = address_family httpd = HttpServer( server_address, HttpHandler, self.xml_handler, self.xsd_handler, self.xsl_handler, self.write_log) - except (socket.gaierror, socket.error, - OverflowError, TypeError) as err: - # try IPv4 next - if HttpServer.address_family == socket.AF_INET6: - httpd = self._open_httpd(server_address, socket.AF_INET) - else: - raise HttpServerError( - "Invalid address %s, port %s: %s: %s" % - (server_address[0], server_address[1], - err.__class__.__name__, 
err)) - else: logger.info(STATHTTPD_STARTED, server_address[0], server_address[1]) - return httpd + return httpd + except (socket.gaierror, socket.error, + OverflowError, TypeError) as err: + if httpd: + httpd.server_close() + raise HttpServerError( + "Invalid address %s, port %s: %s: %s" % + (server_address[0], server_address[1], + err.__class__.__name__, err)) def close_httpd(self): """Closes sockets for HTTP""" - if len(self.httpd) == 0: - return - for ht in self.httpd: + while len(self.httpd)>0: + ht = self.httpd.pop() logger.info(STATHTTPD_CLOSING, ht.server_address[0], ht.server_address[1]) ht.server_close() - self.httpd = [] def start(self): """Starts StatsHttpd objects to run. Waiting for client requests by using select.select functions""" + self.open_httpd() self.mccs.start() self.running = True while self.running: @@ -310,7 +299,8 @@ class StatsHttpd: except HttpServerError as err: logger.error(STATHTTPD_SERVER_ERROR, err) # restore old config - self.config_handler(old_config) + self.load_config(old_config) + self.open_httpd() return isc.config.ccsession.create_answer( 1, "[b10-stats-httpd] %s" % err) else: @@ -341,8 +331,7 @@ class StatsHttpd: the data which obtains from it""" try: seq = self.cc_session.group_sendmsg( - isc.config.ccsession.create_command('show'), - self.stats_module_name) + isc.config.ccsession.create_command('show'), 'Stats') (answer, env) = self.cc_session.group_recvmsg(False, seq) if answer: (rcode, value) = isc.config.ccsession.parse_answer(answer) @@ -357,34 +346,82 @@ class StatsHttpd: raise StatsHttpdError("Stats module: %s" % str(value)) def get_stats_spec(self): - """Just returns spec data""" - return self.stats_config_spec + """Requests statistics data to the Stats daemon and returns + the data which obtains from it""" + try: + seq = self.cc_session.group_sendmsg( + isc.config.ccsession.create_command('showschema'), 'Stats') + (answer, env) = self.cc_session.group_recvmsg(False, seq) + if answer: + (rcode, value) = 
isc.config.ccsession.parse_answer(answer) + if rcode == 0: + return value + else: + raise StatsHttpdError("Stats module: %s" % str(value)) + except (isc.cc.session.SessionTimeout, + isc.cc.session.SessionError) as err: + raise StatsHttpdError("%s: %s" % + (err.__class__.__name__, err)) - def load_templates(self): - """Setup the bodies of XSD and XSL documents to be responds to - HTTP clients. Before that it also creates XML tag structures by - using xml.etree.ElementTree.Element class and substitutes - concrete strings with parameters embed in the string.Template - object.""" + def xml_handler(self): + """Handler which requests to Stats daemon to obtain statistics + data and returns the body of XML document""" + xml_list=[] + for (mod, spec) in self.get_stats_data().items(): + if not spec: continue + elem1 = xml.etree.ElementTree.Element(str(mod)) + for (k, v) in spec.items(): + elem2 = xml.etree.ElementTree.Element(str(k)) + elem2.text = str(v) + elem1.append(elem2) + # The coding conversion is tricky. xml..tostring() of Python 3.2 + # returns bytes (not string) regardless of the coding, while + # tostring() of Python 3.1 returns a string. To support both + # cases transparently, we first make sure tostring() returns + # bytes by specifying utf-8 and then convert the result to a + # plain string (code below assume it). 
+ xml_list.append( + str(xml.etree.ElementTree.tostring(elem1, encoding='utf-8'), + encoding='us-ascii')) + xml_string = "".join(xml_list) + self.xml_body = self.open_template(XML_TEMPLATE_LOCATION).substitute( + xml_string=xml_string, + xsd_namespace=XSD_NAMESPACE, + xsd_url_path=XSD_URL_PATH, + xsl_url_path=XSL_URL_PATH) + assert self.xml_body is not None + return self.xml_body + + def xsd_handler(self): + """Handler which just returns the body of XSD document""" # for XSD xsd_root = xml.etree.ElementTree.Element("all") # started with "all" tag - for item in self.get_stats_spec(): - element = xml.etree.ElementTree.Element( - "element", - dict( name=item["item_name"], - type=item["item_type"] if item["item_type"].lower() != 'real' else 'float', - minOccurs="1", - maxOccurs="1" ), - ) - annotation = xml.etree.ElementTree.Element("annotation") - appinfo = xml.etree.ElementTree.Element("appinfo") - documentation = xml.etree.ElementTree.Element("documentation") - appinfo.text = item["item_title"] - documentation.text = item["item_description"] - annotation.append(appinfo) - annotation.append(documentation) - element.append(annotation) - xsd_root.append(element) + for (mod, spec) in self.get_stats_spec().items(): + if not spec: continue + alltag = xml.etree.ElementTree.Element("all") + for item in spec: + element = xml.etree.ElementTree.Element( + "element", + dict( name=item["item_name"], + type=item["item_type"] if item["item_type"].lower() != 'real' else 'float', + minOccurs="1", + maxOccurs="1" ), + ) + annotation = xml.etree.ElementTree.Element("annotation") + appinfo = xml.etree.ElementTree.Element("appinfo") + documentation = xml.etree.ElementTree.Element("documentation") + appinfo.text = item["item_title"] + documentation.text = item["item_description"] + annotation.append(appinfo) + annotation.append(documentation) + element.append(annotation) + alltag.append(element) + + complextype = xml.etree.ElementTree.Element("complexType") + complextype.append(alltag) + 
mod_element = xml.etree.ElementTree.Element("element", { "name" : mod }) + mod_element.append(complextype) + xsd_root.append(mod_element) # The coding conversion is tricky. xml..tostring() of Python 3.2 # returns bytes (not string) regardless of the coding, while # tostring() of Python 3.1 returns a string. To support both @@ -398,25 +435,33 @@ class StatsHttpd: xsd_namespace=XSD_NAMESPACE ) assert self.xsd_body is not None + return self.xsd_body + def xsl_handler(self): + """Handler which just returns the body of XSL document""" # for XSL xsd_root = xml.etree.ElementTree.Element( "xsl:template", dict(match="*")) # started with xml:template tag - for item in self.get_stats_spec(): - tr = xml.etree.ElementTree.Element("tr") - td1 = xml.etree.ElementTree.Element( - "td", { "class" : "title", - "title" : item["item_description"] }) - td1.text = item["item_title"] - td2 = xml.etree.ElementTree.Element("td") - xsl_valueof = xml.etree.ElementTree.Element( - "xsl:value-of", - dict(select=item["item_name"])) - td2.append(xsl_valueof) - tr.append(td1) - tr.append(td2) - xsd_root.append(tr) + for (mod, spec) in self.get_stats_spec().items(): + if not spec: continue + for item in spec: + tr = xml.etree.ElementTree.Element("tr") + td0 = xml.etree.ElementTree.Element("td") + td0.text = str(mod) + td1 = xml.etree.ElementTree.Element( + "td", { "class" : "title", + "title" : item["item_description"] }) + td1.text = item["item_title"] + td2 = xml.etree.ElementTree.Element("td") + xsl_valueof = xml.etree.ElementTree.Element( + "xsl:value-of", + dict(select=mod+'/'+item["item_name"])) + td2.append(xsl_valueof) + tr.append(td0) + tr.append(td1) + tr.append(td2) + xsd_root.append(tr) # The coding conversion is tricky. xml..tostring() of Python 3.2 # returns bytes (not string) regardless of the coding, while # tostring() of Python 3.1 returns a string. 
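The new `xsd_handler` nests each module's items as `element > complexType > all > element`, so the generated schema mirrors the two-level module/item statistics tree. A runnable sketch of one per-module fragment (the `annotation`/`appinfo`/`documentation` children are omitted here for brevity):

```python
import xml.etree.ElementTree as ET

def build_xsd_fragment(mod, items):
    """Build the per-module XSD fragment the patched xsd_handler emits.
    `items` is a list of spec dicts with item_name/item_type keys."""
    alltag = ET.Element("all")
    for item in items:
        element = ET.Element(
            "element",
            dict(name=item["item_name"],
                 # XSD has no 'real' type, so it is mapped to 'float'
                 type=item["item_type"]
                      if item["item_type"].lower() != 'real' else 'float',
                 minOccurs="1",
                 maxOccurs="1"))
        alltag.append(element)
    complextype = ET.Element("complexType")
    complextype.append(alltag)
    mod_element = ET.Element("element", {"name": mod})
    mod_element.append(complextype)
    return mod_element
```

The `'real' -> 'float'` mapping is carried over verbatim from the patch; BIND 10 spec files use `real` while XML Schema defines `float`.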
To support both @@ -429,47 +474,15 @@ class StatsHttpd: xsl_string=xsl_string, xsd_namespace=XSD_NAMESPACE) assert self.xsl_body is not None - - def xml_handler(self): - """Handler which requests to Stats daemon to obtain statistics - data and returns the body of XML document""" - xml_list=[] - for (k, v) in self.get_stats_data().items(): - (k, v) = (str(k), str(v)) - elem = xml.etree.ElementTree.Element(k) - elem.text = v - # The coding conversion is tricky. xml..tostring() of Python 3.2 - # returns bytes (not string) regardless of the coding, while - # tostring() of Python 3.1 returns a string. To support both - # cases transparently, we first make sure tostring() returns - # bytes by specifying utf-8 and then convert the result to a - # plain string (code below assume it). - xml_list.append( - str(xml.etree.ElementTree.tostring(elem, encoding='utf-8'), - encoding='us-ascii')) - xml_string = "".join(xml_list) - self.xml_body = self.open_template(XML_TEMPLATE_LOCATION).substitute( - xml_string=xml_string, - xsd_namespace=XSD_NAMESPACE, - xsd_url_path=XSD_URL_PATH, - xsl_url_path=XSL_URL_PATH) - assert self.xml_body is not None - return self.xml_body - - def xsd_handler(self): - """Handler which just returns the body of XSD document""" - return self.xsd_body - - def xsl_handler(self): - """Handler which just returns the body of XSL document""" return self.xsl_body def open_template(self, file_name): """It opens a template file, and it loads all lines to a string variable and returns string. Template object includes the variable. 
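The `xsl_handler` change adds a module-name cell to each table row and, crucially, qualifies the `xsl:value-of` select path as `module/item` rather than just `item`, since the XML document is now two levels deep. A sketch of one row; element indexing is used instead of `find()` because the `xsl:` prefix would otherwise need a namespace map:

```python
import xml.etree.ElementTree as ET

def build_xsl_row(mod, item):
    """One table row of the patched xsl_handler: module name, item title,
    and an <xsl:value-of select="MOD/ITEM"/> cell."""
    tr = ET.Element("tr")
    td0 = ET.Element("td")
    td0.text = str(mod)
    td1 = ET.Element("td", {"class": "title",
                            "title": item["item_description"]})
    td1.text = item["item_title"]
    td2 = ET.Element("td")
    # select is qualified with the module name, matching the new XML layout
    td2.append(ET.Element("xsl:value-of",
                          dict(select=mod + '/' + item["item_name"])))
    for td in (td0, td1, td2):
        tr.append(td)
    return tr
```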
Limitation of a file size isn't needed there.""" - lines = "".join( - open(file_name, 'r').readlines()) + f = open(file_name, 'r') + lines = "".join(f.readlines()) + f.close() assert lines is not None return string.Template(lines) From c074f6e0b72c3facf6b325b17dea1ca13a2788cc Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 19:56:24 +0900 Subject: [PATCH 123/175] [trac930] modify Stats - remove the unneeded Subject and Listener classes - add StatsError for handling errors in Stats - add new methods (update_modules, update_statistics_data and get_statistics_data) - modify the implementations of the existing commands (show and set) according to the changes in stats.spec - remove the reset and remove commands because the stats module cannot manage other modules' statistics data schemas - add strict validation of each statistics data item (if validation fails, an error is reported) - show the stats module's PID when the status command is invoked - add a new showschema command invokable via bindctl - require owner module name and statistics item name arguments for the set command - accept owner module name and statistics item name arguments in the show and showschema commands - exit with code 1 on runtime errors - record the boot time in _BASETIME --- src/bin/stats/stats.py.in | 666 ++++++++++++++++++-------------------- 1 file changed, 310 insertions(+), 356 deletions(-) diff --git a/src/bin/stats/stats.py.in b/src/bin/stats/stats.py.in index ce3d9f4612..3faa3059a0 100644 --- a/src/bin/stats/stats.py.in +++ b/src/bin/stats/stats.py.in @@ -15,16 +15,17 @@ # NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION # WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
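The `open_template` change just above closes the file handle explicitly instead of leaking it through an anonymous `open(...)`. A self-contained sketch of the same loader, with the demo file created via `tempfile` (an addition here, not part of the patch); a `try/finally` is used so the handle is released even if reading fails:

```python
import os
import string
import tempfile

def open_template(file_name):
    """Read a template file into a string.Template, closing the file
    explicitly as the patched open_template now does."""
    f = open(file_name, 'r')
    try:
        lines = "".join(f.readlines())
    finally:
        f.close()
    return string.Template(lines)

# Demo: write a small template file, load it, and substitute a variable,
# the way the XML/XSD/XSL handlers substitute into their templates.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as f:
    f.write("<stats>$xml_string</stats>")
rendered = open_template(path).substitute(xml_string="<x/>")
os.remove(path)
```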
+""" +Statistics daemon in BIND 10 + +""" import sys; sys.path.append ('@@PYTHONPATH@@') import os -import signal -import select from time import time, strftime, gmtime from optparse import OptionParser, OptionValueError -from collections import defaultdict -from isc.config.ccsession import ModuleCCSession, create_answer -from isc.cc import Session, SessionError +import isc +import isc.util.process import isc.log from stats_messages import * @@ -35,352 +36,24 @@ logger = isc.log.Logger("stats") # have #1074 DBG_STATS_MESSAGING = 30 +# This is for boot_time of Stats +_BASETIME = gmtime() + # for setproctitle -import isc.util.process isc.util.process.rename() # If B10_FROM_SOURCE is set in the environment, we use data files # from a directory relative to that, otherwise we use the ones # installed on the system if "B10_FROM_SOURCE" in os.environ: - BASE_LOCATION = os.environ["B10_FROM_SOURCE"] + os.sep + \ - "src" + os.sep + "bin" + os.sep + "stats" + SPECFILE_LOCATION = os.environ["B10_FROM_SOURCE"] + os.sep + \ + "src" + os.sep + "bin" + os.sep + "stats" + os.sep + "stats.spec" else: PREFIX = "@prefix@" DATAROOTDIR = "@datarootdir@" - BASE_LOCATION = "@datadir@" + os.sep + "@PACKAGE@" - BASE_LOCATION = BASE_LOCATION.replace("${datarootdir}", DATAROOTDIR).replace("${prefix}", PREFIX) -SPECFILE_LOCATION = BASE_LOCATION + os.sep + "stats.spec" -SCHEMA_SPECFILE_LOCATION = BASE_LOCATION + os.sep + "stats-schema.spec" - -class Singleton(type): - """ - A abstract class of singleton pattern - """ - # Because of singleton pattern: - # At the beginning of coding, one UNIX domain socket is needed - # for config manager, another socket is needed for stats module, - # then stats module might need two sockets. So I adopted the - # singleton pattern because I avoid creating multiple sockets in - # one stats module. But in the initial version stats module - # reports only via bindctl, so just one socket is needed. To use - # the singleton pattern is not important now. 
:( - - def __init__(self, *args, **kwargs): - type.__init__(self, *args, **kwargs) - self._instances = {} - - def __call__(self, *args, **kwargs): - if args not in self._instances: - self._instances[args]={} - kw = tuple(kwargs.items()) - if kw not in self._instances[args]: - self._instances[args][kw] = type.__call__(self, *args, **kwargs) - return self._instances[args][kw] - -class Callback(): - """ - A Callback handler class - """ - def __init__(self, name=None, callback=None, args=(), kwargs={}): - self.name = name - self.callback = callback - self.args = args - self.kwargs = kwargs - - def __call__(self, *args, **kwargs): - if not args: - args = self.args - if not kwargs: - kwargs = self.kwargs - if self.callback: - return self.callback(*args, **kwargs) - -class Subject(): - """ - A abstract subject class of observer pattern - """ - # Because of observer pattern: - # In the initial release, I'm also sure that observer pattern - # isn't definitely needed because the interface between gathering - # and reporting statistics data is single. However in the future - # release, the interfaces may be multiple, that is, multiple - # listeners may be needed. For example, one interface, which - # stats module has, is for between ''config manager'' and stats - # module, another interface is for between ''HTTP server'' and - # stats module, and one more interface is for between ''SNMP - # server'' and stats module. So by considering that stats module - # needs multiple interfaces in the future release, I adopted the - # observer pattern in stats module. But I don't have concrete - # ideas in case of multiple listener currently. 
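The trac930 patch deletes the `Singleton` metaclass and the observer-pattern classes above as unneeded. For reference, the removed metaclass cached one instance per combination of constructor arguments; a condensed runnable sketch of that behavior, with `Session` as a hypothetical example class:

```python
class Singleton(type):
    """Metaclass caching one instance per (args, kwargs) combination,
    modeled on the class removed by this patch."""
    def __init__(cls, *args, **kwargs):
        type.__init__(cls, *args, **kwargs)
        cls._instances = {}

    def __call__(cls, *args, **kwargs):
        if args not in cls._instances:
            cls._instances[args] = {}
        # kwargs is not hashable, so it is flattened into a tuple key
        kw = tuple(kwargs.items())
        if kw not in cls._instances[args]:
            cls._instances[args][kw] = type.__call__(cls, *args, **kwargs)
        return cls._instances[args][kw]

class Session(metaclass=Singleton):
    """Hypothetical session class; repeated construction with the same
    arguments yields the same object."""
    def __init__(self, name="default"):
        self.name = name
```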
- - def __init__(self): - self._listeners = [] - - def attach(self, listener): - if not listener in self._listeners: - self._listeners.append(listener) - - def detach(self, listener): - try: - self._listeners.remove(listener) - except ValueError: - pass - - def notify(self, event, modifier=None): - for listener in self._listeners: - if modifier != listener: - listener.update(event) - -class Listener(): - """ - A abstract listener class of observer pattern - """ - def __init__(self, subject): - self.subject = subject - self.subject.attach(self) - self.events = {} - - def update(self, name): - if name in self.events: - callback = self.events[name] - return callback() - - def add_event(self, event): - self.events[event.name]=event - -class SessionSubject(Subject, metaclass=Singleton): - """ - A concrete subject class which creates CC session object - """ - def __init__(self, session=None): - Subject.__init__(self) - self.session=session - self.running = False - - def start(self): - self.running = True - self.notify('start') - - def stop(self): - self.running = False - self.notify('stop') - - def check(self): - self.notify('check') - -class CCSessionListener(Listener): - """ - A concrete listener class which creates SessionSubject object and - ModuleCCSession object - """ - def __init__(self, subject): - Listener.__init__(self, subject) - self.session = subject.session - self.boot_time = get_datetime() - - # create ModuleCCSession object - self.cc_session = ModuleCCSession(SPECFILE_LOCATION, - self.config_handler, - self.command_handler, - self.session) - - self.session = self.subject.session = self.cc_session._session - - # initialize internal data - self.stats_spec = isc.config.module_spec_from_file(SCHEMA_SPECFILE_LOCATION).get_config_spec() - self.stats_data = self.initialize_data(self.stats_spec) - - # add event handler invoked via SessionSubject object - self.add_event(Callback('start', self.start)) - self.add_event(Callback('stop', self.stop)) - 
self.add_event(Callback('check', self.check)) - # don't add 'command_' suffix to the special commands in - # order to prevent executing internal command via bindctl - - # get commands spec - self.commands_spec = self.cc_session.get_module_spec().get_commands_spec() - - # add event handler related command_handler of ModuleCCSession - # invoked via bindctl - for cmd in self.commands_spec: - try: - # add prefix "command_" - name = "command_" + cmd["command_name"] - callback = getattr(self, name) - kwargs = self.initialize_data(cmd["command_args"]) - self.add_event(Callback(name=name, callback=callback, args=(), kwargs=kwargs)) - except AttributeError as ae: - logger.error(STATS_UNKNOWN_COMMAND_IN_SPEC, cmd["command_name"]) - - def start(self): - """ - start the cc chanel - """ - # set initial value - self.stats_data['stats.boot_time'] = self.boot_time - self.stats_data['stats.start_time'] = get_datetime() - self.stats_data['stats.last_update_time'] = get_datetime() - self.stats_data['stats.lname'] = self.session.lname - self.cc_session.start() - # request Bob to send statistics data - logger.debug(DBG_STATS_MESSAGING, STATS_SEND_REQUEST_BOSS) - cmd = isc.config.ccsession.create_command("sendstats", None) - seq = self.session.group_sendmsg(cmd, 'Boss') - self.session.group_recvmsg(True, seq) - - def stop(self): - """ - stop the cc chanel - """ - return self.cc_session.close() - - def check(self): - """ - check the cc chanel - """ - return self.cc_session.check_command(False) - - def config_handler(self, new_config): - """ - handle a configure from the cc channel - """ - logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_NEW_CONFIG, - new_config) - - # do nothing currently - return create_answer(0) - - def command_handler(self, command, *args, **kwargs): - """ - handle commands from the cc channel - """ - # add 'command_' suffix in order to executing command via bindctl - name = 'command_' + command - - if name in self.events: - event = self.events[name] - return 
event(*args, **kwargs) - else: - return self.command_unknown(command, args) - - def command_shutdown(self, args): - """ - handle shutdown command - """ - logger.info(STATS_RECEIVED_SHUTDOWN_COMMAND) - self.subject.running = False - return create_answer(0) - - def command_set(self, args, stats_data={}): - """ - handle set command - """ - # 'args' must be dictionary type - self.stats_data.update(args['stats_data']) - - # overwrite "stats.LastUpdateTime" - self.stats_data['stats.last_update_time'] = get_datetime() - - return create_answer(0) - - def command_remove(self, args, stats_item_name=''): - """ - handle remove command - """ - - # 'args' must be dictionary type - if args and args['stats_item_name'] in self.stats_data: - stats_item_name = args['stats_item_name'] - - logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_REMOVE_COMMAND, - stats_item_name) - - # just remove one item - self.stats_data.pop(stats_item_name) - - return create_answer(0) - - def command_show(self, args, stats_item_name=''): - """ - handle show command - """ - - # always overwrite 'report_time' and 'stats.timestamp' - # if "show" command invoked - self.stats_data['report_time'] = get_datetime() - self.stats_data['stats.timestamp'] = get_timestamp() - - # if with args - if args and args['stats_item_name'] in self.stats_data: - stats_item_name = args['stats_item_name'] - logger.debug(DBG_STATS_MESSAGING, - STATS_RECEIVED_SHOW_NAME_COMMAND, - stats_item_name) - return create_answer(0, {stats_item_name: self.stats_data[stats_item_name]}) - - logger.debug(DBG_STATS_MESSAGING, - STATS_RECEIVED_SHOW_ALL_COMMAND) - return create_answer(0, self.stats_data) - - def command_reset(self, args): - """ - handle reset command - """ - logger.debug(DBG_STATS_MESSAGING, - STATS_RECEIVED_RESET_COMMAND) - - # re-initialize internal variables - self.stats_data = self.initialize_data(self.stats_spec) - - # reset initial value - self.stats_data['stats.boot_time'] = self.boot_time - self.stats_data['stats.start_time'] 
= get_datetime() - self.stats_data['stats.last_update_time'] = get_datetime() - self.stats_data['stats.lname'] = self.session.lname - - return create_answer(0) - - def command_status(self, args): - """ - handle status command - """ - logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_STATUS_COMMAND) - # just return "I'm alive." - return create_answer(0, "I'm alive.") - - def command_unknown(self, command, args): - """ - handle an unknown command - """ - logger.error(STATS_RECEIVED_UNKNOWN_COMMAND, command) - return create_answer(1, "Unknown command: '"+str(command)+"'") - - - def initialize_data(self, spec): - """ - initialize stats data - """ - def __get_init_val(spec): - if spec['item_type'] == 'null': - return None - elif spec['item_type'] == 'boolean': - return bool(spec.get('item_default', False)) - elif spec['item_type'] == 'string': - return str(spec.get('item_default', '')) - elif spec['item_type'] in set(['number', 'integer']): - return int(spec.get('item_default', 0)) - elif spec['item_type'] in set(['float', 'double', 'real']): - return float(spec.get('item_default', 0.0)) - elif spec['item_type'] in set(['list', 'array']): - return spec.get('item_default', - [ __get_init_val(s) for s in spec['list_item_spec'] ]) - elif spec['item_type'] in set(['map', 'object']): - return spec.get('item_default', - dict([ (s['item_name'], __get_init_val(s)) for s in spec['map_item_spec'] ]) ) - else: - return spec.get('item_default') - return dict([ (s['item_name'], __get_init_val(s)) for s in spec ]) + SPECFILE_LOCATION = "@datadir@" + os.sep + "@PACKAGE@" + os.sep + "stats.spec" + SPECFILE_LOCATION = SPECFILE_LOCATION.replace("${datarootdir}", DATAROOTDIR)\ + .replace("${prefix}", PREFIX) def get_timestamp(): """ @@ -388,33 +61,314 @@ def get_timestamp(): """ return time() -def get_datetime(): +def get_datetime(gmt=None): """ get current datetime """ - return strftime("%Y-%m-%dT%H:%M:%SZ", gmtime()) + if not gmt: gmt = gmtime() + return strftime("%Y-%m-%dT%H:%M:%SZ", 
gmt) -def main(session=None): +def parse_spec(spec): + """ + parse spec type data + """ + def _parse_spec(spec): + item_type = spec['item_type'] + if item_type == "integer": + return int(spec.get('item_default', 0)) + elif item_type == "real": + return float(spec.get('item_default', 0.0)) + elif item_type == "boolean": + return bool(spec.get('item_default', False)) + elif item_type == "string": + return str(spec.get('item_default', "")) + elif item_type == "list": + return spec.get( + "item_default", + [ _parse_spec(s) for s in spec["list_item_spec"] ]) + elif item_type == "map": + return spec.get( + "item_default", + dict([ (s["item_name"], _parse_spec(s)) for s in spec["map_item_spec"] ]) ) + else: + return spec.get("item_default", None) + return dict([ (s['item_name'], _parse_spec(s)) for s in spec ]) + +class Callback(): + """ + A Callback handler class + """ + def __init__(self, command=None, args=(), kwargs={}): + self.command = command + self.args = args + self.kwargs = kwargs + + def __call__(self, *args, **kwargs): + if not args: args = self.args + if not kwargs: kwargs = self.kwargs + if self.command: return self.command(*args, **kwargs) + +class StatsError(Exception): + """Exception class for Stats class""" + pass + +class Stats: + """ + Main class of stats module + """ + def __init__(self): + self.running = False + # create ModuleCCSession object + self.mccs = isc.config.ModuleCCSession(SPECFILE_LOCATION, + self.config_handler, + self.command_handler) + self.cc_session = self.mccs._session + # get module spec + self.module_name = self.mccs.get_module_spec().get_module_name() + self.modules = {} + self.statistics_data = {} + # get commands spec + self.commands_spec = self.mccs.get_module_spec().get_commands_spec() + # add event handler related command_handler of ModuleCCSession + self.callbacks = {} + for cmd in self.commands_spec: + # add prefix "command_" + name = "command_" + cmd["command_name"] + try: + callback = getattr(self, name) + kwargs = 
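The new `parse_spec` helper turns a module's spec list into a dict of default statistics values, recursing into `list` and `map` items. Reproduced here standalone (the body matches the patch) so its behavior can be demonstrated:

```python
def parse_spec(spec):
    """Build default statistics values from a spec list: each item yields
    its item_default, or a type-appropriate zero value, recursing into
    list and map item specs."""
    def _parse_spec(spec):
        item_type = spec['item_type']
        if item_type == "integer":
            return int(spec.get('item_default', 0))
        elif item_type == "real":
            return float(spec.get('item_default', 0.0))
        elif item_type == "boolean":
            return bool(spec.get('item_default', False))
        elif item_type == "string":
            return str(spec.get('item_default', ""))
        elif item_type == "list":
            return spec.get("item_default",
                            [_parse_spec(s) for s in spec["list_item_spec"]])
        elif item_type == "map":
            return spec.get("item_default",
                            dict((s["item_name"], _parse_spec(s))
                                 for s in spec["map_item_spec"]))
        else:
            return spec.get("item_default", None)
    return dict((s['item_name'], _parse_spec(s)) for s in spec)
```

The same helper also supplies default keyword arguments for each command's `Callback`, via `parse_spec(cmd["command_args"])`.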
parse_spec(cmd["command_args"]) + self.callbacks[name] = Callback(command=callback, kwargs=kwargs) + except AttributeError: + raise StatsError(STATS_UNKNOWN_COMMAND_IN_SPEC, cmd["command_name"]) + self.mccs.start() + + def start(self): + """ + Start stats module + """ + self.running = True + # TODO: should be added into new logging interface + # if self.verbose: + # sys.stdout.write("[b10-stats] starting\n") + + # request Bob to send statistics data + logger.debug(DBG_STATS_MESSAGING, STATS_SEND_REQUEST_BOSS) + cmd = isc.config.ccsession.create_command("sendstats", None) + seq = self.cc_session.group_sendmsg(cmd, 'Boss') + self.cc_session.group_recvmsg(True, seq) + + # initialized Statistics data + errors = self.update_statistics_data( + self.module_name, + lname=self.cc_session.lname, + boot_time=get_datetime(_BASETIME) + ) + if errors: + raise StatsError("stats spec file is incorrect") + + while self.running: + self.mccs.check_command(False) + + def config_handler(self, new_config): + """ + handle a configure from the cc channel + """ + logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_NEW_CONFIG, + new_config) + # do nothing currently + return isc.config.create_answer(0) + + def command_handler(self, command, kwargs): + """ + handle commands from the cc channel + """ + name = 'command_' + command + if name in self.callbacks: + callback = self.callbacks[name] + if kwargs: + return callback(**kwargs) + else: + return callback() + else: + logger.error(STATS_RECEIVED_UNKNOWN_COMMAND, command) + return isc.config.create_answer(1, "Unknown command: '"+str(command)+"'") + + def update_modules(self): + """ + update information of each module + """ + modules = {} + seq = self.cc_session.group_sendmsg( + isc.config.ccsession.create_command( + isc.config.ccsession.COMMAND_GET_STATISTICS_SPEC), + 'ConfigManager') + (answer, env) = self.cc_session.group_recvmsg(False, seq) + if answer: + (rcode, value) = isc.config.ccsession.parse_answer(answer) + if rcode == 0: + for mod in 
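`Stats.command_handler` dispatches by prefixing the incoming command name with `"command_"` and looking up a pre-registered `Callback`, which was built at startup via `getattr` over the commands spec. A compact runnable sketch of that dispatch; `MiniStats` and its two commands are invented here to keep the example self-contained:

```python
class Callback():
    """Callback wrapper from the patch: binds a callable to default
    args/kwargs used when the call site supplies none."""
    def __init__(self, command=None, args=(), kwargs={}):
        self.command = command
        self.args = args
        self.kwargs = kwargs

    def __call__(self, *args, **kwargs):
        if not args: args = self.args
        if not kwargs: kwargs = self.kwargs
        if self.command: return self.command(*args, **kwargs)

class MiniStats:
    """Sketch of the 'command_' + name dispatch in Stats.command_handler."""
    def __init__(self):
        self.callbacks = {}
        for cmd in ("status", "shutdown"):
            name = "command_" + cmd
            # getattr fails loudly at startup for commands in the spec
            # that have no matching method (StatsError in the patch)
            self.callbacks[name] = Callback(command=getattr(self, name))

    def command_status(self):
        return (0, "Stats is up.")

    def command_shutdown(self):
        return (0, None)

    def command_handler(self, command, kwargs=None):
        name = 'command_' + command
        if name in self.callbacks:
            callback = self.callbacks[name]
            return callback(**kwargs) if kwargs else callback()
        return (1, "Unknown command: '%s'" % command)
```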
value: + spec = { "module_name" : mod, + "statistics" : [] } + if value[mod] and type(value[mod]) is list: + spec["statistics"] = value[mod] + modules[mod] = isc.config.module_spec.ModuleSpec(spec) + modules[self.module_name] = self.mccs.get_module_spec() + self.modules = modules + + def get_statistics_data(self, owner=None, name=None): + """ + return statistics data which stats module has of each module + """ + self.update_statistics_data() + if owner and name: + try: + return self.statistics_data[owner][name] + except KeyError: + pass + elif owner: + try: + return self.statistics_data[owner] + except KeyError: + pass + elif name: + pass + else: + return self.statistics_data + + def update_statistics_data(self, owner=None, **data): + """ + change statistics date of specified module into specified data + """ + self.update_modules() + statistics_data = {} + for (name, module) in self.modules.items(): + value = parse_spec(module.get_statistics_spec()) + if module.validate_statistics(True, value): + statistics_data[name] = value + for (name, value) in self.statistics_data.items(): + if name in statistics_data: + statistics_data[name].update(value) + else: + statistics_data[name] = value + self.statistics_data = statistics_data + if owner and data: + errors = [] + try: + if self.modules[owner].validate_statistics(False, data, errors): + self.statistics_data[owner].update(data) + return + except KeyError: + errors.append('unknown module name') + return errors + + def command_status(self): + """ + handle status command + """ + logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_STATUS_COMMAND) + return isc.config.create_answer( + 0, "Stats is up. 
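The core of `update_statistics_data` is a two-pass merge: build fresh default values for every known module from its spec, then overlay whatever values have accumulated so far, keeping modules that only exist in the accumulated data. Extracted as a pure function (the name is chosen here for illustration; the patch does this inline with validation added):

```python
def merge_statistics(defaults, accumulated):
    """Merge accumulated statistics over per-module spec defaults.
    `defaults` and `accumulated` are both {module: {item: value}} dicts."""
    statistics_data = {}
    # start from a fresh copy of the defaults so callers' dicts are untouched
    for name, value in defaults.items():
        statistics_data[name] = dict(value)
    # overlay accumulated values; unknown modules are kept as-is
    for name, value in accumulated.items():
        if name in statistics_data:
            statistics_data[name].update(value)
        else:
            statistics_data[name] = value
    return statistics_data
```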
(PID " + str(os.getpid()) + ")") + + def command_shutdown(self): + """ + handle shutdown command + """ + logger.info(STATS_RECEIVED_SHUTDOWN_COMMAND) + self.running = False + return isc.config.create_answer(0) + + def command_show(self, owner=None, name=None): + """ + handle show command + """ + if (owner or name): + logger.debug(DBG_STATS_MESSAGING, + STATS_RECEIVED_SHOW_NAME_COMMAND, + str(owner)+", "+str(name)) + else: + logger.debug(DBG_STATS_MESSAGING, + STATS_RECEIVED_SHOW_ALL_COMMAND) + if owner and not name: + return isc.config.create_answer(1, "item name is not specified") + errors = self.update_statistics_data( + self.module_name, + timestamp=get_timestamp(), + report_time=get_datetime() + ) + if errors: raise StatsError("stats spec file is incorrect") + ret = self.get_statistics_data(owner, name) + if ret: + return isc.config.create_answer(0, ret) + else: + return isc.config.create_answer( + 1, "specified module name and/or item name are incorrect") + + def command_showschema(self, owner=None, name=None): + """ + handle show command + """ + # TODO: should be added into new logging interface + # if self.verbose: + # sys.stdout.write("[b10-stats] 'showschema' command received\n") + self.update_modules() + schema = {} + schema_byname = {} + for mod in self.modules: + spec = self.modules[mod].get_statistics_spec() + schema_byname[mod] = {} + if spec: + schema[mod] = spec + for item in spec: + schema_byname[mod][item['item_name']] = item + if owner: + try: + if name: + return isc.config.create_answer(0, schema_byname[owner][name]) + else: + return isc.config.create_answer(0, schema[owner]) + except KeyError: + pass + else: + if name: + return isc.config.create_answer(1, "module name is not specified") + else: + return isc.config.create_answer(0, schema) + return isc.config.create_answer( + 1, "specified module name and/or item name are incorrect") + + def command_set(self, owner, data): + """ + handle set command + """ + errors = 
self.update_statistics_data(owner, **data) + if errors: + return isc.config.create_answer( + 1, + "specified module name and/or statistics data are incorrect: " + + ", ".join(errors)) + errors = self.update_statistics_data( + self.module_name, last_update_time=get_datetime() ) + if errors: + raise StatsError("stats spec file is incorrect") + return isc.config.create_answer(0) + +if __name__ == "__main__": try: parser = OptionParser() - parser.add_option("-v", "--verbose", dest="verbose", action="store_true", - help="display more about what is going on") + parser.add_option( + "-v", "--verbose", dest="verbose", action="store_true", + help="display more about what is going on") (options, args) = parser.parse_args() if options.verbose: isc.log.init("b10-stats", "DEBUG", 99) - subject = SessionSubject(session=session) - listener = CCSessionListener(subject) - subject.start() - while subject.running: - subject.check() - subject.stop() - + stats = Stats() + stats.start() except OptionValueError as ove: logger.fatal(STATS_BAD_OPTION_VALUE, ove) except SessionError as se: logger.fatal(STATS_CC_SESSION_ERROR, se) + # TODO: should be added into new logging interface + except StatsError as se: + sys.exit("[b10-stats] %s" % se) except KeyboardInterrupt as kie: logger.info(STATS_STOPPED_BY_KEYBOARD) - -if __name__ == "__main__": - main() From c06463cf96ea7401325a208af8ba457e661d1cec Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 20:08:22 +0900 Subject: [PATCH 124/175] [trac930] refurbish the unittests for new stats module, new stats httpd module and new mockups and utilities in test_utils.py --- src/bin/stats/tests/b10-stats-httpd_test.py | 593 ++++++---- src/bin/stats/tests/b10-stats_test.py | 1079 ++++++++----------- src/bin/stats/tests/test_utils.py | 37 +- 3 files changed, 862 insertions(+), 847 deletions(-) diff --git a/src/bin/stats/tests/b10-stats-httpd_test.py b/src/bin/stats/tests/b10-stats-httpd_test.py index 6d72dc2f38..ae07aa9f27 100644 --- 
a/src/bin/stats/tests/b10-stats-httpd_test.py +++ b/src/bin/stats/tests/b10-stats-httpd_test.py @@ -15,145 +15,259 @@ import unittest import os -import http.server -import string -import fake_select import imp -import sys -import fake_socket - -import isc.cc +import socket +import errno +import select +import string +import time +import threading +import http.client +import xml.etree.ElementTree +import isc import stats_httpd -stats_httpd.socket = fake_socket -stats_httpd.select = fake_select +import stats +from test_utils import BaseModules, ThreadingServerManager, MyStats, MyStatsHttpd, TIMEOUT_SEC DUMMY_DATA = { - "auth.queries.tcp": 10000, - "auth.queries.udp": 12000, - "bind10.boot_time": "2011-03-04T11:59:05Z", - "report_time": "2011-03-04T11:59:19Z", - "stats.boot_time": "2011-03-04T11:59:06Z", - "stats.last_update_time": "2011-03-04T11:59:07Z", - "stats.lname": "4d70d40a_c@host", - "stats.start_time": "2011-03-04T11:59:06Z", - "stats.timestamp": 1299239959.560846 + 'Boss' : { + "boot_time": "2011-03-04T11:59:06Z" + }, + 'Auth' : { + "queries.tcp": 2, + "queries.udp": 3 + }, + 'Stats' : { + "report_time": "2011-03-04T11:59:19Z", + "boot_time": "2011-03-04T11:59:06Z", + "last_update_time": "2011-03-04T11:59:07Z", + "lname": "4d70d40a_c@host", + "timestamp": 1299239959.560846 + } } -def push_answer(stats_httpd): - stats_httpd.cc_session.group_sendmsg( - { 'result': - [ 0, DUMMY_DATA ] }, "Stats") - -def pull_query(stats_httpd): - (msg, env) = stats_httpd.cc_session.group_recvmsg() - if 'result' in msg: - (ret, arg) = isc.config.ccsession.parse_answer(msg) - else: - (ret, arg) = isc.config.ccsession.parse_command(msg) - return (ret, arg, env) - class TestHttpHandler(unittest.TestCase): """Tests for HttpHandler class""" def setUp(self): - self.stats_httpd = stats_httpd.StatsHttpd() - self.assertTrue(type(self.stats_httpd.httpd) is list) - self.httpd = self.stats_httpd.httpd + self.base = BaseModules() + self.stats_server = ThreadingServerManager(MyStats) + 
self.stats = self.stats_server.server + self.stats_server.run() + + def tearDown(self): + self.stats_server.shutdown() + self.base.shutdown() def test_do_GET(self): - for ht in self.httpd: - self._test_do_GET(ht._handler) - - def _test_do_GET(self, handler): + (address, port) = ('127.0.0.1', 65450) + statshttpd_server = ThreadingServerManager(MyStatsHttpd) + self.stats_httpd = statshttpd_server.server + self.stats_httpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) + self.assertTrue(type(self.stats_httpd.httpd) is list) + self.assertEqual(len(self.stats_httpd.httpd), 0) + statshttpd_server.run() + time.sleep(TIMEOUT_SEC*5) + client = http.client.HTTPConnection(address, port) + client._http_vsn_str = 'HTTP/1.0\n' + client.connect() # URL is '/bind10/statistics/xml' - handler.path = stats_httpd.XML_URL_PATH - push_answer(self.stats_httpd) - handler.do_GET() - (ret, arg, env) = pull_query(self.stats_httpd) - self.assertEqual(ret, "show") - self.assertIsNone(arg) - self.assertTrue('group' in env) - self.assertEqual(env['group'], 'Stats') - self.assertEqual(handler.response.code, 200) - self.assertEqual(handler.response.headers["Content-type"], "text/xml") - self.assertTrue(handler.response.headers["Content-Length"] > 0) - self.assertTrue(handler.response.wrote_headers) - self.assertTrue(handler.response.body.find(stats_httpd.XSD_NAMESPACE)>0) - self.assertTrue(handler.response.body.find(stats_httpd.XSD_URL_PATH)>0) - for (k, v) in DUMMY_DATA.items(): - self.assertTrue(handler.response.body.find(str(k))>0) - self.assertTrue(handler.response.body.find(str(v))>0) + client.putrequest('GET', stats_httpd.XML_URL_PATH) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.getheader("Content-type"), "text/xml") + self.assertTrue(int(response.getheader("Content-Length")) > 0) + self.assertEqual(response.status, 200) + root = xml.etree.ElementTree.parse(response).getroot() + self.assertTrue(root.tag.find('stats_data') > 0) + 
for (k,v) in root.attrib.items(): + if k.find('schemaLocation') > 0: + self.assertEqual(v, stats_httpd.XSD_NAMESPACE + ' ' + stats_httpd.XSD_URL_PATH) + for mod in DUMMY_DATA: + for (item, value) in DUMMY_DATA[mod].items(): + self.assertIsNotNone(root.find(mod + '/' + item)) # URL is '/bind10/statistics/xsd' - handler.path = stats_httpd.XSD_URL_PATH - handler.do_GET() - self.assertEqual(handler.response.code, 200) - self.assertEqual(handler.response.headers["Content-type"], "text/xml") - self.assertTrue(handler.response.headers["Content-Length"] > 0) - self.assertTrue(handler.response.wrote_headers) - self.assertTrue(handler.response.body.find(stats_httpd.XSD_NAMESPACE)>0) - for (k, v) in DUMMY_DATA.items(): - self.assertTrue(handler.response.body.find(str(k))>0) + client.putrequest('GET', stats_httpd.XSD_URL_PATH) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.getheader("Content-type"), "text/xml") + self.assertTrue(int(response.getheader("Content-Length")) > 0) + self.assertEqual(response.status, 200) + root = xml.etree.ElementTree.parse(response).getroot() + url_xmlschema = '{http://www.w3.org/2001/XMLSchema}' + tags = [ url_xmlschema + t for t in [ 'element', 'complexType', 'all', 'element' ] ] + xsdpath = '/'.join(tags) + self.assertTrue(root.tag.find('schema') > 0) + self.assertTrue(hasattr(root, 'attrib')) + self.assertTrue('targetNamespace' in root.attrib) + self.assertEqual(root.attrib['targetNamespace'], + stats_httpd.XSD_NAMESPACE) + for elm in root.findall(xsdpath): + self.assertIsNotNone(elm.attrib['name']) + self.assertTrue(elm.attrib['name'] in DUMMY_DATA) # URL is '/bind10/statistics/xsl' - handler.path = stats_httpd.XSL_URL_PATH - handler.do_GET() - self.assertEqual(handler.response.code, 200) - self.assertEqual(handler.response.headers["Content-type"], "text/xml") - self.assertTrue(handler.response.headers["Content-Length"] > 0) - self.assertTrue(handler.response.wrote_headers) - 
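The XSD and XSL assertions above walk namespaced XML with `xml.etree.ElementTree`, where parsed tags carry the namespace URI in Clark notation (`{uri}tag`). A small self-contained illustration of that convention, using a stand-in schema document rather than the real one served by stats-httpd:

```python
# Sketch: Clark notation ("{uri}tag") when walking namespaced XML with
# xml.etree.ElementTree, as the XSD/XSL checks above do.
# The document below is a hypothetical stand-in, not the real schema.
import io
import xml.etree.ElementTree

XS = '{http://www.w3.org/2001/XMLSchema}'
doc = io.StringIO(
    '<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"'
    '           targetNamespace="urn:example">'
    '<xs:element name="Auth"/>'
    '<xs:element name="Stats"/>'
    '</xs:schema>')
root = xml.etree.ElementTree.parse(doc).getroot()

# The parsed tag carries the full namespace URI, so a plain
# findall('element') would match nothing.
tag_ok = (root.tag == XS + 'schema')
names = [elm.attrib['name'] for elm in root.findall(XS + 'element')]
```

This is why the test builds `xsdpath` by joining `url_xmlschema`-prefixed tag names before calling `findall`.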
self.assertTrue(handler.response.body.find(stats_httpd.XSD_NAMESPACE)>0) - for (k, v) in DUMMY_DATA.items(): - self.assertTrue(handler.response.body.find(str(k))>0) + client.putrequest('GET', stats_httpd.XSL_URL_PATH) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.getheader("Content-type"), "text/xml") + self.assertTrue(int(response.getheader("Content-Length")) > 0) + self.assertEqual(response.status, 200) + root = xml.etree.ElementTree.parse(response).getroot() + url_trans = '{http://www.w3.org/1999/XSL/Transform}' + url_xhtml = '{http://www.w3.org/1999/xhtml}' + xslpath = url_trans + 'template/' + url_xhtml + 'tr' + self.assertEqual(root.tag, url_trans + 'stylesheet') + for tr in root.findall(xslpath): + tds = tr.findall(url_xhtml + 'td') + self.assertIsNotNone(tds) + self.assertEqual(type(tds), list) + self.assertTrue(len(tds) > 2) + self.assertTrue(hasattr(tds[0], 'text')) + self.assertTrue(tds[0].text in DUMMY_DATA) + valueof = tds[2].find(url_trans + 'value-of') + self.assertIsNotNone(valueof) + self.assertTrue(hasattr(valueof, 'attrib')) + self.assertIsNotNone(valueof.attrib) + self.assertTrue('select' in valueof.attrib) + self.assertTrue(valueof.attrib['select'] in \ + [ tds[0].text+'/'+item for item in DUMMY_DATA[tds[0].text].keys() ]) # 302 redirect - handler.path = '/' - handler.headers = {'Host': 'my.host.domain'} - handler.do_GET() - self.assertEqual(handler.response.code, 302) - self.assertEqual(handler.response.headers["Location"], - "http://my.host.domain%s" % stats_httpd.XML_URL_PATH) + client._http_vsn_str = 'HTTP/1.1' + client.putrequest('GET', '/') + client.putheader('Host', address) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.status, 302) + self.assertEqual(response.getheader('Location'), + "http://%s:%d%s" % (address, port, stats_httpd.XML_URL_PATH)) - # 404 NotFound - handler.path = '/path/to/foo/bar' - handler.headers = {} - handler.do_GET() - 
self.assertEqual(handler.response.code, 404) + # # 404 NotFound + client._http_vsn_str = 'HTTP/1.0' + client.putrequest('GET', '/path/to/foo/bar') + client.endheaders() + response = client.getresponse() + self.assertEqual(response.status, 404) + client.close() + statshttpd_server.shutdown() + + def test_do_GET_failed1(self): # failure case(connection with Stats is down) - handler.path = stats_httpd.XML_URL_PATH - push_answer(self.stats_httpd) - self.assertFalse(self.stats_httpd.cc_session._socket._closed) - self.stats_httpd.cc_session._socket._closed = True - handler.do_GET() - self.stats_httpd.cc_session._socket._closed = False - self.assertEqual(handler.response.code, 500) - self.stats_httpd.cc_session._clear_queues() + (address, port) = ('127.0.0.1', 65451) + statshttpd_server = ThreadingServerManager(MyStatsHttpd) + statshttpd = statshttpd_server.server + statshttpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) + statshttpd_server.run() + self.assertTrue(self.stats_server.server.running) + self.stats_server.shutdown() + time.sleep(TIMEOUT_SEC*2) + self.assertFalse(self.stats_server.server.running) + statshttpd.cc_session.set_timeout(milliseconds=TIMEOUT_SEC/1000) + client = http.client.HTTPConnection(address, port) + client.connect() - # failure case(Stats module returns err) - handler.path = stats_httpd.XML_URL_PATH - self.stats_httpd.cc_session.group_sendmsg( - { 'result': [ 1, "I have an error." 
] }, "Stats") - self.assertFalse(self.stats_httpd.cc_session._socket._closed) - self.stats_httpd.cc_session._socket._closed = False - handler.do_GET() - self.assertEqual(handler.response.code, 500) - self.stats_httpd.cc_session._clear_queues() + # request XML + client.putrequest('GET', stats_httpd.XML_URL_PATH) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.status, 500) + + # request XSD + client.putrequest('GET', stats_httpd.XSD_URL_PATH) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.status, 500) + + # request XSL + client.putrequest('GET', stats_httpd.XSL_URL_PATH) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.status, 500) + + client.close() + statshttpd_server.shutdown() + + def test_do_GET_failed2(self): + # failure case(connection with Stats is down) + (address, port) = ('127.0.0.1', 65452) + statshttpd_server = ThreadingServerManager(MyStatsHttpd) + self.stats_httpd = statshttpd_server.server + self.stats_httpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) + statshttpd_server.run() + self.stats.mccs.set_command_handler( + lambda cmd, args: \ + isc.config.ccsession.create_answer(1, "I have an error.") + ) + time.sleep(TIMEOUT_SEC*5) + client = http.client.HTTPConnection(address, port) + client.connect() + + # request XML + client.putrequest('GET', stats_httpd.XML_URL_PATH) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.status, 500) + + # request XSD + client.putrequest('GET', stats_httpd.XSD_URL_PATH) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.status, 500) + + # request XSL + client.putrequest('GET', stats_httpd.XSL_URL_PATH) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.status, 500) + + client.close() + statshttpd_server.shutdown() def test_do_HEAD(self): - for ht in self.httpd: - self._test_do_HEAD(ht._handler) 
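`test_do_GET_failed2` above injects a failure by swapping the Stats module's command handler for a lambda that always answers with an error, then checks the HTTP side surfaces a 500. The error-injection pattern in isolation, with a hypothetical `FakeModule` standing in for the module CC session (the real `isc.config` API is not used here):

```python
# Sketch of the error-injection trick: replace a module's command handler
# with one that always returns an error answer, then verify the caller
# sees the failure.  FakeModule is a stand-in, not the isc.config API.
class FakeModule:
    def __init__(self):
        # default handler: success (code 0) echoing its arguments
        self.command_handler = lambda cmd, args: (0, args)

    def set_command_handler(self, handler):
        self.command_handler = handler

    def dispatch(self, cmd, args):
        return self.command_handler(cmd, args)

mod = FakeModule()
healthy = mod.dispatch('show', None)          # (0, None): healthy path

# inject the failure, as the test does with mccs.set_command_handler(...)
mod.set_command_handler(lambda cmd, args: (1, "I have an error."))
code, msg = mod.dispatch('show', None)
```

The HTTP frontend would map any nonzero answer code to a 500 response, which is exactly what the three `assertEqual(response.status, 500)` checks assert.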
+ (address, port) = ('127.0.0.1', 65453) + statshttpd_server = ThreadingServerManager(MyStatsHttpd) + self.stats_httpd = statshttpd_server.server + self.stats_httpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) + statshttpd_server.run() + time.sleep(TIMEOUT_SEC*5) + client = http.client.HTTPConnection(address, port) + client.connect() + client.putrequest('HEAD', stats_httpd.XML_URL_PATH) + client.endheaders() + response = client.getresponse() + self.assertEqual(response.status, 200) - def _test_do_HEAD(self, handler): - handler.path = '/path/to/foo/bar' - handler.do_HEAD() - self.assertEqual(handler.response.code, 404) + client.putrequest('HEAD', '/path/to/foo/bar') + client.endheaders() + response = client.getresponse() + self.assertEqual(response.status, 404) + client.close() + statshttpd_server.shutdown() + + def test_log_message(self): + class MyHttpHandler(stats_httpd.HttpHandler): + def __init__(self): + class _Dummy_class_(): pass + self.address_string = lambda : 'dummyhost' + self.log_date_time_string = lambda : \ + 'DD/MM/YYYY HH:MI:SS' + self.server = _Dummy_class_() + self.server.log_writer = self.log_writer + def log_writer(self, line): + self.logged_line = line + self.handler = MyHttpHandler() + self.handler.log_message("%s %d", 'ABCDEFG', 12345) + self.assertEqual(self.handler.logged_line, + "[b10-stats-httpd] dummyhost - - " + + "[DD/MM/YYYY HH:MI:SS] ABCDEFG 12345\n") class TestHttpServerError(unittest.TestCase): """Tests for HttpServerError exception""" - def test_raises(self): try: raise stats_httpd.HttpServerError('Nothing') @@ -162,17 +276,16 @@ class TestHttpServerError(unittest.TestCase): class TestHttpServer(unittest.TestCase): """Tests for HttpServer class""" + def setUp(self): + self.base = BaseModules() + + def tearDown(self): + self.base.shutdown() def test_httpserver(self): - self.stats_httpd = stats_httpd.StatsHttpd() - for ht in self.stats_httpd.httpd: - self.assertTrue(ht.server_address in 
self.stats_httpd.http_addrs) - self.assertEqual(ht.xml_handler, self.stats_httpd.xml_handler) - self.assertEqual(ht.xsd_handler, self.stats_httpd.xsd_handler) - self.assertEqual(ht.xsl_handler, self.stats_httpd.xsl_handler) - self.assertEqual(ht.log_writer, self.stats_httpd.write_log) - self.assertTrue(isinstance(ht._handler, stats_httpd.HttpHandler)) - self.assertTrue(isinstance(ht.socket, fake_socket.socket)) + statshttpd = stats_httpd.StatsHttpd() + self.assertEqual(type(statshttpd.httpd), list) + self.assertEqual(len(statshttpd.httpd), 0) class TestStatsHttpdError(unittest.TestCase): """Tests for StatsHttpdError exception""" @@ -187,130 +300,176 @@ class TestStatsHttpd(unittest.TestCase): """Tests for StatsHttpd class""" def setUp(self): - fake_socket._CLOSED = False - fake_socket.has_ipv6 = True + self.base = BaseModules() + self.stats_server = ThreadingServerManager(MyStats) + self.stats = self.stats_server.server + self.stats_server.run() self.stats_httpd = stats_httpd.StatsHttpd() + # checking IPv6 enabled on this platform + self.ipv6_enabled = True + try: + sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) + sock.bind(("::1",8000)) + sock.close() + except socket.error: + self.ipv6_enabled = False + def tearDown(self): self.stats_httpd.stop() + self.stats_server.shutdown() + self.base.shutdown() def test_init(self): - self.assertFalse(self.stats_httpd.mccs.get_socket()._closed) - self.assertEqual(self.stats_httpd.mccs.get_socket().fileno(), - id(self.stats_httpd.mccs.get_socket())) - for ht in self.stats_httpd.httpd: - self.assertFalse(ht.socket._closed) - self.assertEqual(ht.socket.fileno(), id(ht.socket)) - fake_socket._CLOSED = True - self.assertRaises(isc.cc.session.SessionError, - stats_httpd.StatsHttpd) - fake_socket._CLOSED = False + self.assertEqual(self.stats_httpd.running, False) + self.assertEqual(self.stats_httpd.poll_intval, 0.5) + self.assertEqual(self.stats_httpd.httpd, []) + self.assertEqual(type(self.stats_httpd.mccs), 
isc.config.ModuleCCSession) + self.assertEqual(type(self.stats_httpd.cc_session), isc.cc.Session) + self.assertEqual(len(self.stats_httpd.config), 2) + self.assertTrue('listen_on' in self.stats_httpd.config) + self.assertEqual(len(self.stats_httpd.config['listen_on']), 1) + self.assertTrue('address' in self.stats_httpd.config['listen_on'][0]) + self.assertTrue('port' in self.stats_httpd.config['listen_on'][0]) + self.assertTrue(('127.0.0.1', 8000) in set(self.stats_httpd.http_addrs)) + + def test_openclose_mccs(self): + statshttpd = stats_httpd.StatsHttpd() + statshttpd.close_mccs() + self.assertEqual(statshttpd.mccs, None) + statshttpd.open_mccs() + self.assertIsNotNone(statshttpd.mccs) + statshttpd.mccs = None + self.assertEqual(statshttpd.mccs, None) + self.assertEqual(statshttpd.close_mccs(), None) def test_mccs(self): - self.stats_httpd.open_mccs() + self.assertIsNotNone(self.stats_httpd.mccs.get_socket()) self.assertTrue( - isinstance(self.stats_httpd.mccs.get_socket(), fake_socket.socket)) + isinstance(self.stats_httpd.mccs.get_socket(), socket.socket)) self.assertTrue( isinstance(self.stats_httpd.cc_session, isc.cc.session.Session)) - self.assertTrue( - isinstance(self.stats_httpd.stats_module_spec, isc.config.ModuleSpec)) - for cfg in self.stats_httpd.stats_config_spec: - self.assertTrue('item_name' in cfg) - self.assertTrue(cfg['item_name'] in DUMMY_DATA) - self.assertTrue(len(self.stats_httpd.stats_config_spec), len(DUMMY_DATA)) - - def test_load_config(self): - self.stats_httpd.load_config() - self.assertTrue(('127.0.0.1', 8000) in set(self.stats_httpd.http_addrs)) + self.statistics_spec = self.stats_httpd.get_stats_spec() + for mod in DUMMY_DATA: + self.assertTrue(mod in self.statistics_spec) + for cfg in self.statistics_spec[mod]: + self.assertTrue('item_name' in cfg) + self.assertTrue(cfg['item_name'] in DUMMY_DATA[mod]) + self.assertTrue(len(self.statistics_spec[mod]), len(DUMMY_DATA[mod])) + self.stats_httpd.close_mccs() + 
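The `setUp` above probes for IPv6 support by trying to bind an `AF_INET6` socket to `::1` and treating any `socket.error` as "IPv6 unavailable". That probe, extracted into a reusable helper (port 0 is used here instead of the test's fixed 8000, so the probe cannot collide with a running service):

```python
# Sketch of the IPv6 probe used in setUp: bind an AF_INET6 socket to ::1
# and treat failure as "IPv6 not available on this platform".
import socket

def ipv6_enabled():
    try:
        sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        sock.bind(("::1", 0))   # port 0: let the kernel pick a free port
        sock.close()
        return True
    except OSError:             # socket.error is an alias of OSError
        return False

result = ipv6_enabled()
```

Note that `socket.has_ipv6` only says the interpreter was *built* with IPv6 support; actually binding to `::1`, as here and in the test, also verifies the running host has it configured.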
self.assertIsNone(self.stats_httpd.mccs) def test_httpd(self): # dual stack (addresses is ipv4 and ipv6) - fake_socket.has_ipv6 = True - self.assertTrue(('127.0.0.1', 8000) in set(self.stats_httpd.http_addrs)) - self.stats_httpd.http_addrs = [ ('::1', 8000), ('127.0.0.1', 8000) ] - self.assertTrue( - stats_httpd.HttpServer.address_family in set([fake_socket.AF_INET, fake_socket.AF_INET6])) - self.stats_httpd.open_httpd() - for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, fake_socket.socket)) - self.stats_httpd.close_httpd() + if self.ipv6_enabled: + self.assertTrue(('127.0.0.1', 8000) in set(self.stats_httpd.http_addrs)) + self.stats_httpd.http_addrs = [ ('::1', 8000), ('127.0.0.1', 8000) ] + self.assertTrue( + stats_httpd.HttpServer.address_family in set([socket.AF_INET, socket.AF_INET6])) + self.stats_httpd.open_httpd() + for ht in self.stats_httpd.httpd: + self.assertTrue(isinstance(ht.socket, socket.socket)) + self.stats_httpd.close_httpd() # dual stack (address is ipv6) - fake_socket.has_ipv6 = True - self.stats_httpd.http_addrs = [ ('::1', 8000) ] - self.stats_httpd.open_httpd() - for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, fake_socket.socket)) - self.stats_httpd.close_httpd() - + if self.ipv6_enabled: + self.stats_httpd.http_addrs = [ ('::1', 8000) ] + self.stats_httpd.open_httpd() + for ht in self.stats_httpd.httpd: + self.assertTrue(isinstance(ht.socket, socket.socket)) + self.stats_httpd.close_httpd() + # dual stack (address is ipv4) - fake_socket.has_ipv6 = True - self.stats_httpd.http_addrs = [ ('127.0.0.1', 8000) ] - self.stats_httpd.open_httpd() - for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, fake_socket.socket)) - self.stats_httpd.close_httpd() + if self.ipv6_enabled: + self.stats_httpd.http_addrs = [ ('127.0.0.1', 8000) ] + self.stats_httpd.open_httpd() + for ht in self.stats_httpd.httpd: + self.assertTrue(isinstance(ht.socket, socket.socket)) + 
self.stats_httpd.close_httpd() # only-ipv4 single stack - fake_socket.has_ipv6 = False - self.stats_httpd.http_addrs = [ ('127.0.0.1', 8000) ] - self.stats_httpd.open_httpd() - for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, fake_socket.socket)) - self.stats_httpd.close_httpd() - + if not self.ipv6_enabled: + self.stats_httpd.http_addrs = [ ('127.0.0.1', 8000) ] + self.stats_httpd.open_httpd() + for ht in self.stats_httpd.httpd: + self.assertTrue(isinstance(ht.socket, socket.socket)) + self.stats_httpd.close_httpd() + # only-ipv4 single stack (force set ipv6 ) - fake_socket.has_ipv6 = False - self.stats_httpd.http_addrs = [ ('::1', 8000) ] - self.assertRaises(stats_httpd.HttpServerError, - self.stats_httpd.open_httpd) - + if not self.ipv6_enabled: + self.stats_httpd.http_addrs = [ ('::1', 8000) ] + self.assertRaises(stats_httpd.HttpServerError, + self.stats_httpd.open_httpd) + # hostname self.stats_httpd.http_addrs = [ ('localhost', 8000) ] self.stats_httpd.open_httpd() for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, fake_socket.socket)) + self.assertTrue(isinstance(ht.socket, socket.socket)) self.stats_httpd.close_httpd() - + self.stats_httpd.http_addrs = [ ('my.host.domain', 8000) ] - self.stats_httpd.open_httpd() - for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, fake_socket.socket)) + self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) + self.assertEqual(type(self.stats_httpd.httpd), list) + self.assertEqual(len(self.stats_httpd.httpd), 0) self.stats_httpd.close_httpd() # over flow of port number self.stats_httpd.http_addrs = [ ('', 80000) ] self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) + # negative self.stats_httpd.http_addrs = [ ('', -8000) ] self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) + # alphabet self.stats_httpd.http_addrs = [ ('', 'ABCDE') ] self.assertRaises(stats_httpd.HttpServerError, 
self.stats_httpd.open_httpd) - def test_start(self): - self.stats_httpd.cc_session.group_sendmsg( - { 'command': [ "shutdown" ] }, "StatsHttpd") - self.stats_httpd.start() - self.stats_httpd = stats_httpd.StatsHttpd() - self.assertRaises( - fake_select.error, self.stats_httpd.start) + # Address already in use + self.statshttpd_server = ThreadingServerManager(MyStatsHttpd) + self.statshttpd_server.server.load_config({'listen_on' : [{ 'address': '127.0.0.1', 'port' : 65454 }]}) + self.statshttpd_server.run() + time.sleep(TIMEOUT_SEC) + self.stats_httpd.load_config({'listen_on' : [{ 'address': '127.0.0.1', 'port' : 65454 }]}) + self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) + self.statshttpd_server.shutdown() - def test_stop(self): - # success case - fake_socket._CLOSED = False - self.stats_httpd.stop() + def test_running(self): self.assertFalse(self.stats_httpd.running) - self.assertIsNone(self.stats_httpd.mccs) - for ht in self.stats_httpd.httpd: - self.assertTrue(ht.socket._closed) - self.assertTrue(self.stats_httpd.cc_session._socket._closed) + self.statshttpd_server = ThreadingServerManager(MyStatsHttpd) + self.stats_httpd = self.statshttpd_server.server + self.stats_httpd.load_config({'listen_on' : [{ 'address': '127.0.0.1', 'port' : 65455 }]}) + self.statshttpd_server.run() + time.sleep(TIMEOUT_SEC*2) + self.assertTrue(self.stats_httpd.running) + self.statshttpd_server.shutdown() + self.assertFalse(self.stats_httpd.running) + # failure case - self.stats_httpd.cc_session._socket._closed = False - self.stats_httpd.open_mccs() - self.stats_httpd.cc_session._socket._closed = True - self.stats_httpd.stop() # No excetion raises - self.stats_httpd.cc_session._socket._closed = False + self.stats_httpd = stats_httpd.StatsHttpd() + self.stats_httpd.cc_session.close() + self.assertRaises( + isc.cc.session.SessionError, self.stats_httpd.start) + + def test_select_failure(self): + def raise_select_except(*args): + raise select.error('dummy 
error') + def raise_select_except_with_errno(*args): + raise select.error(errno.EINTR) + (address, port) = ('127.0.0.1', 65456) + stats_httpd.select.select = raise_select_except + statshttpd = stats_httpd.StatsHttpd() + statshttpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) + self.assertRaises(select.error, statshttpd.start) + statshttpd.stop() + stats_httpd.select.select = raise_select_except_with_errno + statshttpd_server = ThreadingServerManager(MyStatsHttpd) + statshttpd = statshttpd_server.server + statshttpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) + statshttpd_server.run() + time.sleep(TIMEOUT_SEC*2) + statshttpd_server.shutdown() def test_open_template(self): # successful conditions @@ -363,38 +522,40 @@ class TestStatsHttpd(unittest.TestCase): self.assertEqual( self.stats_httpd.config_handler(dict(_UNKNOWN_KEY_=None)), isc.config.ccsession.create_answer( - 1, "Unknown known config: _UNKNOWN_KEY_")) + 1, "Unknown known config: _UNKNOWN_KEY_")) + self.assertEqual( self.stats_httpd.config_handler( - dict(listen_on=[dict(address="::2",port=8000)])), + dict(listen_on=[dict(address="127.0.0.2",port=8000)])), isc.config.ccsession.create_answer(0)) self.assertTrue("listen_on" in self.stats_httpd.config) for addr in self.stats_httpd.config["listen_on"]: self.assertTrue("address" in addr) self.assertTrue("port" in addr) - self.assertTrue(addr["address"] == "::2") + self.assertTrue(addr["address"] == "127.0.0.2") self.assertTrue(addr["port"] == 8000) - self.assertEqual( - self.stats_httpd.config_handler( - dict(listen_on=[dict(address="::1",port=80)])), - isc.config.ccsession.create_answer(0)) - self.assertTrue("listen_on" in self.stats_httpd.config) - for addr in self.stats_httpd.config["listen_on"]: - self.assertTrue("address" in addr) - self.assertTrue("port" in addr) - self.assertTrue(addr["address"] == "::1") - self.assertTrue(addr["port"] == 80) + if self.ipv6_enabled: + self.assertEqual( + 
self.stats_httpd.config_handler( + dict(listen_on=[dict(address="::1",port=8000)])), + isc.config.ccsession.create_answer(0)) + self.assertTrue("listen_on" in self.stats_httpd.config) + for addr in self.stats_httpd.config["listen_on"]: + self.assertTrue("address" in addr) + self.assertTrue("port" in addr) + self.assertTrue(addr["address"] == "::1") + self.assertTrue(addr["port"] == 8000) self.assertEqual( self.stats_httpd.config_handler( - dict(listen_on=[dict(address="1.2.3.4",port=54321)])), + dict(listen_on=[dict(address="127.0.0.1",port=54321)])), isc.config.ccsession.create_answer(0)) self.assertTrue("listen_on" in self.stats_httpd.config) for addr in self.stats_httpd.config["listen_on"]: self.assertTrue("address" in addr) self.assertTrue("port" in addr) - self.assertTrue(addr["address"] == "1.2.3.4") + self.assertTrue(addr["address"] == "127.0.0.1") self.assertTrue(addr["port"] == 54321) (ret, arg) = isc.config.ccsession.parse_answer( self.stats_httpd.config_handler( @@ -500,8 +661,6 @@ class TestStatsHttpd(unittest.TestCase): imp.reload(stats_httpd) os.environ["B10_FROM_SOURCE"] = tmppath imp.reload(stats_httpd) - stats_httpd.socket = fake_socket - stats_httpd.select = fake_select if __name__ == "__main__": unittest.main() diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index a42c81d136..b013c7a8bc 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -13,632 +13,496 @@ # NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION # WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
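The `config_handler` tests above accept well-formed `listen_on` entries and reject malformed ports (80000, -8000, 'ABCDE'). The validation they imply can be sketched as a standalone checker; this is a hypothetical reimplementation for illustration, not the `isc.config` answer API:

```python
# Sketch of the listen_on validation the config_handler tests exercise:
# each entry needs an address and a port in the valid TCP range.
# Returns (code, message) tuples, loosely mirroring create_answer().
def check_listen_on(config):
    if "listen_on" not in config:
        return (1, "listen_on is required")
    for addr in config["listen_on"]:
        if "address" not in addr or "port" not in addr:
            return (1, "address and port are required")
        port = addr["port"]
        if not isinstance(port, int) or not 0 < port < 65536:
            return (1, "invalid port: %r" % (port,))
    return (0, None)

ok = check_listen_on({"listen_on": [{"address": "127.0.0.2", "port": 8000}]})
overflow = check_listen_on({"listen_on": [{"address": "", "port": 80000}]})
alpha = check_listen_on({"listen_on": [{"address": "", "port": "ABCDE"}]})
```

In the real tests these bad values surface as `HttpServerError` from `open_httpd` rather than from a validator, but the accepted/rejected sets are the same.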
-# -# Tests for the stats module -# -import os -import sys -import time import unittest +import os +import threading +import io +import time import imp -from isc.cc.session import Session, SessionError -from isc.config.ccsession import ModuleCCSession, ModuleCCSessionError -from fake_time import time, strftime, gmtime -import stats -stats.time = time -stats.strftime = strftime -stats.gmtime = gmtime -from stats import SessionSubject, CCSessionListener, get_timestamp, get_datetime -from fake_time import _TEST_TIME_SECS, _TEST_TIME_STRF -if "B10_FROM_SOURCE" in os.environ: - TEST_SPECFILE_LOCATION = os.environ["B10_FROM_SOURCE"] +\ - "/src/bin/stats/tests/testdata/stats_test.spec" -else: - TEST_SPECFILE_LOCATION = "./testdata/stats_test.spec" +import stats +import isc.cc.session +from test_utils import BaseModules, ThreadingServerManager, MyStats, send_command, TIMEOUT_SEC + +class TestUtilties(unittest.TestCase): + items = [ + { 'item_name': 'test_int1', 'item_type': 'integer', 'item_default': 12345 }, + { 'item_name': 'test_real1', 'item_type': 'real', 'item_default': 12345.6789 }, + { 'item_name': 'test_bool1', 'item_type': 'boolean', 'item_default': True }, + { 'item_name': 'test_str1', 'item_type': 'string', 'item_default': 'ABCD' }, + { 'item_name': 'test_list1', 'item_type': 'list', 'item_default': [1,2,3], + 'list_item_spec' : [ { 'item_name': 'one', 'item_type': 'integer' }, + { 'item_name': 'two', 'item_type': 'integer' }, + { 'item_name': 'three', 'item_type': 'integer' } ] }, + { 'item_name': 'test_map1', 'item_type': 'map', 'item_default': {'a':1,'b':2,'c':3}, + 'map_item_spec' : [ { 'item_name': 'a', 'item_type': 'integer'}, + { 'item_name': 'b', 'item_type': 'integer'}, + { 'item_name': 'c', 'item_type': 'integer'} ] }, + { 'item_name': 'test_int2', 'item_type': 'integer' }, + { 'item_name': 'test_real2', 'item_type': 'real' }, + { 'item_name': 'test_bool2', 'item_type': 'boolean' }, + { 'item_name': 'test_str2', 'item_type': 'string' }, + { 
'item_name': 'test_list2', 'item_type': 'list', + 'list_item_spec' : [ { 'item_name': 'one', 'item_type': 'integer' }, + { 'item_name': 'two', 'item_type': 'integer' }, + { 'item_name': 'three', 'item_type': 'integer' } ] }, + { 'item_name': 'test_map2', 'item_type': 'map', + 'map_item_spec' : [ { 'item_name': 'A', 'item_type': 'integer'}, + { 'item_name': 'B', 'item_type': 'integer'}, + { 'item_name': 'C', 'item_type': 'integer'} ] }, + { 'item_name': 'test_none', 'item_type': 'none' } + ] + + def test_parse_spec(self): + self.assertEqual( + stats.parse_spec(self.items), { + 'test_int1' : 12345 , + 'test_real1' : 12345.6789 , + 'test_bool1' : True , + 'test_str1' : 'ABCD' , + 'test_list1' : [1,2,3] , + 'test_map1' : {'a':1,'b':2,'c':3}, + 'test_int2' : 0 , + 'test_real2' : 0.0, + 'test_bool2' : False, + 'test_str2' : "", + 'test_list2' : [0,0,0], + 'test_map2' : { 'A' : 0, 'B' : 0, 'C' : 0 }, + 'test_none' : None }) + self.assertRaises(TypeError, stats.parse_spec, None) + self.assertRaises(KeyError, stats.parse_spec, [{'item_name':'Foo'}]) + + def test_get_timestamp(self): + self.assertEqual(stats.get_timestamp(), 1308730448.965706) + + def test_get_datetime(self): + stats.time = lambda : 1308730448.965706 + stats.gmtime = lambda : (2011, 6, 22, 8, 14, 8, 2, 173, 0) + self.assertEqual(stats.get_datetime(), '2011-06-22T08:14:08Z') + self.assertNotEqual(stats.get_datetime( + (2011, 6, 22, 8, 23, 40, 2, 173, 0)), '2011-06-22T08:14:08Z') + +class TestCallback(unittest.TestCase): + def setUp(self): + self.dummy_func = lambda *x, **y : (x, y) + self.dummy_args = (1,2,3) + self.dummy_kwargs = {'a':1,'b':2,'c':3} + self.cback1 = stats.Callback( + command=self.dummy_func, + args=self.dummy_args, + kwargs=self.dummy_kwargs + ) + self.cback2 = stats.Callback( + args=self.dummy_args, + kwargs=self.dummy_kwargs + ) + self.cback3 = stats.Callback( + command=self.dummy_func, + kwargs=self.dummy_kwargs + ) + self.cback4 = stats.Callback( + command=self.dummy_func, + 
args=self.dummy_args + ) + + def tearDown(self): + pass + + def test_init(self): + self.assertEqual((self.cback1.command, self.cback1.args, self.cback1.kwargs), + (self.dummy_func, self.dummy_args, self.dummy_kwargs)) + self.assertEqual((self.cback2.command, self.cback2.args, self.cback2.kwargs), + (None, self.dummy_args, self.dummy_kwargs)) + self.assertEqual((self.cback3.command, self.cback3.args, self.cback3.kwargs), + (self.dummy_func, (), self.dummy_kwargs)) + self.assertEqual((self.cback4.command, self.cback4.args, self.cback4.kwargs), + (self.dummy_func, self.dummy_args, {})) + + def test_call(self): + self.assertEqual(self.cback1(), (self.dummy_args, self.dummy_kwargs)) + self.assertEqual(self.cback1(100, 200), ((100, 200), self.dummy_kwargs)) + self.assertEqual(self.cback1(a=100, b=200), (self.dummy_args, {'a':100, 'b':200})) + self.assertEqual(self.cback2(), None) + self.assertEqual(self.cback3(), ((), self.dummy_kwargs)) + self.assertEqual(self.cback3(100, 200), ((100, 200), self.dummy_kwargs)) + self.assertEqual(self.cback3(a=100, b=200), ((), {'a':100, 'b':200})) + self.assertEqual(self.cback4(), (self.dummy_args, {})) + self.assertEqual(self.cback4(100, 200), ((100, 200), {})) + self.assertEqual(self.cback4(a=100, b=200), (self.dummy_args, {'a':100, 'b':200})) class TestStats(unittest.TestCase): - def setUp(self): - self.session = Session() - self.subject = SessionSubject(session=self.session) - self.listener = CCSessionListener(self.subject) - self.stats_spec = self.listener.cc_session.get_module_spec().get_config_spec() - self.module_name = self.listener.cc_session.get_module_spec().get_module_name() - self.stats_data = { - 'report_time' : get_datetime(), - 'bind10.boot_time' : "1970-01-01T00:00:00Z", - 'stats.timestamp' : get_timestamp(), - 'stats.lname' : self.session.lname, - 'auth.queries.tcp': 0, - 'auth.queries.udp': 0, - "stats.boot_time": get_datetime(), - "stats.start_time": get_datetime(), - "stats.last_update_time": get_datetime() - } - # 
check starting - self.assertFalse(self.subject.running) - self.subject.start() - self.assertTrue(self.subject.running) - self.assertEqual(len(self.session.message_queue), 0) - self.assertEqual(self.module_name, 'Stats') + self.base = BaseModules() + self.stats = stats.Stats() + self.assertTrue("B10_FROM_SOURCE" in os.environ) + self.assertEqual(stats.SPECFILE_LOCATION, \ + os.environ["B10_FROM_SOURCE"] + os.sep + \ + "src" + os.sep + "bin" + os.sep + "stats" + \ + os.sep + "stats.spec") def tearDown(self): - # check closing - self.subject.stop() - self.assertFalse(self.subject.running) - self.subject.detach(self.listener) - self.listener.stop() - self.session.close() + self.base.shutdown() - def test_local_func(self): - """ - Test for local function - - """ - # test for result_ok - self.assertEqual(type(result_ok()), dict) - self.assertEqual(result_ok(), {'result': [0]}) - self.assertEqual(result_ok(1), {'result': [1]}) - self.assertEqual(result_ok(0,'OK'), {'result': [0, 'OK']}) - self.assertEqual(result_ok(1,'Not good'), {'result': [1, 'Not good']}) - self.assertEqual(result_ok(None,"It's None"), {'result': [None, "It's None"]}) - self.assertNotEqual(result_ok(), {'RESULT': [0]}) + def test_init(self): + self.assertEqual(self.stats.module_name, 'Stats') + self.assertFalse(self.stats.running) + self.assertTrue('command_show' in self.stats.callbacks) + self.assertTrue('command_status' in self.stats.callbacks) + self.assertTrue('command_shutdown' in self.stats.callbacks) + self.assertTrue('command_show' in self.stats.callbacks) + self.assertTrue('command_showschema' in self.stats.callbacks) + self.assertTrue('command_set' in self.stats.callbacks) - # test for get_timestamp - self.assertEqual(get_timestamp(), _TEST_TIME_SECS) + def test_init_undefcmd(self): + spec_str = """\ +{ + "module_spec": { + "module_name": "Stats", + "module_description": "Stats daemon", + "config_data": [], + "commands": [ + { + "command_name": "_undef_command_", + "command_description": "a 
undefined command in stats", + "command_args": [] + } + ], + "statistics": [] + } +} +""" + orig_spec_location = stats.SPECFILE_LOCATION + stats.SPECFILE_LOCATION = io.StringIO(spec_str) + self.assertRaises(stats.StatsError, stats.Stats) + stats.SPECFILE_LOCATION = orig_spec_location - # test for get_datetime - self.assertEqual(get_datetime(), _TEST_TIME_STRF) + def test_start(self): + statsserver = ThreadingServerManager(MyStats) + stats = statsserver.server + self.assertFalse(stats.running) + statsserver.run() + time.sleep(TIMEOUT_SEC) + self.assertTrue(stats.running) + statsserver.shutdown() + self.assertFalse(stats.running) - def test_show_command(self): - """ - Test for show command - - """ - # test show command without arg - self.session.group_sendmsg({"command": [ "show", None ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - # ignore under 0.9 seconds - self.assertEqual(result_ok(0, self.stats_data), result_data) - self.assertEqual(len(self.session.message_queue), 0) + def test_start_with_err(self): + statsd = stats.Stats() + statsd.update_statistics_data = lambda x,**y: [1] + self.assertRaises(stats.StatsError, statsd.start) - # test show command with arg - self.session.group_sendmsg({"command": [ "show", {"stats_item_name": "stats.lname"}]}, "Stats") - self.assertEqual(len(self.subject.session.message_queue), 1) - self.subject.check() - result_data = self.subject.session.get_message("Stats", None) - self.assertEqual(result_ok(0, {'stats.lname': self.stats_data['stats.lname']}), - result_data) - self.assertEqual(len(self.subject.session.message_queue), 0) + def test_config_handler(self): + self.assertEqual(self.stats.config_handler({'foo':'bar'}), + isc.config.create_answer(0)) - # test show command with arg which has wrong name - self.session.group_sendmsg({"command": [ "show", {"stats_item_name": "stats.dummy"}]}, "Stats") - 
self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - # ignore under 0.9 seconds - self.assertEqual(result_ok(0, self.stats_data), result_data) - self.assertEqual(len(self.session.message_queue), 0) - - def test_set_command(self): - """ - Test for set command - - """ - # test set command - self.stats_data['auth.queries.udp'] = 54321 - self.assertEqual(self.stats_data['auth.queries.udp'], 54321) - self.assertEqual(self.stats_data['auth.queries.tcp'], 0) - self.session.group_sendmsg({ "command": [ - "set", { - 'stats_data': {'auth.queries.udp': 54321 } - } ] }, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # test show command - self.session.group_sendmsg({"command": [ "show", None ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, self.stats_data), result_data) - self.assertEqual(len(self.session.message_queue), 0) - - # test set command 2 - self.stats_data['auth.queries.udp'] = 0 - self.assertEqual(self.stats_data['auth.queries.udp'], 0) - self.assertEqual(self.stats_data['auth.queries.tcp'], 0) - self.session.group_sendmsg({ "command": [ "set", {'stats_data': {'auth.queries.udp': 0}} ]}, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # test show command 2 - self.session.group_sendmsg({"command": [ "show", None ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, self.stats_data), 
result_data) - self.assertEqual(len(self.session.message_queue), 0) - - # test set command 3 - self.stats_data['auth.queries.tcp'] = 54322 - self.assertEqual(self.stats_data['auth.queries.udp'], 0) - self.assertEqual(self.stats_data['auth.queries.tcp'], 54322) - self.session.group_sendmsg({ "command": [ - "set", { - 'stats_data': {'auth.queries.tcp': 54322 } - } ] }, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # test show command 3 - self.session.group_sendmsg({"command": [ "show", None ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, self.stats_data), result_data) - self.assertEqual(len(self.session.message_queue), 0) - - def test_remove_command(self): - """ - Test for remove command - - """ - self.session.group_sendmsg({"command": - [ "remove", {"stats_item_name": 'bind10.boot_time' }]}, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - self.assertEqual(self.stats_data.pop('bind10.boot_time'), "1970-01-01T00:00:00Z") - self.assertFalse('bind10.boot_time' in self.stats_data) - - # test show command with arg - self.session.group_sendmsg({"command": - [ "show", {"stats_item_name": 'bind10.boot_time'}]}, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertFalse('bind10.boot_time' in result_data['result'][1]) - self.assertEqual(result_ok(0, self.stats_data), result_data) - self.assertEqual(len(self.session.message_queue), 0) - - def test_reset_command(self): - """ - Test for reset command - - 
""" - self.session.group_sendmsg({"command": [ "reset" ] }, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # test show command - self.session.group_sendmsg({"command": [ "show" ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, self.stats_data), result_data) - self.assertEqual(len(self.session.message_queue), 0) - - def test_status_command(self): - """ - Test for status command - - """ - self.session.group_sendmsg({"command": [ "status" ] }, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(0, "I'm alive."), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - def test_unknown_command(self): - """ - Test for unknown command - - """ - self.session.group_sendmsg({"command": [ "hoge", None ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(1, "Unknown command: 'hoge'"), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - def test_shutdown_command(self): - """ - Test for shutdown command - - """ - self.session.group_sendmsg({"command": [ "shutdown", None ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.assertTrue(self.subject.running) - self.subject.check() - self.assertFalse(self.subject.running) - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - - def test_some_commands(self): - """ - Test for some commands in a row - - """ - # test set command - self.stats_data['bind10.boot_time'] = '2010-08-02T14:47:56Z' - 
self.assertEqual(self.stats_data['bind10.boot_time'], '2010-08-02T14:47:56Z') - self.session.group_sendmsg({ "command": [ - "set", { - 'stats_data': {'bind10.boot_time': '2010-08-02T14:47:56Z' } - }]}, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # check its value - self.session.group_sendmsg({ "command": [ - "show", { 'stats_item_name': 'bind10.boot_time' } - ] }, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, {'bind10.boot_time': '2010-08-02T14:47:56Z'}), - result_data) - self.assertEqual(result_ok(0, {'bind10.boot_time': self.stats_data['bind10.boot_time']}), - result_data) - self.assertEqual(len(self.session.message_queue), 0) - - # test set command 2nd - self.stats_data['auth.queries.udp'] = 98765 - self.assertEqual(self.stats_data['auth.queries.udp'], 98765) - self.session.group_sendmsg({ "command": [ - "set", { 'stats_data': { - 'auth.queries.udp': - self.stats_data['auth.queries.udp'] - } } - ] }, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # check its value - self.session.group_sendmsg({"command": [ - "show", {'stats_item_name': 'auth.queries.udp'} - ] }, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, {'auth.queries.udp': 98765}), - result_data) - self.assertEqual(result_ok(0, {'auth.queries.udp': self.stats_data['auth.queries.udp']}), - result_data) - self.assertEqual(len(self.session.message_queue), 0) - - # test set command 3 - 
self.stats_data['auth.queries.tcp'] = 4321 - self.session.group_sendmsg({"command": [ - "set", - {'stats_data': {'auth.queries.tcp': 4321 }} ]}, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # check value - self.session.group_sendmsg({"command": [ "show", {'stats_item_name': 'auth.queries.tcp'} ]}, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, {'auth.queries.tcp': 4321}), - result_data) - self.assertEqual(result_ok(0, {'auth.queries.tcp': self.stats_data['auth.queries.tcp']}), - result_data) - self.assertEqual(len(self.session.message_queue), 0) - - self.session.group_sendmsg({"command": [ "show", {'stats_item_name': 'auth.queries.udp'} ]}, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, {'auth.queries.udp': 98765}), - result_data) - self.assertEqual(result_ok(0, {'auth.queries.udp': self.stats_data['auth.queries.udp']}), - result_data) - self.assertEqual(len(self.session.message_queue), 0) - - # test set command 4 - self.stats_data['auth.queries.tcp'] = 67890 - self.session.group_sendmsg({"command": [ - "set", {'stats_data': {'auth.queries.tcp': 67890 }} ]}, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # test show command for all values - self.session.group_sendmsg({"command": [ "show", None ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - 
self.assertEqual(result_ok(0, self.stats_data), result_data) - self.assertEqual(len(self.session.message_queue), 0) - - def test_some_commands2(self): - """ - Test for some commands in a row using list-type value - - """ - self.stats_data['listtype'] = [1, 2, 3] - self.assertEqual(self.stats_data['listtype'], [1, 2, 3]) - self.session.group_sendmsg({ "command": [ - "set", {'stats_data': {'listtype': [1, 2, 3] }} - ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # check its value - self.session.group_sendmsg({ "command": [ - "show", { 'stats_item_name': 'listtype'} - ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, {'listtype': [1, 2, 3]}), - result_data) - self.assertEqual(result_ok(0, {'listtype': self.stats_data['listtype']}), - result_data) - self.assertEqual(len(self.session.message_queue), 0) - - # test set list-type value - self.assertEqual(self.stats_data['listtype'], [1, 2, 3]) - self.session.group_sendmsg({"command": [ - "set", {'stats_data': {'listtype': [3, 2, 1, 0] }} - ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # check its value - self.session.group_sendmsg({ "command": [ - "show", { 'stats_item_name': 'listtype' } - ] }, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, {'listtype': [3, 2, 1, 0]}), - result_data) - self.assertEqual(len(self.session.message_queue), 0) - - def test_some_commands3(self): - """ - Test for some commands in a row using 
dictionary-type value - - """ - self.stats_data['dicttype'] = {"a": 1, "b": 2, "c": 3} - self.assertEqual(self.stats_data['dicttype'], {"a": 1, "b": 2, "c": 3}) - self.session.group_sendmsg({ "command": [ - "set", { - 'stats_data': {'dicttype': {"a": 1, "b": 2, "c": 3} } - }]}, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # check its value - self.session.group_sendmsg({ "command": [ "show", { 'stats_item_name': 'dicttype' } ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, {'dicttype': {"a": 1, "b": 2, "c": 3}}), - result_data) - self.assertEqual(result_ok(0, {'dicttype': self.stats_data['dicttype']}), - result_data) - self.assertEqual(len(self.session.message_queue), 0) - - # test set list-type value - self.assertEqual(self.stats_data['dicttype'], {"a": 1, "b": 2, "c": 3}) - self.session.group_sendmsg({"command": [ - "set", {'stats_data': {'dicttype': {"a": 3, "b": 2, "c": 1, "d": 0} }} ]}, - "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - self.assertEqual(len(self.session.message_queue), 0) - - # check its value - self.session.group_sendmsg({ "command": [ "show", { 'stats_item_name': 'dicttype' }]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - result_data = self.session.get_message("Stats", None) - self.assertEqual(result_ok(0, {'dicttype': {"a": 3, "b": 2, "c": 1, "d": 0} }), - result_data) - self.assertEqual(len(self.session.message_queue), 0) - - def test_config_update(self): - """ - Test for config update - - """ - # test show command without arg - self.session.group_sendmsg({"command": [ 
"config_update", {"x-version":999} ]}, "Stats") - self.assertEqual(len(self.session.message_queue), 1) - self.subject.check() - self.assertEqual(result_ok(), - self.session.get_message("Stats", None)) - - def test_for_boss(self): - last_queue = self.session.old_message_queue.pop() + def test_command_handler(self): + statsserver = ThreadingServerManager(MyStats) + statsserver.run() + time.sleep(TIMEOUT_SEC*4) + self.base.boss.server._started.wait() self.assertEqual( - last_queue.msg, {'command': ['sendstats']}) + send_command( + 'show', 'Stats', + params={ 'owner' : 'Boss', + 'name' : 'boot_time' }), + (0, '2011-06-22T08:14:08Z')) self.assertEqual( - last_queue.env['group'], 'Boss') + send_command( + 'set', 'Stats', + params={ 'owner' : 'Boss', + 'data' : { 'boot_time' : '2012-06-22T18:24:08Z' } }), + (0, None)) + self.assertEqual( + send_command( + 'show', 'Stats', + params={ 'owner' : 'Boss', + 'name' : 'boot_time' }), + (0, '2012-06-22T18:24:08Z')) + self.assertEqual( + send_command('status', 'Stats'), + (0, "Stats is up. 
(PID " + str(os.getpid()) + ")")) -class TestStats2(unittest.TestCase): - - def setUp(self): - self.session = Session() - self.subject = SessionSubject(session=self.session) - self.listener = CCSessionListener(self.subject) - self.module_name = self.listener.cc_session.get_module_spec().get_module_name() - # check starting - self.assertFalse(self.subject.running) - self.subject.start() - self.assertTrue(self.subject.running) - self.assertEqual(len(self.session.message_queue), 0) - self.assertEqual(self.module_name, 'Stats') - - def tearDown(self): - # check closing - self.subject.stop() - self.assertFalse(self.subject.running) - self.subject.detach(self.listener) - self.listener.stop() - - def test_specfile(self): - """ - Test for specfile + (rcode, value) = send_command('show', 'Stats') + self.assertEqual(rcode, 0) + self.assertEqual(len(value), 3) + self.assertTrue('Boss' in value) + self.assertTrue('Stats' in value) + self.assertTrue('Auth' in value) + self.assertEqual(len(value['Stats']), 5) + self.assertEqual(len(value['Boss']), 1) + self.assertTrue('boot_time' in value['Boss']) + self.assertEqual(value['Boss']['boot_time'], '2012-06-22T18:24:08Z') + self.assertTrue('report_time' in value['Stats']) + self.assertTrue('boot_time' in value['Stats']) + self.assertTrue('last_update_time' in value['Stats']) + self.assertTrue('timestamp' in value['Stats']) + self.assertTrue('lname' in value['Stats']) + (rcode, value) = send_command('showschema', 'Stats') + self.assertEqual(rcode, 0) + self.assertEqual(len(value), 3) + self.assertTrue('Boss' in value) + self.assertTrue('Stats' in value) + self.assertTrue('Auth' in value) + self.assertEqual(len(value['Stats']), 5) + self.assertEqual(len(value['Boss']), 1) + for item in value['Boss']: + self.assertTrue(len(item) == 7) + self.assertTrue('item_name' in item) + self.assertTrue('item_type' in item) + self.assertTrue('item_optional' in item) + self.assertTrue('item_default' in item) + self.assertTrue('item_title' in item) + 
self.assertTrue('item_description' in item) + self.assertTrue('item_format' in item) + for item in value['Stats']: + self.assertTrue(len(item) == 6 or len(item) == 7) + self.assertTrue('item_name' in item) + self.assertTrue('item_type' in item) + self.assertTrue('item_optional' in item) + self.assertTrue('item_default' in item) + self.assertTrue('item_title' in item) + self.assertTrue('item_description' in item) + if len(item) == 7: + self.assertTrue('item_format' in item) - """ - if "B10_FROM_SOURCE" in os.environ: - self.assertEqual(stats.SPECFILE_LOCATION, - os.environ["B10_FROM_SOURCE"] + os.sep + \ - "src" + os.sep + "bin" + os.sep + "stats" + \ - os.sep + "stats.spec") - self.assertEqual(stats.SCHEMA_SPECFILE_LOCATION, - os.environ["B10_FROM_SOURCE"] + os.sep + \ - "src" + os.sep + "bin" + os.sep + "stats" + \ - os.sep + "stats-schema.spec") - imp.reload(stats) - # change path of SPECFILE_LOCATION - stats.SPECFILE_LOCATION = TEST_SPECFILE_LOCATION - stats.SCHEMA_SPECFILE_LOCATION = TEST_SPECFILE_LOCATION - self.assertEqual(stats.SPECFILE_LOCATION, TEST_SPECFILE_LOCATION) - self.subject = stats.SessionSubject(session=self.session) - self.session = self.subject.session - self.listener = stats.CCSessionListener(self.subject) + self.assertEqual( + send_command('__UNKNOWN__', 'Stats'), + (1, "Unknown command: '__UNKNOWN__'")) - self.assertEqual(self.listener.stats_spec, []) - self.assertEqual(self.listener.stats_data, {}) + statsserver.shutdown() - self.assertEqual(self.listener.commands_spec, [ - { - "command_name": "status", - "command_description": "identify whether stats module is alive or not", - "command_args": [] - }, - { - "command_name": "the_dummy", - "command_description": "this is for testing", - "command_args": [] - }]) + def test_update_modules(self): + self.assertEqual(len(self.stats.modules), 0) + self.stats.update_modules() + self.assertTrue('Stats' in self.stats.modules) + self.assertTrue('Boss' in self.stats.modules) + self.assertFalse('Dummy' 
in self.stats.modules) + my_statistics_data = stats.parse_spec(self.stats.modules['Stats'].get_statistics_spec()) + self.assertTrue('report_time' in my_statistics_data) + self.assertTrue('boot_time' in my_statistics_data) + self.assertTrue('last_update_time' in my_statistics_data) + self.assertTrue('timestamp' in my_statistics_data) + self.assertTrue('lname' in my_statistics_data) + self.assertEqual(my_statistics_data['report_time'], "1970-01-01T00:00:00Z") + self.assertEqual(my_statistics_data['boot_time'], "1970-01-01T00:00:00Z") + self.assertEqual(my_statistics_data['last_update_time'], "1970-01-01T00:00:00Z") + self.assertEqual(my_statistics_data['timestamp'], 0.0) + self.assertEqual(my_statistics_data['lname'], "") + my_statistics_data = stats.parse_spec(self.stats.modules['Boss'].get_statistics_spec()) + self.assertTrue('boot_time' in my_statistics_data) + self.assertEqual(my_statistics_data['boot_time'], "1970-01-01T00:00:00Z") - def test_func_initialize_data(self): - """ - Test for initialize_data function + def test_get_statistics_data(self): + my_statistics_data = self.stats.get_statistics_data() + self.assertTrue('Stats' in my_statistics_data) + self.assertTrue('Boss' in my_statistics_data) + my_statistics_data = self.stats.get_statistics_data(owner='Stats') + self.assertTrue('report_time' in my_statistics_data) + self.assertTrue('boot_time' in my_statistics_data) + self.assertTrue('last_update_time' in my_statistics_data) + self.assertTrue('timestamp' in my_statistics_data) + self.assertTrue('lname' in my_statistics_data) + self.assertIsNone(self.stats.get_statistics_data(owner='Foo')) + my_statistics_data = self.stats.get_statistics_data(owner='Stats') + self.assertTrue('boot_time' in my_statistics_data) + my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='report_time') + self.assertEqual(my_statistics_data, "1970-01-01T00:00:00Z") + my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='boot_time') + 
self.assertEqual(my_statistics_data, "1970-01-01T00:00:00Z") + my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='last_update_time') + self.assertEqual(my_statistics_data, "1970-01-01T00:00:00Z") + my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='timestamp') + self.assertEqual(my_statistics_data, 0.0) + my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='lname') + self.assertEqual(my_statistics_data, '') + self.assertIsNone(self.stats.get_statistics_data(owner='Stats', name='Bar')) + self.assertIsNone(self.stats.get_statistics_data(owner='Foo', name='Bar')) + self.assertEqual(self.stats.get_statistics_data(name='Bar'), None) + + def test_update_statistics_data(self): + self.stats.update_statistics_data(owner='Stats', lname='foo@bar') + self.assertTrue('Stats' in self.stats.statistics_data) + my_statistics_data = self.stats.statistics_data['Stats'] + self.assertEqual(my_statistics_data['lname'], 'foo@bar') + self.stats.update_statistics_data(owner='Stats', last_update_time='2000-01-01T10:10:10Z') + self.assertTrue('Stats' in self.stats.statistics_data) + my_statistics_data = self.stats.statistics_data['Stats'] + self.assertEqual(my_statistics_data['last_update_time'], '2000-01-01T10:10:10Z') + self.assertEqual(self.stats.update_statistics_data(owner='Stats', lname=0.0), + ['0.0 should be a string']) + self.assertEqual(self.stats.update_statistics_data(owner='Dummy', foo='bar'), + ['unknown module name']) + + def test_command_status(self): + self.assertEqual(self.stats.command_status(), + isc.config.create_answer( + 0, "Stats is up. 
(PID " + str(os.getpid()) + ")")) - """ - # prepare for sample data set - stats_spec = [ - { - "item_name": "none_sample", - "item_type": "null", - "item_default": "None" - }, - { - "item_name": "boolean_sample", - "item_type": "boolean", - "item_default": True - }, - { - "item_name": "string_sample", - "item_type": "string", - "item_default": "A something" - }, - { - "item_name": "int_sample", - "item_type": "integer", - "item_default": 9999999 - }, - { - "item_name": "real_sample", - "item_type": "real", - "item_default": 0.0009 - }, - { - "item_name": "list_sample", - "item_type": "list", - "item_default": [0, 1, 2, 3, 4], - "list_item_spec": [] - }, - { - "item_name": "map_sample", - "item_type": "map", - "item_default": {'name':'value'}, - "map_item_spec": [] - }, - { - "item_name": "other_sample", - "item_type": "__unknown__", - "item_default": "__unknown__" - } - ] - # data for comparison - stats_data = { - 'none_sample': None, - 'boolean_sample': True, - 'string_sample': 'A something', - 'int_sample': 9999999, - 'real_sample': 0.0009, - 'list_sample': [0, 1, 2, 3, 4], - 'map_sample': {'name':'value'}, - 'other_sample': '__unknown__' - } - self.assertEqual(self.listener.initialize_data(stats_spec), stats_data) + def test_command_shutdown(self): + self.stats.running = True + self.assertEqual(self.stats.command_shutdown(), + isc.config.create_answer(0)) + self.assertFalse(self.stats.running) + + def test_command_show(self): + self.assertEqual(self.stats.command_show(owner='Foo', name=None), + isc.config.create_answer(1, "item name is not specified")) + self.assertEqual(self.stats.command_show(owner='Foo', name='_bar_'), + isc.config.create_answer( + 1, "specified module name and/or item name are incorrect")) + self.assertEqual(self.stats.command_show(owner='Foo', name='bar'), + isc.config.create_answer( + 1, "specified module name and/or item name are incorrect")) + orig_get_timestamp = stats.get_timestamp + orig_get_datetime = stats.get_datetime + 
stats.get_timestamp = lambda : 1308730448.965706 + stats.get_datetime = lambda : '2011-06-22T08:14:08Z' + self.assertEqual(stats.get_timestamp(), 1308730448.965706) + self.assertEqual(stats.get_datetime(), '2011-06-22T08:14:08Z') + self.assertEqual(self.stats.command_show(owner='Stats', name='report_time'), \ + isc.config.create_answer(0, '2011-06-22T08:14:08Z')) + self.assertEqual(self.stats.statistics_data['Stats']['timestamp'], 1308730448.965706) + self.assertEqual(self.stats.statistics_data['Stats']['boot_time'], '1970-01-01T00:00:00Z') + stats.get_timestamp = orig_get_timestamp + stats.get_datetime = orig_get_datetime + self.stats.mccs.specification = isc.config.module_spec.ModuleSpec( + { "module_name": self.stats.module_name, + "statistics": [] } ) + self.assertRaises( + stats.StatsError, self.stats.command_show, owner='Foo', name='bar') + + def test_command_showschema(self): + (rcode, value) = isc.config.ccsession.parse_answer( + self.stats.command_showschema()) + self.assertEqual(rcode, 0) + self.assertEqual(len(value), 3) + self.assertTrue('Stats' in value) + self.assertTrue('Boss' in value) + self.assertTrue('Auth' in value) + self.assertFalse('__Dummy__' in value) + schema = value['Stats'] + self.assertEqual(len(schema), 5) + for item in schema: + self.assertTrue(len(item) == 6 or len(item) == 7) + self.assertTrue('item_name' in item) + self.assertTrue('item_type' in item) + self.assertTrue('item_optional' in item) + self.assertTrue('item_default' in item) + self.assertTrue('item_title' in item) + self.assertTrue('item_description' in item) + if len(item) == 7: + self.assertTrue('item_format' in item) - def test_func_main(self): - # explicitly make it fail - self.session.close() - stats.main(session=self.session) + schema = value['Boss'] + self.assertEqual(len(schema), 1) + for item in schema: + self.assertTrue(len(item) == 7) + self.assertTrue('item_name' in item) + self.assertTrue('item_type' in item) + self.assertTrue('item_optional' in item) + 
self.assertTrue('item_default' in item) + self.assertTrue('item_title' in item) + self.assertTrue('item_description' in item) + self.assertTrue('item_format' in item) + + schema = value['Auth'] + self.assertEqual(len(schema), 2) + for item in schema: + self.assertTrue(len(item) == 6) + self.assertTrue('item_name' in item) + self.assertTrue('item_type' in item) + self.assertTrue('item_optional' in item) + self.assertTrue('item_default' in item) + self.assertTrue('item_title' in item) + self.assertTrue('item_description' in item) + + (rcode, value) = isc.config.ccsession.parse_answer( + self.stats.command_showschema(owner='Stats')) + self.assertEqual(rcode, 0) + self.assertFalse('Stats' in value) + self.assertFalse('Boss' in value) + self.assertFalse('Auth' in value) + for item in value: + self.assertTrue(len(item) == 6 or len(item) == 7) + self.assertTrue('item_name' in item) + self.assertTrue('item_type' in item) + self.assertTrue('item_optional' in item) + self.assertTrue('item_default' in item) + self.assertTrue('item_title' in item) + self.assertTrue('item_description' in item) + if len(item) == 7: + self.assertTrue('item_format' in item) + + (rcode, value) = isc.config.ccsession.parse_answer( + self.stats.command_showschema(owner='Stats', name='report_time')) + self.assertEqual(rcode, 0) + self.assertFalse('Stats' in value) + self.assertFalse('Boss' in value) + self.assertFalse('Auth' in value) + self.assertTrue(len(value) == 7) + self.assertTrue('item_name' in value) + self.assertTrue('item_type' in value) + self.assertTrue('item_optional' in value) + self.assertTrue('item_default' in value) + self.assertTrue('item_title' in value) + self.assertTrue('item_description' in value) + self.assertTrue('item_format' in value) + self.assertEqual(value['item_name'], 'report_time') + self.assertEqual(value['item_format'], 'date-time') + + self.assertEqual(self.stats.command_showschema(owner='Foo'), + isc.config.create_answer( + 1, "specified module name and/or item name 
are incorrect")) + self.assertEqual(self.stats.command_showschema(owner='Foo', name='bar'), + isc.config.create_answer( + 1, "specified module name and/or item name are incorrect")) + self.assertEqual(self.stats.command_showschema(owner='Stats', name='bar'), + isc.config.create_answer( + 1, "specified module name and/or item name are incorrect")) + self.assertEqual(self.stats.command_showschema(name='bar'), + isc.config.create_answer( + 1, "module name is not specified")) + + def test_command_set(self): + orig_get_datetime = stats.get_datetime + stats.get_datetime = lambda : '2011-06-22T06:12:38Z' + (rcode, value) = isc.config.ccsession.parse_answer( + self.stats.command_set(owner='Boss', + data={ 'boot_time' : '2011-06-22T13:15:04Z' })) + stats.get_datetime = orig_get_datetime + self.assertEqual(rcode, 0) + self.assertTrue(value is None) + self.assertEqual(self.stats.statistics_data['Boss']['boot_time'], + '2011-06-22T13:15:04Z') + self.assertEqual(self.stats.statistics_data['Stats']['last_update_time'], + '2011-06-22T06:12:38Z') + self.assertEqual(self.stats.command_set(owner='Stats', + data={ 'lname' : 'foo@bar' }), + isc.config.create_answer(0, None)) + self.stats.statistics_data['Stats'] = {} + self.stats.mccs.specification = isc.config.module_spec.ModuleSpec( + { "module_name": self.stats.module_name, + "statistics": [] } ) + self.assertEqual(self.stats.command_set(owner='Stats', + data={ 'lname' : '_foo_@_bar_' }), + isc.config.create_answer( + 1, + "specified module name and/or statistics data are incorrect:" + + " No statistics specification")) + self.stats.statistics_data['Stats'] = {} + self.stats.mccs.specification = isc.config.module_spec.ModuleSpec( + { "module_name": self.stats.module_name, + "statistics": [ + { + "item_name": "dummy", + "item_type": "string", + "item_optional": False, + "item_default": "", + "item_title": "Local Name", + "item_description": "brabra" + } ] } ) + self.assertRaises(stats.StatsError, + self.stats.command_set, 
owner='Stats', data={ 'dummy' : '_xxxx_yyyy_zzz_' }) def test_osenv(self): """ @@ -651,11 +515,8 @@ class TestStats2(unittest.TestCase): os.environ["B10_FROM_SOURCE"] = path imp.reload(stats) -def result_ok(*args): - if args: - return { 'result': list(args) } - else: - return { 'result': [ 0 ] } +def test_main(): + unittest.main() if __name__ == "__main__": - unittest.main() + test_main() diff --git a/src/bin/stats/tests/test_utils.py b/src/bin/stats/tests/test_utils.py index bd23182d2c..cfffc15a35 100644 --- a/src/bin/stats/tests/test_utils.py +++ b/src/bin/stats/tests/test_utils.py @@ -42,11 +42,10 @@ def send_shutdown(module_name): return send_command("shutdown", module_name) class ThreadingServerManager: - def __init__(self, server_class, verbose): + def __init__(self, server_class): self.server_class = server_class self.server_class_name = server_class.__name__ - self.verbose = verbose - self.server = self.server_class(self.verbose) + self.server = self.server_class() self.server._thread = threading.Thread( name=self.server_class_name, target=self.server.run) self.server._thread.daemon = True @@ -60,10 +59,9 @@ class ThreadingServerManager: self.server._thread.join(TIMEOUT_SEC) class MockMsgq: - def __init__(self, verbose): - self.verbose = verbose + def __init__(self): self._started = threading.Event() - self.msgq = msgq.MsgQ(None, verbose) + self.msgq = msgq.MsgQ(None) result = self.msgq.setup() if result: sys.exit("Error on Msgq startup: %s" % result) @@ -81,7 +79,7 @@ class MockMsgq: self.msgq.shutdown() class MockCfgmgr: - def __init__(self, verbose): + def __init__(self): self._started = threading.Event() self.cfgmgr = isc.config.cfgmgr.ConfigManager( os.environ['CONFIG_TESTDATA_PATH'], "b10-config.db") @@ -127,8 +125,7 @@ class MockBoss: """ _BASETIME = (2011, 6, 22, 8, 14, 8, 2, 173, 0) - def __init__(self, verbose): - self.verbose = verbose + def __init__(self): self._started = threading.Event() self.running = False self.spec_file = 
io.StringIO(self.spec_str) @@ -200,8 +197,7 @@ class MockAuth: } } """ - def __init__(self, verbose): - self.verbose = verbose + def __init__(self): self._started = threading.Event() self.running = False self.spec_file = io.StringIO(self.spec_str) @@ -239,9 +235,9 @@ class MockAuth: return isc.config.create_answer(1, "Unknown Command") class MyStats(stats.Stats): - def __init__(self, verbose): + def __init__(self): self._started = threading.Event() - stats.Stats.__init__(self, verbose) + stats.Stats.__init__(self) def run(self): self._started.set() @@ -251,9 +247,9 @@ class MyStats(stats.Stats): send_shutdown("Stats") class MyStatsHttpd(stats_httpd.StatsHttpd): - def __init__(self, verbose): + def __init__(self): self._started = threading.Event() - stats_httpd.StatsHttpd.__init__(self, verbose) + stats_httpd.StatsHttpd.__init__(self) def run(self): self._started.set() @@ -263,23 +259,22 @@ class MyStatsHttpd(stats_httpd.StatsHttpd): send_shutdown("StatsHttpd") class BaseModules: - def __init__(self, verbose): - self.verbose = verbose + def __init__(self): self.class_name = BaseModules.__name__ # Change value of BIND10_MSGQ_SOCKET_FILE in environment variables os.environ['BIND10_MSGQ_SOCKET_FILE'] = tempfile.mktemp(prefix='unix_socket.') # MockMsgq - self.msgq = ThreadingServerManager(MockMsgq, self.verbose) + self.msgq = ThreadingServerManager(MockMsgq) self.msgq.run() # MockCfgmgr - self.cfgmgr = ThreadingServerManager(MockCfgmgr, self.verbose) + self.cfgmgr = ThreadingServerManager(MockCfgmgr) self.cfgmgr.run() # MockBoss - self.boss = ThreadingServerManager(MockBoss, self.verbose) + self.boss = ThreadingServerManager(MockBoss) self.boss.run() # MockAuth - self.auth = ThreadingServerManager(MockAuth, self.verbose) + self.auth = ThreadingServerManager(MockAuth) self.auth.run() def shutdown(self): From 691328d91b4c4d15ace467ca47a3c987a9fb52b9 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 21:09:41 +0900 Subject: [PATCH 125/175] [trac930] add new 
entry for #928-#930 --- ChangeLog | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/ChangeLog b/ChangeLog index 56bf8e97d7..d4cd88de14 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,3 +1,13 @@ +xxx. [func] naokikambe + Add statistics category in each module spec file for management of + statistics data schemas by each module. Add get_statistics_spec into + cfgmgr and related code. Show statistics data and data schema by each + module via both bindctl and HTTP/XML interfaces. Change item name in + each statistics data. (Remove prefix "xxx." indicating the module + name.) Add new mock modules for unittests of stats and stats httpd + modules. + (Trac #928,#929,#930, git nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn) + 278. [doc] jelte Add logging configuration documentation to the guide. (Trac #1011, git TODO) From aa108cc824539a1d32a4aa2f46f9e58171074a9e Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 8 Jul 2011 21:22:34 +0900 Subject: [PATCH 126/175] [trac930] remove unneeded empty TODO comments --- doc/guide/bind10-guide.xml | 2 -- 1 file changed, 2 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 4883bb0a29..297400cca6 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1453,7 +1453,6 @@ then change those defaults with config set Resolver/forward_addresses[0]/address - This stats daemon provides commands to identify if it is running, show specified or all statistics data, show specified or all statistics data schema, and set specified statistics @@ -1461,7 +1460,6 @@ then change those defaults with config set Resolver/forward_addresses[0]/address For example, using bindctl: - > Stats show { From 4de3a5bdf367d87247cb9138f8929ab4798f014e Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Wed, 13 Jul 2011 20:25:54 +0900 Subject: [PATCH 127/175] [trac930] - increase the sleep time before the HTTP client connects to the server - delete 'test_log_message' because of the deletion of
original function --- src/bin/stats/tests/b10-stats-httpd_test.py | 19 +------------------ 1 file changed, 1 insertion(+), 18 deletions(-) diff --git a/src/bin/stats/tests/b10-stats-httpd_test.py b/src/bin/stats/tests/b10-stats-httpd_test.py index ae07aa9f27..fcf95ad36f 100644 --- a/src/bin/stats/tests/b10-stats-httpd_test.py +++ b/src/bin/stats/tests/b10-stats-httpd_test.py @@ -68,7 +68,7 @@ class TestHttpHandler(unittest.TestCase): self.assertTrue(type(self.stats_httpd.httpd) is list) self.assertEqual(len(self.stats_httpd.httpd), 0) statshttpd_server.run() - time.sleep(TIMEOUT_SEC*5) + time.sleep(TIMEOUT_SEC*8) client = http.client.HTTPConnection(address, port) client._http_vsn_str = 'HTTP/1.0\n' client.connect() @@ -249,23 +249,6 @@ class TestHttpHandler(unittest.TestCase): client.close() statshttpd_server.shutdown() - def test_log_message(self): - class MyHttpHandler(stats_httpd.HttpHandler): - def __init__(self): - class _Dummy_class_(): pass - self.address_string = lambda : 'dummyhost' - self.log_date_time_string = lambda : \ - 'DD/MM/YYYY HH:MI:SS' - self.server = _Dummy_class_() - self.server.log_writer = self.log_writer - def log_writer(self, line): - self.logged_line = line - self.handler = MyHttpHandler() - self.handler.log_message("%s %d", 'ABCDEFG', 12345) - self.assertEqual(self.handler.logged_line, - "[b10-stats-httpd] dummyhost - - " - + "[DD/MM/YYYY HH:MI:SS] ABCDEFG 12345\n") - class TestHttpServerError(unittest.TestCase): """Tests for HttpServerError exception""" def test_raises(self): From 28cad73dff9dae43a38ad7dafbee406c690fb77c Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Wed, 20 Jul 2011 10:00:29 +0900 Subject: [PATCH 128/175] [trac930] add statistics validation for bob --- src/bin/bind10/bind10_src.py.in | 18 +++++++++++------- src/bin/bind10/tests/bind10_test.py.in | 18 ++++++++++++++++++ 2 files changed, 29 insertions(+), 7 deletions(-) diff --git a/src/bin/bind10/bind10_src.py.in b/src/bin/bind10/bind10_src.py.in index 
5189802c27..f905892221 100755 --- a/src/bin/bind10/bind10_src.py.in +++ b/src/bin/bind10/bind10_src.py.in @@ -318,14 +318,18 @@ class BoB: answer = isc.config.ccsession.create_answer(0) elif command == "sendstats": # send statistics data to the stats daemon immediately - cmd = isc.config.ccsession.create_command( + statistics_data = { + 'boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', _BASETIME) + } + valid = self.ccs.get_module_spec().validate_statistics( + True, statistics_data) + if valid: + cmd = isc.config.ccsession.create_command( 'set', { "owner": "Boss", - "data": { - 'boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', _BASETIME) - }}) - seq = self.cc_session.group_sendmsg(cmd, 'Stats') - self.cc_session.group_recvmsg(True, seq) - answer = isc.config.ccsession.create_answer(0) + "data": statistics_data }) + seq = self.cc_session.group_sendmsg(cmd, 'Stats') + self.cc_session.group_recvmsg(True, seq) + answer = isc.config.ccsession.create_answer(0) elif command == "ping": answer = isc.config.ccsession.create_answer(0, "pong") elif command == "show_processes": diff --git a/src/bin/bind10/tests/bind10_test.py.in b/src/bin/bind10/tests/bind10_test.py.in index dc1d6603c4..af7b6f49ef 100644 --- a/src/bin/bind10/tests/bind10_test.py.in +++ b/src/bin/bind10/tests/bind10_test.py.in @@ -137,9 +137,27 @@ class TestBoB(unittest.TestCase): def group_sendmsg(self, msg, group): (self.msg, self.group) = (msg, group) def group_recvmsg(self, nonblock, seq): pass + class DummyModuleCCSession(): + module_spec = isc.config.module_spec.ModuleSpec({ + "module_name": "Boss", + "statistics": [ + { + "item_name": "boot_time", + "item_type": "string", + "item_optional": False, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Boot time", + "item_description": "A date time when bind10 process starts initially", + "item_format": "date-time" + } + ] + }) + def get_module_spec(self): + return self.module_spec bob = BoB() bob.verbose = True bob.cc_session = DummySession() + bob.ccs = 
DummyModuleCCSession() # a bad command self.assertEqual(bob.command_handler(-1, None), isc.config.ccsession.create_answer(1, "bad command")) From e95625332a20fb50afe43da2db0cab507efe8ebe Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 22 Jul 2011 18:28:40 +0900 Subject: [PATCH 129/175] [trac930] add new messages to the message files of Auth and Boss for when validation of statistics data to be sent to the statistics module fails. --- src/bin/auth/auth_messages.mes | 3 +++ src/bin/bind10/bind10_messages.mes | 4 ++++ 2 files changed, 7 insertions(+) diff --git a/src/bin/auth/auth_messages.mes b/src/bin/auth/auth_messages.mes index 9f04b76264..1ffa6871ea 100644 --- a/src/bin/auth/auth_messages.mes +++ b/src/bin/auth/auth_messages.mes @@ -257,4 +257,7 @@ request. The zone manager component has been informed of the request, but has returned an error response (which is included in the message). The NOTIFY request will not be honored. +% AUTH_INVALID_STATISTICS_DATA invalid specification of statistics data specified +An error was encountered when the authoritative server specified +statistics data which is invalid for the auth specification file. diff --git a/src/bin/bind10/bind10_messages.mes b/src/bin/bind10/bind10_messages.mes index 4bac069098..4debcdb3ec 100644 --- a/src/bin/bind10/bind10_messages.mes +++ b/src/bin/bind10/bind10_messages.mes @@ -198,3 +198,7 @@ the message channel. % BIND10_UNKNOWN_CHILD_PROCESS_ENDED unknown child pid %1 exited An unknown child process has exited. The PID is printed, but no further action will be taken by the boss process. + +% BIND10_INVALID_STATISTICS_DATA invalid specification of statistics data specified +An error was encountered when the boss module specified +statistics data which is invalid for the boss specification file.
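The patches above give the boss process and the authoritative server the same pattern: validate statistics data against the module's spec before sending it on the command channel, and log an error instead of sending when validation fails. As a rough illustration of that flow, here is a simplified, self-contained Python sketch — `validate_statistics`, `make_sendstats_answer`, and the type table are hypothetical stand-ins for illustration, not the isc.config API:

```python
# Simplified stand-in for spec-based statistics validation: each spec item
# names a statistics key and the Python type its value must have.
SPEC_TYPES = {"string": str, "integer": int}

def validate_statistics(spec_items, data):
    """Return True if every item in data matches the spec (name and type)."""
    by_name = {item["item_name"]: item for item in spec_items}
    for name, value in data.items():
        item = by_name.get(name)
        if item is None:
            return False  # unknown item name
        if not isinstance(value, SPEC_TYPES[item["item_type"]]):
            return False  # wrong value type
    return True

# A spec fragment shaped like the Boss statistics spec used in the tests.
BOSS_SPEC = [{
    "item_name": "boot_time",
    "item_type": "string",
    "item_optional": False,
    "item_default": "1970-01-01T00:00:00Z",
}]

def make_sendstats_answer(spec_items, statistics_data):
    """Mirror the sendstats flow: validate first, only send when valid."""
    if validate_statistics(spec_items, statistics_data):
        # a real implementation would group_sendmsg() the 'set' command here
        return (0, None)
    # on failure: log and answer with an error instead of sending
    return (1, "specified statistics data is invalid")
```

In the real `bind10_src.py.in` the success branch sends the `set` command to the Stats module with `group_sendmsg()`; the sketch only models the validate-then-answer decision.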
From df9a8f921f0d20bd70c519218335357297bffa7d Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 22 Jul 2011 18:32:22 +0900 Subject: [PATCH 130/175] [trac930] add the helper functions used for registering the statistics data validation function. --- src/bin/auth/auth_srv.cc | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/src/bin/auth/auth_srv.cc b/src/bin/auth/auth_srv.cc index 5a3144283a..c9dac88e99 100644 --- a/src/bin/auth/auth_srv.cc +++ b/src/bin/auth/auth_srv.cc @@ -125,6 +125,10 @@ public: /// The TSIG keyring const shared_ptr* keyring_; + + /// Bind the ModuleSpec object in config_session_ with + /// isc::config::ModuleSpec::validateStatistics. + void registerStatisticsValidator(); private: std::string db_file_; @@ -139,6 +143,9 @@ private: /// Increment query counter void incCounter(const int protocol); + + // validateStatistics + bool validateStatistics(isc::data::ConstElementPtr data) const; }; AuthSrvImpl::AuthSrvImpl(const bool use_cache, @@ -317,6 +324,7 @@ AuthSrv::setXfrinSession(AbstractSession* xfrin_session) { void AuthSrv::setConfigSession(ModuleCCSession* config_session) { impl_->config_session_ = config_session; + impl_->registerStatisticsValidator(); } void @@ -670,6 +678,22 @@ AuthSrvImpl::incCounter(const int protocol) { } } +void +AuthSrvImpl::registerStatisticsValidator() { + counters_.registerStatisticsValidator( + boost::bind(&AuthSrvImpl::validateStatistics, this, _1)); +} + +bool +AuthSrvImpl::validateStatistics(isc::data::ConstElementPtr data) const { + if (config_session_ == NULL) { + return (false); + } + return ( + config_session_->getModuleSpec().validateStatistics( + data, true)); +} + ConstElementPtr AuthSrvImpl::setDbFile(ConstElementPtr config) { ConstElementPtr answer = isc::config::createAnswer(); From a9a976d2a5871f1501018d697d3afd299ceec5da Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 22 Jul 2011 18:37:22 +0900 Subject: [PATCH 131/175] [trac930] - Add
implementation to validate statistics data -- When validation succeeds, it sends the data to the statistics module; when it fails, it doesn't send and logs a message. - Add the function to register the validation function into the class --- src/bin/auth/statistics.cc | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/src/bin/auth/statistics.cc b/src/bin/auth/statistics.cc index 444fb8b35b..e62719f7e2 100644 --- a/src/bin/auth/statistics.cc +++ b/src/bin/auth/statistics.cc @@ -37,11 +37,14 @@ public: void inc(const AuthCounters::CounterType type); bool submitStatistics() const; void setStatisticsSession(isc::cc::AbstractSession* statistics_session); + void registerStatisticsValidator + (AuthCounters::validator_type validator); // Currently for testing purpose only uint64_t getCounter(const AuthCounters::CounterType type) const; private: std::vector counters_; isc::cc::AbstractSession* statistics_session_; + AuthCounters::validator_type validator_; }; AuthCountersImpl::AuthCountersImpl() : @@ -78,6 +81,14 @@ AuthCountersImpl::submitStatistics() const { << "]}"; isc::data::ConstElementPtr statistics_element = isc::data::Element::fromJSON(statistics_string); + // validate the statistics data before sending + if (validator_) { + if (!validator_( + statistics_element->get("command")->get(1)->get("data"))) { + LOG_ERROR(auth_logger, AUTH_INVALID_STATISTICS_DATA); + return (false); + } + } try { // group_{send,recv}msg() can throw an exception when encountering // an error, and group_recvmsg() will throw an exception on timeout. 
@@ -106,6 +117,13 @@ AuthCountersImpl::setStatisticsSession statistics_session_ = statistics_session; } +void +AuthCountersImpl::registerStatisticsValidator + (AuthCounters::validator_type validator) +{ + validator_ = validator; +} + // Currently for testing purpose only uint64_t AuthCountersImpl::getCounter(const AuthCounters::CounterType type) const { @@ -140,3 +158,10 @@ uint64_t AuthCounters::getCounter(const AuthCounters::CounterType type) const { return (impl_->getCounter(type)); } + +void +AuthCounters::registerStatisticsValidator + (AuthCounters::validator_type validator) const +{ + return (impl_->registerStatisticsValidator(validator)); +} From d0d5a67123b8009e89e84515eee4f93b37ec8497 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 22 Jul 2011 18:41:34 +0900 Subject: [PATCH 132/175] [trac930] Add prototypes of validator_type and registerStatisticsValidator - validator_type -- a type of statistics validation function - registerStatisticsValidator -- the function to register the validation function --- src/bin/auth/statistics.h | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/src/bin/auth/statistics.h b/src/bin/auth/statistics.h index 5bf643656d..c930414c65 100644 --- a/src/bin/auth/statistics.h +++ b/src/bin/auth/statistics.h @@ -131,6 +131,26 @@ public: /// \return the value of the counter specified by \a type. /// uint64_t getCounter(const AuthCounters::CounterType type) const; + + /// \brief A type of validation function for the specification in + /// isc::config::ModuleSpec. + /// + /// This type might be useful not only for statistics + /// specification but also for config_data specification and for + /// commands. + /// + typedef boost::function + validator_type; + + /// \brief Register a function type of the statistics validation + /// function for AuthCounters. + /// + /// This method never throws an exception. + /// + /// \param validator A function type of the validation of + /// statistics specification. 
+ /// + void registerStatisticsValidator(AuthCounters::validator_type validator) const; }; #endif // __STATISTICS_H From ae8748f77a0261623216b1a11f9d979f555fe892 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 22 Jul 2011 18:43:26 +0900 Subject: [PATCH 133/175] [trac930] Add unittests to test submitStatistics with the validation of statistics data and add a mock ModuleSpec class --- src/bin/auth/tests/statistics_unittest.cc | 66 ++++++++++++++++++++++- 1 file changed, 65 insertions(+), 1 deletion(-) diff --git a/src/bin/auth/tests/statistics_unittest.cc b/src/bin/auth/tests/statistics_unittest.cc index cd2755b110..98e573b495 100644 --- a/src/bin/auth/tests/statistics_unittest.cc +++ b/src/bin/auth/tests/statistics_unittest.cc @@ -16,6 +16,8 @@ #include +#include + #include #include @@ -76,6 +78,13 @@ protected: } MockSession statistics_session_; AuthCounters counters; + // No need to inherit from the original class here. + class MockModuleSpec { + public: + bool validateStatistics(ConstElementPtr, const bool valid) const + { return (valid); } + }; + MockModuleSpec module_spec_; }; void @@ -181,7 +190,7 @@ TEST_F(AuthCountersTest, submitStatisticsWithException) { statistics_session_.setThrowSessionTimeout(false); } -TEST_F(AuthCountersTest, submitStatistics) { +TEST_F(AuthCountersTest, submitStatisticsWithoutValidator) { // Submit statistics data. // Validate if it submits correct data. @@ -211,4 +220,59 @@ TEST_F(AuthCountersTest, submitStatistics) { EXPECT_EQ(1, statistics_data->get("queries.tcp")->intValue()); } +TEST_F(AuthCountersTest, submitStatisticsWithValidator) { + + // a validator for the unittest + AuthCounters::validator_type validator; + ConstElementPtr el; + + // Submit statistics data with correct statistics validator. 
+ validator = boost::bind( + &AuthCountersTest::MockModuleSpec::validateStatistics, + &module_spec_, _1, true); + + EXPECT_TRUE(validator(el)); + + // register validator to AuthCounters + counters.registerStatisticsValidator(validator); + + // Counters should be initialized to 0. + EXPECT_EQ(0, counters.getCounter(AuthCounters::COUNTER_UDP_QUERY)); + EXPECT_EQ(0, counters.getCounter(AuthCounters::COUNTER_TCP_QUERY)); + + // UDP query counter is set to 2. + counters.inc(AuthCounters::COUNTER_UDP_QUERY); + counters.inc(AuthCounters::COUNTER_UDP_QUERY); + // TCP query counter is set to 1. + counters.inc(AuthCounters::COUNTER_TCP_QUERY); + + // checks the value returned by submitStatistics + EXPECT_TRUE(counters.submitStatistics()); + + // Destination is "Stats". + EXPECT_EQ("Stats", statistics_session_.msg_destination); + // Command is "set". + EXPECT_EQ("set", statistics_session_.sent_msg->get("command") + ->get(0)->stringValue()); + EXPECT_EQ("Auth", statistics_session_.sent_msg->get("command") + ->get(1)->get("owner")->stringValue()); + ConstElementPtr statistics_data = statistics_session_.sent_msg + ->get("command")->get(1) + ->get("data"); + // UDP query counter is 2 and TCP query counter is 1. + EXPECT_EQ(2, statistics_data->get("queries.udp")->intValue()); + EXPECT_EQ(1, statistics_data->get("queries.tcp")->intValue()); + + // Submit statistics data with incorrect statistics validator. 
+ validator = boost::bind( + &AuthCountersTest::MockModuleSpec::validateStatistics, + &module_spec_, _1, false); + + EXPECT_FALSE(validator(el)); + + counters.registerStatisticsValidator(validator); + + // checks the value returned by submitStatistics + EXPECT_FALSE(counters.submitStatistics()); +} } From a142fa6302e1e0ea2ad1c9faf59d6a70a53a6489 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 22 Jul 2011 18:45:19 +0900 Subject: [PATCH 134/175] [trac930] add logging for when the validation of statistics data fails --- src/bin/bind10/bind10_src.py.in | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/src/bin/bind10/bind10_src.py.in b/src/bin/bind10/bind10_src.py.in index f905892221..3deba6172b 100755 --- a/src/bin/bind10/bind10_src.py.in +++ b/src/bin/bind10/bind10_src.py.in @@ -330,6 +330,10 @@ class BoB: seq = self.cc_session.group_sendmsg(cmd, 'Stats') self.cc_session.group_recvmsg(True, seq) answer = isc.config.ccsession.create_answer(0) + else: + logger.fatal(BIND10_INVALID_STATISTICS_DATA) + answer = isc.config.ccsession.create_answer( + 1, "specified statistics data is invalid") elif command == "ping": answer = isc.config.ccsession.create_answer(0, "pong") elif command == "show_processes": From bcf37a11b08922d69d02fa2ea1b280b2fa2c21e0 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 22 Jul 2011 18:50:41 +0900 Subject: [PATCH 135/175] [trac930] update the tests because the query counter names described in the specfile have changed. --- tests/system/bindctl/tests.sh | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/tests/system/bindctl/tests.sh b/tests/system/bindctl/tests.sh index 6923c4167c..49ef0f17b0 100755 --- a/tests/system/bindctl/tests.sh +++ b/tests/system/bindctl/tests.sh @@ -24,6 +24,10 @@ SYSTEMTESTTOP=.. 
status=0 n=0 +# TODO: consider consistency with statistics definition in auth.spec +auth_queries_tcp="\" +auth_queries_udp="\" + echo "I:Checking b10-auth is working by default ($n)" $DIG +norec @10.53.0.1 -p 53210 ns.example.com. A >dig.out.$n || status=1 # perform a simple check on the output (digcomp would be too much for this) @@ -40,8 +44,8 @@ echo 'Stats show --csv-file-dir=$BINDCTL_CSV_DIR > bindctl.out.$n || status=1 # the server should have received 1 UDP and 1 TCP queries (TCP query was # sent from the server startup script) -grep "\"auth.queries.tcp\": 1," bindctl.out.$n > /dev/null || status=1 -grep "\"auth.queries.udp\": 1," bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_tcp".*\<1\>" bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_udp".*\<1\>" bindctl.out.$n > /dev/null || status=1 if [ $status != 0 ]; then echo "I:failed"; fi n=`expr $n + 1` @@ -73,8 +77,8 @@ echo 'Stats show ' | $RUN_BINDCTL \ --csv-file-dir=$BINDCTL_CSV_DIR > bindctl.out.$n || status=1 # The statistics counters should have been reset while stop/start. -grep "\"auth.queries.tcp\": 0," bindctl.out.$n > /dev/null || status=1 -grep "\"auth.queries.udp\": 1," bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_tcp".*\<0\>" bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_udp".*\<1\>" bindctl.out.$n > /dev/null || status=1 if [ $status != 0 ]; then echo "I:failed"; fi n=`expr $n + 1` @@ -97,8 +101,8 @@ echo 'Stats show ' | $RUN_BINDCTL \ --csv-file-dir=$BINDCTL_CSV_DIR > bindctl.out.$n || status=1 # The statistics counters shouldn't be reset due to hot-swapping datasource. 
-grep "\"auth.queries.tcp\": 0," bindctl.out.$n > /dev/null || status=1 -grep "\"auth.queries.udp\": 2," bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_tcp".*\<0\>" bindctl.out.$n > /dev/null || status=1 +grep $auth_queries_udp".*\<2\>" bindctl.out.$n > /dev/null || status=1 if [ $status != 0 ]; then echo "I:failed"; fi n=`expr $n + 1` From 7275c59de54593d3baca81345226dda2d3a19c30 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 22 Jul 2011 21:40:07 +0900 Subject: [PATCH 136/175] [trac930] fix conflicts with trac1021 --- src/bin/stats/tests/b10-stats-httpd_test.py | 89 ++++++++++++--------- 1 file changed, 51 insertions(+), 38 deletions(-) diff --git a/src/bin/stats/tests/b10-stats-httpd_test.py b/src/bin/stats/tests/b10-stats-httpd_test.py index fcf95ad36f..2cc78ddc32 100644 --- a/src/bin/stats/tests/b10-stats-httpd_test.py +++ b/src/bin/stats/tests/b10-stats-httpd_test.py @@ -548,10 +548,11 @@ class TestStatsHttpd(unittest.TestCase): def test_xml_handler(self): orig_get_stats_data = stats_httpd.StatsHttpd.get_stats_data - stats_httpd.StatsHttpd.get_stats_data = lambda x: {'foo':'bar'} + stats_httpd.StatsHttpd.get_stats_data = lambda x: \ + { 'Dummy' : { 'foo':'bar' } } xml_body1 = stats_httpd.StatsHttpd().open_template( stats_httpd.XML_TEMPLATE_LOCATION).substitute( - xml_string='bar', + xml_string='bar', xsd_namespace=stats_httpd.XSD_NAMESPACE, xsd_url_path=stats_httpd.XSD_URL_PATH, xsl_url_path=stats_httpd.XSL_URL_PATH) @@ -559,7 +560,8 @@ class TestStatsHttpd(unittest.TestCase): self.assertEqual(type(xml_body1), str) self.assertEqual(type(xml_body2), str) self.assertEqual(xml_body1, xml_body2) - stats_httpd.StatsHttpd.get_stats_data = lambda x: {'bar':'foo'} + stats_httpd.StatsHttpd.get_stats_data = lambda x: \ + { 'Dummy' : {'bar':'foo'} } xml_body2 = stats_httpd.StatsHttpd().xml_handler() self.assertNotEqual(xml_body1, xml_body2) stats_httpd.StatsHttpd.get_stats_data = orig_get_stats_data @@ -567,35 +569,41 @@ class 
TestStatsHttpd(unittest.TestCase): def test_xsd_handler(self): orig_get_stats_spec = stats_httpd.StatsHttpd.get_stats_spec stats_httpd.StatsHttpd.get_stats_spec = lambda x: \ - [{ - "item_name": "foo", - "item_type": "string", - "item_optional": False, - "item_default": "bar", - "item_description": "foo is bar", - "item_title": "Foo" - }] + { "Dummy" : + [{ + "item_name": "foo", + "item_type": "string", + "item_optional": False, + "item_default": "bar", + "item_description": "foo is bar", + "item_title": "Foo" + }] + } xsd_body1 = stats_httpd.StatsHttpd().open_template( stats_httpd.XSD_TEMPLATE_LOCATION).substitute( - xsd_string='' \ + xsd_string=\ + '' \ + '' \ + 'Foo' \ + 'foo is bar' \ - + '', + + '' \ + + '', xsd_namespace=stats_httpd.XSD_NAMESPACE) xsd_body2 = stats_httpd.StatsHttpd().xsd_handler() self.assertEqual(type(xsd_body1), str) self.assertEqual(type(xsd_body2), str) self.assertEqual(xsd_body1, xsd_body2) stats_httpd.StatsHttpd.get_stats_spec = lambda x: \ - [{ - "item_name": "bar", - "item_type": "string", - "item_optional": False, - "item_default": "foo", - "item_description": "bar is foo", - "item_title": "bar" - }] + { "Dummy" : + [{ + "item_name": "bar", + "item_type": "string", + "item_optional": False, + "item_default": "foo", + "item_description": "bar is foo", + "item_title": "bar" + }] + } xsd_body2 = stats_httpd.StatsHttpd().xsd_handler() self.assertNotEqual(xsd_body1, xsd_body2) stats_httpd.StatsHttpd.get_stats_spec = orig_get_stats_spec @@ -603,19 +611,22 @@ class TestStatsHttpd(unittest.TestCase): def test_xsl_handler(self): orig_get_stats_spec = stats_httpd.StatsHttpd.get_stats_spec stats_httpd.StatsHttpd.get_stats_spec = lambda x: \ - [{ - "item_name": "foo", - "item_type": "string", - "item_optional": False, - "item_default": "bar", - "item_description": "foo is bar", - "item_title": "Foo" - }] + { "Dummy" : + [{ + "item_name": "foo", + "item_type": "string", + "item_optional": False, + "item_default": "bar", + "item_description": "foo 
is bar", + "item_title": "Foo" + }] + } xsl_body1 = stats_httpd.StatsHttpd().open_template( stats_httpd.XSL_TEMPLATE_LOCATION).substitute( xsl_string='

' \ + + '' \ + '' \ - + '' \ + + '' \ + '', xsd_namespace=stats_httpd.XSD_NAMESPACE) xsl_body2 = stats_httpd.StatsHttpd().xsl_handler() @@ -623,14 +634,16 @@ class TestStatsHttpd(unittest.TestCase): self.assertEqual(type(xsl_body2), str) self.assertEqual(xsl_body1, xsl_body2) stats_httpd.StatsHttpd.get_stats_spec = lambda x: \ - [{ - "item_name": "bar", - "item_type": "string", - "item_optional": False, - "item_default": "foo", - "item_description": "bar is foo", - "item_title": "bar" - }] + { "Dummy" : + [{ + "item_name": "bar", + "item_type": "string", + "item_optional": False, + "item_default": "foo", + "item_description": "bar is foo", + "item_title": "bar" + }] + } xsl_body2 = stats_httpd.StatsHttpd().xsl_handler() self.assertNotEqual(xsl_body1, xsl_body2) stats_httpd.StatsHttpd.get_stats_spec = orig_get_stats_spec From 8a24b9066537caf373d0cfc11dca855eb6c3e4d9 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Wed, 27 Jul 2011 10:14:57 +0900 Subject: [PATCH 137/175] [trac930] modify parse_spec function returns empty dict if list-type is not specified in the argument --- src/bin/stats/stats.py.in | 1 + src/bin/stats/tests/b10-stats_test.py | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/src/bin/stats/stats.py.in b/src/bin/stats/stats.py.in index 3faa3059a0..1abe4491df 100644 --- a/src/bin/stats/stats.py.in +++ b/src/bin/stats/stats.py.in @@ -72,6 +72,7 @@ def parse_spec(spec): """ parse spec type data """ + if type(spec) is not list: return {} def _parse_spec(spec): item_type = spec['item_type'] if item_type == "integer": diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index b013c7a8bc..b0f87f4377 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -69,7 +69,7 @@ class TestUtilties(unittest.TestCase): 'test_list2' : [0,0,0], 'test_map2' : { 'A' : 0, 'B' : 0, 'C' : 0 }, 'test_none' : None }) - self.assertRaises(TypeError, stats.parse_spec, None) + 
self.assertEqual(stats.parse_spec(None), {}) self.assertRaises(KeyError, stats.parse_spec, [{'item_name':'Foo'}]) def test_get_timestamp(self): From 2c22d334a05ec1e77299a6c55252f1d1c33082af Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Wed, 27 Jul 2011 10:18:07 +0900 Subject: [PATCH 138/175] [trac930] add a test pattern in which the set command is sent with a non-existent item name --- src/bin/stats/tests/b10-stats_test.py | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index b0f87f4377..6ddd39b18a 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -483,6 +483,15 @@ class TestStats(unittest.TestCase): self.stats.mccs.specification = isc.config.module_spec.ModuleSpec( { "module_name": self.stats.module_name, "statistics": [] } ) + self.assertEqual(self.stats.command_set(owner='Stats', + data={ 'lname' : '_foo_@_bar_' }), + isc.config.create_answer( + 1, + "specified module name and/or statistics data are incorrect:" + + " unknown item lname")) + self.stats.statistics_data['Stats'] = {} + self.stats.mccs.specification = isc.config.module_spec.ModuleSpec( + { "module_name": self.stats.module_name } ) self.assertEqual(self.stats.command_set(owner='Stats', data={ 'lname' : '_foo_@_bar_' }), isc.config.create_answer( From e8a22472e58bfc7df4a661d665152fe4d70454a6 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Wed, 27 Jul 2011 16:42:54 +0900 Subject: [PATCH 139/175] [trac930] remove an unnecessary white space --- src/bin/stats/tests/test_utils.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/bin/stats/tests/test_utils.py b/src/bin/stats/tests/test_utils.py index cfffc15a35..b3e20b5f76 100644 --- a/src/bin/stats/tests/test_utils.py +++ b/src/bin/stats/tests/test_utils.py @@ -182,7 +182,7 @@ class MockAuth: "item_type": "integer", "item_optional": false, "item_default": 0, - "item_title": "Queries TCP ", 
"item_title": "Queries TCP", "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially" }, { From aaffb9c83c0fe59d9c7d590c5bea559ed8876269 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Wed, 27 Jul 2011 16:49:21 +0900 Subject: [PATCH 140/175] [trac930] - correct error messages so bindctl prints them together with the arguments. - modify the command_show function so it reports the module's statistics data even if name is not specified. - add/modify unittests to match the changed error messages --- src/bin/stats/stats.py.in | 17 ++++---- src/bin/stats/tests/b10-stats_test.py | 56 +++++++++++++++++++++------ 2 files changed, 53 insertions(+), 20 deletions(-) diff --git a/src/bin/stats/stats.py.in b/src/bin/stats/stats.py.in index 1abe4491df..84cdf9ce64 100644 --- a/src/bin/stats/stats.py.in +++ b/src/bin/stats/stats.py.in @@ -259,7 +259,7 @@ class Stats: self.statistics_data[owner].update(data) return except KeyError: - errors.append('unknown module name') + errors.append("unknown module name: " + str(owner)) return errors def command_status(self): @@ -289,8 +289,6 @@ class Stats: else: logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_SHOW_ALL_COMMAND) - if owner and not name: - return isc.config.create_answer(1, "item name is not specified") errors = self.update_statistics_data( self.module_name, timestamp=get_timestamp(), @@ -298,11 +296,12 @@ class Stats: ) if errors: raise StatsError("stats spec file is incorrect") ret = self.get_statistics_data(owner, name) - if ret: + if ret is not None: return isc.config.create_answer(0, ret) else: return isc.config.create_answer( - 1, "specified module name and/or item name are incorrect") + 1, "specified arguments are incorrect: " \ + + "owner: " + str(owner) + ", name: " + str(name)) def command_showschema(self, owner=None, name=None): """ @@ -335,7 +334,8 @@ class Stats: else: return isc.config.create_answer(0, schema) return isc.config.create_answer( - 1, 
"specified module name and/or item name are incorrect") + 1, "specified arguments are incorrect: " \ + + "owner: " + str(owner) + ", name: " + str(name)) def command_set(self, owner, data): """ @@ -344,9 +344,8 @@ class Stats: errors = self.update_statistics_data(owner, **data) if errors: return isc.config.create_answer( - 1, - "specified module name and/or statistics data are incorrect: " - + ", ".join(errors)) + 1, "errors while setting statistics data: " \ + + ", ".join(errors)) errors = self.update_statistics_data( self.module_name, last_update_time=get_datetime() ) if errors: diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index 6ddd39b18a..640b796fad 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -331,7 +331,7 @@ class TestStats(unittest.TestCase): self.assertEqual(self.stats.update_statistics_data(owner='Stats', lname=0.0), ['0.0 should be a string']) self.assertEqual(self.stats.update_statistics_data(owner='Dummy', foo='bar'), - ['unknown module name']) + ['unknown module name: Dummy']) def test_command_status(self): self.assertEqual(self.stats.command_status(), @@ -346,13 +346,20 @@ class TestStats(unittest.TestCase): def test_command_show(self): self.assertEqual(self.stats.command_show(owner='Foo', name=None), - isc.config.create_answer(1, "item name is not specified")) + isc.config.create_answer( + 1, "specified arguments are incorrect: owner: Foo, name: None")) self.assertEqual(self.stats.command_show(owner='Foo', name='_bar_'), isc.config.create_answer( - 1, "specified module name and/or item name are incorrect")) + 1, "specified arguments are incorrect: owner: Foo, name: _bar_")) self.assertEqual(self.stats.command_show(owner='Foo', name='bar'), isc.config.create_answer( - 1, "specified module name and/or item name are incorrect")) + 1, "specified arguments are incorrect: owner: Foo, name: bar")) + self.assertEqual(self.stats.command_show(owner='Auth'), + 
isc.config.create_answer( + 0, {'queries.tcp': 0, 'queries.udp': 0})) + self.assertEqual(self.stats.command_show(owner='Auth', name='queries.udp'), + isc.config.create_answer( + 0, 0)) orig_get_timestamp = stats.get_timestamp orig_get_datetime = stats.get_datetime stats.get_timestamp = lambda : 1308730448.965706 @@ -452,13 +459,42 @@ class TestStats(unittest.TestCase): self.assertEqual(self.stats.command_showschema(owner='Foo'), isc.config.create_answer( - 1, "specified module name and/or item name are incorrect")) + 1, "specified arguments are incorrect: owner: Foo, name: None")) self.assertEqual(self.stats.command_showschema(owner='Foo', name='bar'), isc.config.create_answer( - 1, "specified module name and/or item name are incorrect")) + 1, "specified arguments are incorrect: owner: Foo, name: bar")) + self.assertEqual(self.stats.command_showschema(owner='Auth'), + isc.config.create_answer( + 0, [{ + "item_default": 0, + "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially", + "item_name": "queries.tcp", + "item_optional": False, + "item_title": "Queries TCP", + "item_type": "integer" + }, + { + "item_default": 0, + "item_description": "A number of total query counts which all auth servers receive over UDP since they started initially", + "item_name": "queries.udp", + "item_optional": False, + "item_title": "Queries UDP", + "item_type": "integer" + }])) + self.assertEqual(self.stats.command_showschema(owner='Auth', name='queries.tcp'), + isc.config.create_answer( + 0, { + "item_default": 0, + "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially", + "item_name": "queries.tcp", + "item_optional": False, + "item_title": "Queries TCP", + "item_type": "integer" + })) + self.assertEqual(self.stats.command_showschema(owner='Stats', name='bar'), isc.config.create_answer( - 1, "specified module name and/or item name are incorrect")) + 1, 
                "specified arguments are incorrect: owner: Stats, name: bar"))
         self.assertEqual(self.stats.command_showschema(name='bar'),
                          isc.config.create_answer(
                 1, "module name is not specified"))
@@ -487,8 +523,7 @@ class TestStats(unittest.TestCase):
                                                 data={ 'lname' : '_foo_@_bar_' }),
                          isc.config.create_answer(
                 1,
-                "specified module name and/or statistics data are incorrect:"
-                + " unknown item lname"))
+                "errors while setting statistics data: unknown item lname"))
         self.stats.statistics_data['Stats'] = {}
         self.stats.mccs.specification = isc.config.module_spec.ModuleSpec(
             { "module_name": self.stats.module_name } )
@@ -496,8 +531,7 @@ class TestStats(unittest.TestCase):
                                                 data={ 'lname' : '_foo_@_bar_' }),
                          isc.config.create_answer(
                 1,
-                "specified module name and/or statistics data are incorrect:"
-                + " No statistics specification"))
+                "errors while setting statistics data: No statistics specification"))
         self.stats.statistics_data['Stats'] = {}
         self.stats.mccs.specification = isc.config.module_spec.ModuleSpec(
             { "module_name": self.stats.module_name,

From 4c2732cbf0bb7384ed61ab3604855f143a0c6c5d Mon Sep 17 00:00:00 2001
From: Naoki Kambe
Date: Wed, 27 Jul 2011 20:45:18 +0900
Subject: [PATCH 141/175] [trac930] modify the update_modules function

The spec file of a module that has no statistics data contains no
statistics category.
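The change this commit describes, dropping the empty `"statistics": []` default from the module spec dict, can be sketched as follows. This is an illustrative sketch only: `build_module_spec` is an invented helper name; the real code constructs the dict inline in `Stats.update_modules()`.

```python
# Illustrative sketch (invented helper name): attach a "statistics" entry to a
# module's spec dict only when the module actually reported a statistics list,
# so modules without statistics keep a bare spec.
def build_module_spec(module_name, statistics):
    spec = {"module_name": module_name}
    if statistics and type(statistics) is list:
        spec["statistics"] = statistics
    return spec
```

With this shape, a module that defines no statistics yields just `{"module_name": ...}`, which is exactly what removing the empty `"statistics": []` default achieves.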
--- src/bin/stats/stats.py.in | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/src/bin/stats/stats.py.in b/src/bin/stats/stats.py.in index 84cdf9ce64..0d43570cb8 100644 --- a/src/bin/stats/stats.py.in +++ b/src/bin/stats/stats.py.in @@ -208,8 +208,7 @@ class Stats: (rcode, value) = isc.config.ccsession.parse_answer(answer) if rcode == 0: for mod in value: - spec = { "module_name" : mod, - "statistics" : [] } + spec = { "module_name" : mod } if value[mod] and type(value[mod]) is list: spec["statistics"] = value[mod] modules[mod] = isc.config.module_spec.ModuleSpec(spec) From d86a9dceaddf5a2cee44170e6e677f492df5e0ea Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Thu, 28 Jul 2011 22:07:15 +0900 Subject: [PATCH 142/175] [trac930] modify logging add loggings and new messages for logging remove unused messages from the message file add test logging names into unittest scripts --- src/bin/stats/stats.py.in | 22 ++++++++++++--------- src/bin/stats/stats_httpd.py.in | 3 +-- src/bin/stats/stats_messages.mes | 21 ++++++++++---------- src/bin/stats/tests/b10-stats-httpd_test.py | 3 +++ src/bin/stats/tests/b10-stats_test.py | 3 +++ 5 files changed, 31 insertions(+), 21 deletions(-) diff --git a/src/bin/stats/stats.py.in b/src/bin/stats/stats.py.in index 0d43570cb8..ca206bf83a 100644 --- a/src/bin/stats/stats.py.in +++ b/src/bin/stats/stats.py.in @@ -148,9 +148,7 @@ class Stats: Start stats module """ self.running = True - # TODO: should be added into new logging interface - # if self.verbose: - # sys.stdout.write("[b10-stats] starting\n") + logger.info(STATS_STARTING) # request Bob to send statistics data logger.debug(DBG_STATS_MESSAGING, STATS_SEND_REQUEST_BOSS) @@ -281,7 +279,7 @@ class Stats: """ handle show command """ - if (owner or name): + if owner or name: logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_SHOW_NAME_COMMAND, str(owner)+", "+str(name)) @@ -306,9 +304,13 @@ class Stats: """ handle show command """ - # TODO: should be added into new 
logging interface - # if self.verbose: - # sys.stdout.write("[b10-stats] 'showschema' command received\n") + if owner or name: + logger.debug(DBG_STATS_MESSAGING, + STATS_RECEIVED_SHOWSCHEMA_NAME_COMMAND, + str(owner)+", "+str(name)) + else: + logger.debug(DBG_STATS_MESSAGING, + STATS_RECEIVED_SHOWSCHEMA_ALL_COMMAND) self.update_modules() schema = {} schema_byname = {} @@ -364,10 +366,12 @@ if __name__ == "__main__": stats.start() except OptionValueError as ove: logger.fatal(STATS_BAD_OPTION_VALUE, ove) + sys.exit(1) except SessionError as se: logger.fatal(STATS_CC_SESSION_ERROR, se) - # TODO: should be added into new logging interface + sys.exit(1) except StatsError as se: - sys.exit("[b10-stats] %s" % se) + logger.fatal(STATS_START_ERROR, se) + sys.exit(1) except KeyboardInterrupt as kie: logger.info(STATS_STOPPED_BY_KEYBOARD) diff --git a/src/bin/stats/stats_httpd.py.in b/src/bin/stats/stats_httpd.py.in index cc9c6045c2..32ec6b78fe 100755 --- a/src/bin/stats/stats_httpd.py.in +++ b/src/bin/stats/stats_httpd.py.in @@ -301,8 +301,7 @@ class StatsHttpd: # restore old config self.load_config(old_config) self.open_httpd() - return isc.config.ccsession.create_answer( - 1, "[b10-stats-httpd] %s" % err) + return isc.config.ccsession.create_answer(1, str(err)) else: return isc.config.ccsession.create_answer(0) diff --git a/src/bin/stats/stats_messages.mes b/src/bin/stats/stats_messages.mes index 9ad07cf493..cfffb3adb8 100644 --- a/src/bin/stats/stats_messages.mes +++ b/src/bin/stats/stats_messages.mes @@ -28,16 +28,6 @@ control bus. A likely problem is that the message bus daemon This debug message is printed when the stats module has received a configuration update from the configuration manager. -% STATS_RECEIVED_REMOVE_COMMAND received command to remove %1 -A remove command for the given name was sent to the stats module, and -the given statistics value will now be removed. 
It will not appear in -statistics reports until it appears in a statistics update from a -module again. - -% STATS_RECEIVED_RESET_COMMAND received command to reset all statistics -The stats module received a command to clear all collected statistics. -The data is cleared until it receives an update from the modules again. - % STATS_RECEIVED_SHOW_ALL_COMMAND received command to show all statistics The stats module received a command to show all statistics that it has collected. @@ -72,4 +62,15 @@ installation problem, where the specification file stats.spec is from a different version of BIND 10 than the stats module itself. Please check your installation. +% STATS_STARTING starting +The stats module will be now starting. +% STATS_RECEIVED_SHOWSCHEMA_ALL_COMMAND received command to show all statistics schema +The stats module received a command to show all statistics schemas of all modules. + +% STATS_RECEIVED_SHOWSCHEMA_NAME_COMMAND received command to show statistics schema for %1 +The stats module received a command to show the specified statistics schema of the specified module. + +% STATS_START_ERROR stats module error: %1 +An internal error occurred while starting the stats module. The stats +module will be now shutting down. 
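The start-up error handling added in this patch (log a fatal message, then exit non-zero instead of calling sys.exit with a formatted string) can be sketched with the standard library logger. Note the names below are stand-ins: the real module uses BIND 10's own logging wrapper and message IDs such as STATS_START_ERROR.

```python
import logging
import sys

logger = logging.getLogger("b10-stats")

class StatsError(Exception):
    """Stand-in for the module's fatal error class."""

def run_or_exit(start):
    # Mirror the pattern in the patch: log the fatal error, then exit with
    # status 1 so the parent process can tell the module failed to start.
    try:
        start()
    except StatsError as se:
        logger.fatal("stats module error: %s", se)
        sys.exit(1)
```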
diff --git a/src/bin/stats/tests/b10-stats-httpd_test.py b/src/bin/stats/tests/b10-stats-httpd_test.py
index 2cc78ddc32..89dea29501 100644
--- a/src/bin/stats/tests/b10-stats-httpd_test.py
+++ b/src/bin/stats/tests/b10-stats-httpd_test.py
@@ -30,6 +30,9 @@
 import stats_httpd
 import stats
 from test_utils import BaseModules, ThreadingServerManager, MyStats, MyStatsHttpd, TIMEOUT_SEC
+# set test name for logger
+isc.log.init("b10-stats-httpd_test")
+
 DUMMY_DATA = {
     'Boss' : {
         "boot_time": "2011-03-04T11:59:06Z"
diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py
index 640b796fad..4c6bde0f5d 100644
--- a/src/bin/stats/tests/b10-stats_test.py
+++ b/src/bin/stats/tests/b10-stats_test.py
@@ -24,6 +24,9 @@
 import stats
 import isc.cc.session
 from test_utils import BaseModules, ThreadingServerManager, MyStats, send_command, TIMEOUT_SEC
+# set test name for logger
+isc.log.init("b10-stats_test")
+
 class TestUtilties(unittest.TestCase):
     items = [
         { 'item_name': 'test_int1', 'item_type': 'integer', 'item_default': 12345 },

From e906efc3747f052128eef50bed0107a0d53546c8 Mon Sep 17 00:00:00 2001
From: Naoki Kambe
Date: Fri, 29 Jul 2011 22:11:38 +0900
Subject: [PATCH 143/175] [trac930] remove an unnecessary x bit from stats_httpd.py.in

---
 src/bin/stats/stats_httpd.py.in | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 mode change 100755 => 100644 src/bin/stats/stats_httpd.py.in

diff --git a/src/bin/stats/stats_httpd.py.in b/src/bin/stats/stats_httpd.py.in
old mode 100755
new mode 100644

From db0371fc9e5c7a85ab524ab7bc0b8169b9ba0486 Mon Sep 17 00:00:00 2001
From: Naoki Kambe
Date: Mon, 1 Aug 2011 18:21:23 +0900
Subject: [PATCH 144/175] [trac930] rename the parse_spec function

- rename 'parse_spec' to 'get_spec_defaults', reflecting what the
  function actually does
- rewrite the description of the function in its docstring
- fix the unittests for the stats module that depend on the function
  name

---
 src/bin/stats/stats.py.in | 18
++++++++++-------- src/bin/stats/tests/b10-stats_test.py | 12 ++++++------ 2 files changed, 16 insertions(+), 14 deletions(-) diff --git a/src/bin/stats/stats.py.in b/src/bin/stats/stats.py.in index ca206bf83a..aab98614ce 100644 --- a/src/bin/stats/stats.py.in +++ b/src/bin/stats/stats.py.in @@ -68,12 +68,14 @@ def get_datetime(gmt=None): if not gmt: gmt = gmtime() return strftime("%Y-%m-%dT%H:%M:%SZ", gmt) -def parse_spec(spec): +def get_spec_defaults(spec): """ - parse spec type data + extracts the default values of the items from spec specified in + arg, and returns the dict-type variable which is a set of the item + names and the default values """ if type(spec) is not list: return {} - def _parse_spec(spec): + def _get_spec_defaults(spec): item_type = spec['item_type'] if item_type == "integer": return int(spec.get('item_default', 0)) @@ -86,14 +88,14 @@ def parse_spec(spec): elif item_type == "list": return spec.get( "item_default", - [ _parse_spec(s) for s in spec["list_item_spec"] ]) + [ _get_spec_defaults(s) for s in spec["list_item_spec"] ]) elif item_type == "map": return spec.get( "item_default", - dict([ (s["item_name"], _parse_spec(s)) for s in spec["map_item_spec"] ]) ) + dict([ (s["item_name"], _get_spec_defaults(s)) for s in spec["map_item_spec"] ]) ) else: return spec.get("item_default", None) - return dict([ (s['item_name'], _parse_spec(s)) for s in spec ]) + return dict([ (s['item_name'], _get_spec_defaults(s)) for s in spec ]) class Callback(): """ @@ -137,7 +139,7 @@ class Stats: name = "command_" + cmd["command_name"] try: callback = getattr(self, name) - kwargs = parse_spec(cmd["command_args"]) + kwargs = get_spec_defaults(cmd["command_args"]) self.callbacks[name] = Callback(command=callback, kwargs=kwargs) except AttributeError: raise StatsError(STATS_UNKNOWN_COMMAND_IN_SPEC, cmd["command_name"]) @@ -240,7 +242,7 @@ class Stats: self.update_modules() statistics_data = {} for (name, module) in self.modules.items(): - value = 
parse_spec(module.get_statistics_spec()) + value = get_spec_defaults(module.get_statistics_spec()) if module.validate_statistics(True, value): statistics_data[name] = value for (name, value) in self.statistics_data.items(): diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index 4c6bde0f5d..011c95d9cb 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -56,9 +56,9 @@ class TestUtilties(unittest.TestCase): { 'item_name': 'test_none', 'item_type': 'none' } ] - def test_parse_spec(self): + def test_get_spec_defaults(self): self.assertEqual( - stats.parse_spec(self.items), { + stats.get_spec_defaults(self.items), { 'test_int1' : 12345 , 'test_real1' : 12345.6789 , 'test_bool1' : True , @@ -72,8 +72,8 @@ class TestUtilties(unittest.TestCase): 'test_list2' : [0,0,0], 'test_map2' : { 'A' : 0, 'B' : 0, 'C' : 0 }, 'test_none' : None }) - self.assertEqual(stats.parse_spec(None), {}) - self.assertRaises(KeyError, stats.parse_spec, [{'item_name':'Foo'}]) + self.assertEqual(stats.get_spec_defaults(None), {}) + self.assertRaises(KeyError, stats.get_spec_defaults, [{'item_name':'Foo'}]) def test_get_timestamp(self): self.assertEqual(stats.get_timestamp(), 1308730448.965706) @@ -280,7 +280,7 @@ class TestStats(unittest.TestCase): self.assertTrue('Stats' in self.stats.modules) self.assertTrue('Boss' in self.stats.modules) self.assertFalse('Dummy' in self.stats.modules) - my_statistics_data = stats.parse_spec(self.stats.modules['Stats'].get_statistics_spec()) + my_statistics_data = stats.get_spec_defaults(self.stats.modules['Stats'].get_statistics_spec()) self.assertTrue('report_time' in my_statistics_data) self.assertTrue('boot_time' in my_statistics_data) self.assertTrue('last_update_time' in my_statistics_data) @@ -291,7 +291,7 @@ class TestStats(unittest.TestCase): self.assertEqual(my_statistics_data['last_update_time'], "1970-01-01T00:00:00Z") self.assertEqual(my_statistics_data['timestamp'], 
0.0) self.assertEqual(my_statistics_data['lname'], "") - my_statistics_data = stats.parse_spec(self.stats.modules['Boss'].get_statistics_spec()) + my_statistics_data = stats.get_spec_defaults(self.stats.modules['Boss'].get_statistics_spec()) self.assertTrue('boot_time' in my_statistics_data) self.assertEqual(my_statistics_data['boot_time'], "1970-01-01T00:00:00Z") From c6948a6df9aeedd3753bc4c5e3a553088cd98f63 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Mon, 1 Aug 2011 18:38:35 +0900 Subject: [PATCH 145/175] [trac930] raise StatsError including errors in the stats spec file --- src/bin/stats/stats.py.in | 10 +++++++--- src/bin/stats/tests/b10-stats_test.py | 2 +- 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/src/bin/stats/stats.py.in b/src/bin/stats/stats.py.in index aab98614ce..bea70168e2 100644 --- a/src/bin/stats/stats.py.in +++ b/src/bin/stats/stats.py.in @@ -165,7 +165,8 @@ class Stats: boot_time=get_datetime(_BASETIME) ) if errors: - raise StatsError("stats spec file is incorrect") + raise StatsError("stats spec file is incorrect: " + + ", ".join(errors)) while self.running: self.mccs.check_command(False) @@ -293,7 +294,9 @@ class Stats: timestamp=get_timestamp(), report_time=get_datetime() ) - if errors: raise StatsError("stats spec file is incorrect") + if errors: + raise StatsError("stats spec file is incorrect: " + + ", ".join(errors)) ret = self.get_statistics_data(owner, name) if ret is not None: return isc.config.create_answer(0, ret) @@ -352,7 +355,8 @@ class Stats: errors = self.update_statistics_data( self.module_name, last_update_time=get_datetime() ) if errors: - raise StatsError("stats spec file is incorrect") + raise StatsError("stats spec file is incorrect: " + + ", ".join(errors)) return isc.config.create_answer(0) if __name__ == "__main__": diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index 011c95d9cb..7eb25559d8 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ 
b/src/bin/stats/tests/b10-stats_test.py
@@ -191,7 +191,7 @@ class TestStats(unittest.TestCase):

     def test_start_with_err(self):
         statsd = stats.Stats()
-        statsd.update_statistics_data = lambda x,**y: [1]
+        statsd.update_statistics_data = lambda x,**y: ['an error']
         self.assertRaises(stats.StatsError, statsd.start)

     def test_config_handler(self):

From 1d1a87939a010bd16ed23cd817261e9a655bf98f Mon Sep 17 00:00:00 2001
From: Naoki Kambe
Date: Tue, 2 Aug 2011 19:57:58 +0900
Subject: [PATCH 146/175] [trac930] remove trailing whitespace

---
 src/bin/stats/b10-stats-httpd.xml           | 2 +-
 src/bin/stats/stats_httpd.py.in             | 2 +-
 src/bin/stats/tests/Makefile.am             | 2 +-
 src/bin/stats/tests/b10-stats-httpd_test.py | 10 +++++-----
 src/bin/stats/tests/b10-stats_test.py       | 16 ++++++++--------
 src/bin/stats/tests/test_utils.py           | 8 ++++----
 src/bin/tests/Makefile.am                   | 2 +-
 7 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/src/bin/stats/b10-stats-httpd.xml b/src/bin/stats/b10-stats-httpd.xml
index 3636d9d543..c8df9b8a6e 100644
--- a/src/bin/stats/b10-stats-httpd.xml
+++ b/src/bin/stats/b10-stats-httpd.xml
@@ -132,7 +132,7 @@ CONFIGURATION AND COMMANDS
-      The configurable setting in
+      The configurable setting in
TESTS check-local: if ENABLE_PYTHON_COVERAGE - touch $(abs_top_srcdir)/.coverage + touch $(abs_top_srcdir)/.coverage rm -f .coverage ${LN_S} $(abs_top_srcdir)/.coverage .coverage endif diff --git a/src/bin/stats/tests/b10-stats-httpd_test.py b/src/bin/stats/tests/b10-stats-httpd_test.py index 89dea29501..5d22f72441 100644 --- a/src/bin/stats/tests/b10-stats-httpd_test.py +++ b/src/bin/stats/tests/b10-stats-httpd_test.py @@ -30,7 +30,7 @@ import stats_httpd import stats from test_utils import BaseModules, ThreadingServerManager, MyStats, MyStatsHttpd, TIMEOUT_SEC -# set test name for logger +# set test name for logger isc.log.init("b10-stats-httpd_test") DUMMY_DATA = { @@ -364,7 +364,7 @@ class TestStatsHttpd(unittest.TestCase): for ht in self.stats_httpd.httpd: self.assertTrue(isinstance(ht.socket, socket.socket)) self.stats_httpd.close_httpd() - + # dual stack (address is ipv4) if self.ipv6_enabled: self.stats_httpd.http_addrs = [ ('127.0.0.1', 8000) ] @@ -380,20 +380,20 @@ class TestStatsHttpd(unittest.TestCase): for ht in self.stats_httpd.httpd: self.assertTrue(isinstance(ht.socket, socket.socket)) self.stats_httpd.close_httpd() - + # only-ipv4 single stack (force set ipv6 ) if not self.ipv6_enabled: self.stats_httpd.http_addrs = [ ('::1', 8000) ] self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) - + # hostname self.stats_httpd.http_addrs = [ ('localhost', 8000) ] self.stats_httpd.open_httpd() for ht in self.stats_httpd.httpd: self.assertTrue(isinstance(ht.socket, socket.socket)) self.stats_httpd.close_httpd() - + self.stats_httpd.http_addrs = [ ('my.host.domain', 8000) ] self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) self.assertEqual(type(self.stats_httpd.httpd), list) diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index 7eb25559d8..cd53a57821 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -24,7 +24,7 @@ import 
stats import isc.cc.session from test_utils import BaseModules, ThreadingServerManager, MyStats, send_command, TIMEOUT_SEC -# set test name for logger +# set test name for logger isc.log.init("b10-stats_test") class TestUtilties(unittest.TestCase): @@ -205,19 +205,19 @@ class TestStats(unittest.TestCase): self.base.boss.server._started.wait() self.assertEqual( send_command( - 'show', 'Stats', + 'show', 'Stats', params={ 'owner' : 'Boss', 'name' : 'boot_time' }), (0, '2011-06-22T08:14:08Z')) self.assertEqual( send_command( - 'set', 'Stats', + 'set', 'Stats', params={ 'owner' : 'Boss', 'data' : { 'boot_time' : '2012-06-22T18:24:08Z' } }), (0, None)) self.assertEqual( send_command( - 'show', 'Stats', + 'show', 'Stats', params={ 'owner' : 'Boss', 'name' : 'boot_time' }), (0, '2012-06-22T18:24:08Z')) @@ -267,7 +267,7 @@ class TestStats(unittest.TestCase): self.assertTrue('item_description' in item) if len(item) == 7: self.assertTrue('item_format' in item) - + self.assertEqual( send_command('__UNKNOWN__', 'Stats'), (1, "Unknown command: '__UNKNOWN__'")) @@ -340,13 +340,13 @@ class TestStats(unittest.TestCase): self.assertEqual(self.stats.command_status(), isc.config.create_answer( 0, "Stats is up. 
(PID " + str(os.getpid()) + ")")) - + def test_command_shutdown(self): self.stats.running = True self.assertEqual(self.stats.command_shutdown(), isc.config.create_answer(0)) self.assertFalse(self.stats.running) - + def test_command_show(self): self.assertEqual(self.stats.command_show(owner='Foo', name=None), isc.config.create_answer( @@ -380,7 +380,7 @@ class TestStats(unittest.TestCase): "statistics": [] } ) self.assertRaises( stats.StatsError, self.stats.command_show, owner='Foo', name='bar') - + def test_command_showchema(self): (rcode, value) = isc.config.ccsession.parse_answer( self.stats.command_showschema()) diff --git a/src/bin/stats/tests/test_utils.py b/src/bin/stats/tests/test_utils.py index b3e20b5f76..a793dc69a6 100644 --- a/src/bin/stats/tests/test_utils.py +++ b/src/bin/stats/tests/test_utils.py @@ -10,7 +10,7 @@ import threading import tempfile import msgq -import isc.config.cfgmgr +import isc.config.cfgmgr import stats import stats_httpd @@ -49,7 +49,7 @@ class ThreadingServerManager: self.server._thread = threading.Thread( name=self.server_class_name, target=self.server.run) self.server._thread.daemon = True - + def run(self): self.server._thread.start() self.server._started.wait() @@ -94,7 +94,7 @@ class MockCfgmgr: def shutdown(self): self.cfgmgr.running = False - + class MockBoss: spec_str = """\ { @@ -157,7 +157,7 @@ class MockBoss: params = { "owner": "Boss", "data": { 'boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', self._BASETIME) - } + } } return send_command("set", "Stats", params=params, session=self.cc_session) return isc.config.create_answer(1, "Unknown Command") diff --git a/src/bin/tests/Makefile.am b/src/bin/tests/Makefile.am index 56ff68b0c7..0dc5021302 100644 --- a/src/bin/tests/Makefile.am +++ b/src/bin/tests/Makefile.am @@ -14,7 +14,7 @@ endif # test using command-line arguments, so use check-local target instead of TESTS check-local: if ENABLE_PYTHON_COVERAGE - touch $(abs_top_srcdir)/.coverage + touch 
$(abs_top_srcdir)/.coverage rm -f .coverage ${LN_S} $(abs_top_srcdir)/.coverage .coverage endif From e18a678b62d03729f065c40650d7183e2f260b22 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Tue, 2 Aug 2011 20:17:28 +0900 Subject: [PATCH 147/175] [trac930] modify b10-stats_test.py - set the constant variables in the setUp method in the TestUtilties class, and compare values returned from the functions with these constants in testing methods. [trac930] remove the tearDown method which has no test case in the TestCallback class --- src/bin/stats/tests/b10-stats_test.py | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index cd53a57821..0c2fa3c2f3 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -56,6 +56,13 @@ class TestUtilties(unittest.TestCase): { 'item_name': 'test_none', 'item_type': 'none' } ] + def setUp(self): + self.const_timestamp = 1308730448.965706 + self.const_timetuple = (2011, 6, 22, 8, 14, 8, 2, 173, 0) + self.const_datetime = '2011-06-22T08:14:08Z' + stats.time = lambda : self.const_timestamp + stats.gmtime = lambda : self.const_timetuple + def test_get_spec_defaults(self): self.assertEqual( stats.get_spec_defaults(self.items), { @@ -76,14 +83,12 @@ class TestUtilties(unittest.TestCase): self.assertRaises(KeyError, stats.get_spec_defaults, [{'item_name':'Foo'}]) def test_get_timestamp(self): - self.assertEqual(stats.get_timestamp(), 1308730448.965706) + self.assertEqual(stats.get_timestamp(), self.const_timestamp) def test_get_datetime(self): - stats.time = lambda : 1308730448.965706 - stats.gmtime = lambda : (2011, 6, 22, 8, 14, 8, 2, 173, 0) - self.assertEqual(stats.get_datetime(), '2011-06-22T08:14:08Z') + self.assertEqual(stats.get_datetime(), self.const_datetime) self.assertNotEqual(stats.get_datetime( - (2011, 6, 22, 8, 23, 40, 2, 173, 0)), '2011-06-22T08:14:08Z') + (2011, 6, 22, 8, 
23, 40, 2, 173, 0)), self.const_datetime) class TestCallback(unittest.TestCase): def setUp(self): @@ -108,9 +113,6 @@ class TestCallback(unittest.TestCase): args=self.dummy_args ) - def tearDown(self): - pass - def test_init(self): self.assertEqual((self.cback1.command, self.cback1.args, self.cback1.kwargs), (self.dummy_func, self.dummy_args, self.dummy_kwargs)) From 7a31e95e63013a298b449573cc5336bcd64a0419 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Tue, 2 Aug 2011 21:44:07 +0900 Subject: [PATCH 148/175] [trac930] modify stats.py - add more documentations into update_modules, get_statistics_data and update_statistics_data methods - modify two methods: "update_modules" and "get_statistics_data" methods raise StatsError instead of just returning None, when communication between stats module and config manager is failed or when it can't find specified statistics data. - also modify the unittest depending on the changes of these behaviors. --- src/bin/stats/stats.py.in | 29 ++++++++++++++++++++------- src/bin/stats/tests/b10-stats_test.py | 15 ++++++++++---- 2 files changed, 33 insertions(+), 11 deletions(-) diff --git a/src/bin/stats/stats.py.in b/src/bin/stats/stats.py.in index bea70168e2..9f24c67a9f 100644 --- a/src/bin/stats/stats.py.in +++ b/src/bin/stats/stats.py.in @@ -197,7 +197,10 @@ class Stats: def update_modules(self): """ - update information of each module + updates information of each module. This method gets each + module's information from the config manager and sets it into + self.modules. If its getting from the config manager fails, it + raises StatsError. 
""" modules = {} seq = self.cc_session.group_sendmsg( @@ -213,12 +216,16 @@ class Stats: if value[mod] and type(value[mod]) is list: spec["statistics"] = value[mod] modules[mod] = isc.config.module_spec.ModuleSpec(spec) + else: + raise StatsError("Updating module spec fails: " + str(value)) modules[self.module_name] = self.mccs.get_module_spec() self.modules = modules def get_statistics_data(self, owner=None, name=None): """ - return statistics data which stats module has of each module + returns statistics data which stats module has of each + module. If it can't find specified statistics data, it raises + StatsError. """ self.update_statistics_data() if owner and name: @@ -235,10 +242,18 @@ class Stats: pass else: return self.statistics_data + raise StatsError("No statistics data found: " + + "owner: " + str(owner) + ", " + + "name: " + str(name)) def update_statistics_data(self, owner=None, **data): """ - change statistics date of specified module into specified data + change statistics date of specified module into specified + data. It updates information of each module first, and it + updates statistics data. If specified data is invalid for + statistics spec of specified owner, it returns a list of error + messeges. If there is no error or if neither owner nor data is + specified in args, it returns None. 
""" self.update_modules() statistics_data = {} @@ -297,10 +312,10 @@ class Stats: if errors: raise StatsError("stats spec file is incorrect: " + ", ".join(errors)) - ret = self.get_statistics_data(owner, name) - if ret is not None: - return isc.config.create_answer(0, ret) - else: + try: + return isc.config.create_answer( + 0, self.get_statistics_data(owner, name)) + except StatsError: return isc.config.create_answer( 1, "specified arguments are incorrect: " \ + "owner: " + str(owner) + ", name: " + str(name)) diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index 0c2fa3c2f3..acf2269c2b 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -296,6 +296,10 @@ class TestStats(unittest.TestCase): my_statistics_data = stats.get_spec_defaults(self.stats.modules['Boss'].get_statistics_spec()) self.assertTrue('boot_time' in my_statistics_data) self.assertEqual(my_statistics_data['boot_time'], "1970-01-01T00:00:00Z") + orig_parse_answer = stats.isc.config.ccsession.parse_answer + stats.isc.config.ccsession.parse_answer = lambda x: (99, 'error') + self.assertRaises(stats.StatsError, self.stats.update_modules) + stats.isc.config.ccsession.parse_answer = orig_parse_answer def test_get_statistics_data(self): my_statistics_data = self.stats.get_statistics_data() @@ -307,7 +311,7 @@ class TestStats(unittest.TestCase): self.assertTrue('last_update_time' in my_statistics_data) self.assertTrue('timestamp' in my_statistics_data) self.assertTrue('lname' in my_statistics_data) - self.assertIsNone(self.stats.get_statistics_data(owner='Foo')) + self.assertRaises(stats.StatsError, self.stats.get_statistics_data, owner='Foo') my_statistics_data = self.stats.get_statistics_data(owner='Stats') self.assertTrue('boot_time' in my_statistics_data) my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='report_time') @@ -320,9 +324,12 @@ class TestStats(unittest.TestCase): 
self.assertEqual(my_statistics_data, 0.0) my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='lname') self.assertEqual(my_statistics_data, '') - self.assertIsNone(self.stats.get_statistics_data(owner='Stats', name='Bar')) - self.assertIsNone(self.stats.get_statistics_data(owner='Foo', name='Bar')) - self.assertEqual(self.stats.get_statistics_data(name='Bar'), None) + self.assertRaises(stats.StatsError, self.stats.get_statistics_data, + owner='Stats', name='Bar') + self.assertRaises(stats.StatsError, self.stats.get_statistics_data, + owner='Foo', name='Bar') + self.assertRaises(stats.StatsError, self.stats.get_statistics_data, + name='Bar') def test_update_statistics_data(self): self.stats.update_statistics_data(owner='Stats', lname='foo@bar') From b8cecbbd905c10d28bcb905def7160d9e406dac4 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Tue, 2 Aug 2011 22:00:11 +0900 Subject: [PATCH 149/175] [trac930] add comments about abstracts of the test scripts in their headers --- src/bin/stats/tests/b10-stats-httpd_test.py | 7 +++++++ src/bin/stats/tests/b10-stats_test.py | 7 +++++++ 2 files changed, 14 insertions(+) diff --git a/src/bin/stats/tests/b10-stats-httpd_test.py b/src/bin/stats/tests/b10-stats-httpd_test.py index 5d22f72441..e1c05dae42 100644 --- a/src/bin/stats/tests/b10-stats-httpd_test.py +++ b/src/bin/stats/tests/b10-stats-httpd_test.py @@ -13,6 +13,13 @@ # NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION # WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. +""" +This unittests run Msgq, Cfgmgr, Auth, Boss and Stats as mock in +background. Because the stats httpd communicates various other modules +in runtime. However this aim is not to actually simulate a whole +system running. 
+""" + import unittest import os import imp diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index acf2269c2b..f8830eb5ea 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -13,6 +13,13 @@ # NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION # WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. +""" +This unittests run Msgq, Cfgmgr, Auth and Boss as mock in +background. Because the stats module communicates various other +modules in runtime. However this aim is not to actually simulate a +whole system running. +""" + import unittest import os import threading From 0314c7bb66b85775dea73c95463eed88e9e286c3 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Wed, 3 Aug 2011 11:41:05 +0900 Subject: [PATCH 150/175] [trac930] refactor unittests - remove time.sleep from various unittests and add in the "run" method in ThreadingServerManager - adjust the sleep time (TIMEOUT_SEC) - join some small unittests (test_start_with_err, test_command_status, test_command_shutdown) --- src/bin/stats/tests/b10-stats-httpd_test.py | 7 ------- src/bin/stats/tests/b10-stats_test.py | 23 +++++++++++---------- src/bin/stats/tests/test_utils.py | 6 ++++-- 3 files changed, 16 insertions(+), 20 deletions(-) diff --git a/src/bin/stats/tests/b10-stats-httpd_test.py b/src/bin/stats/tests/b10-stats-httpd_test.py index e1c05dae42..870e6b9038 100644 --- a/src/bin/stats/tests/b10-stats-httpd_test.py +++ b/src/bin/stats/tests/b10-stats-httpd_test.py @@ -78,7 +78,6 @@ class TestHttpHandler(unittest.TestCase): self.assertTrue(type(self.stats_httpd.httpd) is list) self.assertEqual(len(self.stats_httpd.httpd), 0) statshttpd_server.run() - time.sleep(TIMEOUT_SEC*8) client = http.client.HTTPConnection(address, port) client._http_vsn_str = 'HTTP/1.0\n' client.connect() @@ -175,7 +174,6 @@ class TestHttpHandler(unittest.TestCase): statshttpd_server.run() self.assertTrue(self.stats_server.server.running) 
self.stats_server.shutdown() - time.sleep(TIMEOUT_SEC*2) self.assertFalse(self.stats_server.server.running) statshttpd.cc_session.set_timeout(milliseconds=TIMEOUT_SEC/1000) client = http.client.HTTPConnection(address, port) @@ -213,7 +211,6 @@ class TestHttpHandler(unittest.TestCase): lambda cmd, args: \ isc.config.ccsession.create_answer(1, "I have an error.") ) - time.sleep(TIMEOUT_SEC*5) client = http.client.HTTPConnection(address, port) client.connect() @@ -244,7 +241,6 @@ class TestHttpHandler(unittest.TestCase): self.stats_httpd = statshttpd_server.server self.stats_httpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) statshttpd_server.run() - time.sleep(TIMEOUT_SEC*5) client = http.client.HTTPConnection(address, port) client.connect() client.putrequest('HEAD', stats_httpd.XML_URL_PATH) @@ -423,7 +419,6 @@ class TestStatsHttpd(unittest.TestCase): self.statshttpd_server = ThreadingServerManager(MyStatsHttpd) self.statshttpd_server.server.load_config({'listen_on' : [{ 'address': '127.0.0.1', 'port' : 65454 }]}) self.statshttpd_server.run() - time.sleep(TIMEOUT_SEC) self.stats_httpd.load_config({'listen_on' : [{ 'address': '127.0.0.1', 'port' : 65454 }]}) self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) self.statshttpd_server.shutdown() @@ -434,7 +429,6 @@ class TestStatsHttpd(unittest.TestCase): self.stats_httpd = self.statshttpd_server.server self.stats_httpd.load_config({'listen_on' : [{ 'address': '127.0.0.1', 'port' : 65455 }]}) self.statshttpd_server.run() - time.sleep(TIMEOUT_SEC*2) self.assertTrue(self.stats_httpd.running) self.statshttpd_server.shutdown() self.assertFalse(self.stats_httpd.running) @@ -461,7 +455,6 @@ class TestStatsHttpd(unittest.TestCase): statshttpd = statshttpd_server.server statshttpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) statshttpd_server.run() - time.sleep(TIMEOUT_SEC*2) statshttpd_server.shutdown() def test_open_template(self): diff --git 
a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index f8830eb5ea..b2c1b7fdea 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -189,28 +189,28 @@ class TestStats(unittest.TestCase): stats.SPECFILE_LOCATION = orig_spec_location def test_start(self): + # start without err statsserver = ThreadingServerManager(MyStats) - stats = statsserver.server - self.assertFalse(stats.running) + statsd = statsserver.server + self.assertFalse(statsd.running) statsserver.run() - time.sleep(TIMEOUT_SEC) - self.assertTrue(stats.running) + self.assertTrue(statsd.running) statsserver.shutdown() - self.assertFalse(stats.running) + self.assertFalse(statsd.running) - def test_start_with_err(self): + # start with err statsd = stats.Stats() statsd.update_statistics_data = lambda x,**y: ['an error'] self.assertRaises(stats.StatsError, statsd.start) - def test_config_handler(self): + def test_handlers(self): + # config_handler self.assertEqual(self.stats.config_handler({'foo':'bar'}), isc.config.create_answer(0)) - def test_command_handler(self): + # command_handler statsserver = ThreadingServerManager(MyStats) statsserver.run() - time.sleep(TIMEOUT_SEC*4) self.base.boss.server._started.wait() self.assertEqual( send_command( @@ -352,12 +352,13 @@ class TestStats(unittest.TestCase): self.assertEqual(self.stats.update_statistics_data(owner='Dummy', foo='bar'), ['unknown module name: Dummy']) - def test_command_status(self): + def test_commands(self): + # status self.assertEqual(self.stats.command_status(), isc.config.create_answer( 0, "Stats is up. 
(PID " + str(os.getpid()) + ")")) - def test_command_shutdown(self): + # shutdown self.stats.running = True self.assertEqual(self.stats.command_shutdown(), isc.config.create_answer(0)) diff --git a/src/bin/stats/tests/test_utils.py b/src/bin/stats/tests/test_utils.py index a793dc69a6..f9ab969e13 100644 --- a/src/bin/stats/tests/test_utils.py +++ b/src/bin/stats/tests/test_utils.py @@ -15,9 +15,9 @@ import stats import stats_httpd # TODO: consider appropriate timeout seconds -TIMEOUT_SEC = 0.01 +TIMEOUT_SEC = 0.05 -def send_command(command_name, module_name, params=None, session=None, nonblock=False, timeout=TIMEOUT_SEC*2): +def send_command(command_name, module_name, params=None, session=None, nonblock=False, timeout=TIMEOUT_SEC): if not session: cc_session = isc.cc.Session() else: @@ -53,6 +53,8 @@ class ThreadingServerManager: def run(self): self.server._thread.start() self.server._started.wait() + # waiting for the server's being ready for listening + time.sleep(TIMEOUT_SEC) def shutdown(self): self.server.shutdown() From da5d5926cb26ca8dbdae119c03687cd3415f6638 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 5 Aug 2011 14:48:27 +0900 Subject: [PATCH 151/175] [trac930] - change address for test to 127.0.0.1 due to platform 127.0.0.2 can't be assigned - remove unnecessary thread.Event.wait() - add thread.Event.clear() after thread.Event.wait() --- src/bin/stats/tests/b10-stats-httpd_test.py | 4 ++-- src/bin/stats/tests/b10-stats_test.py | 1 - src/bin/stats/tests/test_utils.py | 1 + 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/bin/stats/tests/b10-stats-httpd_test.py b/src/bin/stats/tests/b10-stats-httpd_test.py index 870e6b9038..3677c5f1f3 100644 --- a/src/bin/stats/tests/b10-stats-httpd_test.py +++ b/src/bin/stats/tests/b10-stats-httpd_test.py @@ -512,13 +512,13 @@ class TestStatsHttpd(unittest.TestCase): self.assertEqual( self.stats_httpd.config_handler( - dict(listen_on=[dict(address="127.0.0.2",port=8000)])), + 
dict(listen_on=[dict(address="127.0.0.1",port=8000)])), isc.config.ccsession.create_answer(0)) self.assertTrue("listen_on" in self.stats_httpd.config) for addr in self.stats_httpd.config["listen_on"]: self.assertTrue("address" in addr) self.assertTrue("port" in addr) - self.assertTrue(addr["address"] == "127.0.0.2") + self.assertTrue(addr["address"] == "127.0.0.1") self.assertTrue(addr["port"] == 8000) if self.ipv6_enabled: diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index b2c1b7fdea..2757c61ece 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -211,7 +211,6 @@ class TestStats(unittest.TestCase): # command_handler statsserver = ThreadingServerManager(MyStats) statsserver.run() - self.base.boss.server._started.wait() self.assertEqual( send_command( 'show', 'Stats', diff --git a/src/bin/stats/tests/test_utils.py b/src/bin/stats/tests/test_utils.py index f9ab969e13..e79db48951 100644 --- a/src/bin/stats/tests/test_utils.py +++ b/src/bin/stats/tests/test_utils.py @@ -53,6 +53,7 @@ class ThreadingServerManager: def run(self): self.server._thread.start() self.server._started.wait() + self.server._started.clear() # waiting for the server's being ready for listening time.sleep(TIMEOUT_SEC) From fcc707041d663b98c1992cdd1402cc183155d3c0 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Fri, 5 Aug 2011 16:24:03 +0900 Subject: [PATCH 152/175] [trac930] - revise header comments in each test script - replace some hard-coded time strings with the constants defined in the setUp function - merged several checks about B10_FROM_SOURCE into the TestOSEnv class --- src/bin/stats/tests/b10-stats-httpd_test.py | 9 ++- src/bin/stats/tests/b10-stats_test.py | 90 ++++++++++++--------- 2 files changed, 55 insertions(+), 44 deletions(-) diff --git a/src/bin/stats/tests/b10-stats-httpd_test.py b/src/bin/stats/tests/b10-stats-httpd_test.py index 3677c5f1f3..8c84277930 100644 --- 
a/src/bin/stats/tests/b10-stats-httpd_test.py +++ b/src/bin/stats/tests/b10-stats-httpd_test.py @@ -14,10 +14,11 @@ # WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. """ -This unittests run Msgq, Cfgmgr, Auth, Boss and Stats as mock in -background. Because the stats httpd communicates various other modules -in runtime. However this aim is not to actually simulate a whole -system running. +In each of these tests we start several virtual components. They are +not the real components, no external processes are started. They are +just simple mock objects running each in its own thread and pretending +to be bind10 modules. This helps testing the stats http server in a +close to real environment. """ import unittest diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index 2757c61ece..7cf4f7ede0 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -14,10 +14,11 @@ # WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. """ -This unittests run Msgq, Cfgmgr, Auth and Boss as mock in -background. Because the stats module communicates various other -modules in runtime. However this aim is not to actually simulate a -whole system running. +In each of these tests we start several virtual components. They are +not the real components, no external processes are started. They are +just simple mock objects running each in its own thread and pretending +to be bind10 modules. This helps testing the stats module in a close +to real environment. 
""" import unittest @@ -146,11 +147,9 @@ class TestStats(unittest.TestCase): def setUp(self): self.base = BaseModules() self.stats = stats.Stats() - self.assertTrue("B10_FROM_SOURCE" in os.environ) - self.assertEqual(stats.SPECFILE_LOCATION, \ - os.environ["B10_FROM_SOURCE"] + os.sep + \ - "src" + os.sep + "bin" + os.sep + "stats" + \ - os.sep + "stats.spec") + self.const_timestamp = 1308730448.965706 + self.const_datetime = '2011-06-22T08:14:08Z' + self.const_default_datetime = '1970-01-01T00:00:00Z' def tearDown(self): self.base.shutdown() @@ -216,19 +215,19 @@ class TestStats(unittest.TestCase): 'show', 'Stats', params={ 'owner' : 'Boss', 'name' : 'boot_time' }), - (0, '2011-06-22T08:14:08Z')) + (0, self.const_datetime)) self.assertEqual( send_command( 'set', 'Stats', params={ 'owner' : 'Boss', - 'data' : { 'boot_time' : '2012-06-22T18:24:08Z' } }), + 'data' : { 'boot_time' : self.const_datetime } }), (0, None)) self.assertEqual( send_command( 'show', 'Stats', params={ 'owner' : 'Boss', 'name' : 'boot_time' }), - (0, '2012-06-22T18:24:08Z')) + (0, self.const_datetime)) self.assertEqual( send_command('status', 'Stats'), (0, "Stats is up. 
(PID " + str(os.getpid()) + ")")) @@ -242,7 +241,7 @@ class TestStats(unittest.TestCase): self.assertEqual(len(value['Stats']), 5) self.assertEqual(len(value['Boss']), 1) self.assertTrue('boot_time' in value['Boss']) - self.assertEqual(value['Boss']['boot_time'], '2012-06-22T18:24:08Z') + self.assertEqual(value['Boss']['boot_time'], self.const_datetime) self.assertTrue('report_time' in value['Stats']) self.assertTrue('boot_time' in value['Stats']) self.assertTrue('last_update_time' in value['Stats']) @@ -294,14 +293,14 @@ class TestStats(unittest.TestCase): self.assertTrue('last_update_time' in my_statistics_data) self.assertTrue('timestamp' in my_statistics_data) self.assertTrue('lname' in my_statistics_data) - self.assertEqual(my_statistics_data['report_time'], "1970-01-01T00:00:00Z") - self.assertEqual(my_statistics_data['boot_time'], "1970-01-01T00:00:00Z") - self.assertEqual(my_statistics_data['last_update_time'], "1970-01-01T00:00:00Z") + self.assertEqual(my_statistics_data['report_time'], self.const_default_datetime) + self.assertEqual(my_statistics_data['boot_time'], self.const_default_datetime) + self.assertEqual(my_statistics_data['last_update_time'], self.const_default_datetime) self.assertEqual(my_statistics_data['timestamp'], 0.0) self.assertEqual(my_statistics_data['lname'], "") my_statistics_data = stats.get_spec_defaults(self.stats.modules['Boss'].get_statistics_spec()) self.assertTrue('boot_time' in my_statistics_data) - self.assertEqual(my_statistics_data['boot_time'], "1970-01-01T00:00:00Z") + self.assertEqual(my_statistics_data['boot_time'], self.const_default_datetime) orig_parse_answer = stats.isc.config.ccsession.parse_answer stats.isc.config.ccsession.parse_answer = lambda x: (99, 'error') self.assertRaises(stats.StatsError, self.stats.update_modules) @@ -321,11 +320,11 @@ class TestStats(unittest.TestCase): my_statistics_data = self.stats.get_statistics_data(owner='Stats') self.assertTrue('boot_time' in my_statistics_data) 
my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='report_time') - self.assertEqual(my_statistics_data, "1970-01-01T00:00:00Z") + self.assertEqual(my_statistics_data, self.const_default_datetime) my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='boot_time') - self.assertEqual(my_statistics_data, "1970-01-01T00:00:00Z") + self.assertEqual(my_statistics_data, self.const_default_datetime) my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='last_update_time') - self.assertEqual(my_statistics_data, "1970-01-01T00:00:00Z") + self.assertEqual(my_statistics_data, self.const_default_datetime) my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='timestamp') self.assertEqual(my_statistics_data, 0.0) my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='lname') @@ -342,10 +341,10 @@ class TestStats(unittest.TestCase): self.assertTrue('Stats' in self.stats.statistics_data) my_statistics_data = self.stats.statistics_data['Stats'] self.assertEqual(my_statistics_data['lname'], 'foo@bar') - self.stats.update_statistics_data(owner='Stats', last_update_time='2000-01-01T10:10:10Z') + self.stats.update_statistics_data(owner='Stats', last_update_time=self.const_datetime) self.assertTrue('Stats' in self.stats.statistics_data) my_statistics_data = self.stats.statistics_data['Stats'] - self.assertEqual(my_statistics_data['last_update_time'], '2000-01-01T10:10:10Z') + self.assertEqual(my_statistics_data['last_update_time'], self.const_datetime) self.assertEqual(self.stats.update_statistics_data(owner='Stats', lname=0.0), ['0.0 should be a string']) self.assertEqual(self.stats.update_statistics_data(owner='Dummy', foo='bar'), @@ -381,14 +380,14 @@ class TestStats(unittest.TestCase): 0, 0)) orig_get_timestamp = stats.get_timestamp orig_get_datetime = stats.get_datetime - stats.get_timestamp = lambda : 1308730448.965706 - stats.get_datetime = lambda : '2011-06-22T08:14:08Z' - 
self.assertEqual(stats.get_timestamp(), 1308730448.965706) - self.assertEqual(stats.get_datetime(), '2011-06-22T08:14:08Z') + stats.get_timestamp = lambda : self.const_timestamp + stats.get_datetime = lambda : self.const_datetime + self.assertEqual(stats.get_timestamp(), self.const_timestamp) + self.assertEqual(stats.get_datetime(), self.const_datetime) self.assertEqual(self.stats.command_show(owner='Stats', name='report_time'), \ - isc.config.create_answer(0, '2011-06-22T08:14:08Z')) - self.assertEqual(self.stats.statistics_data['Stats']['timestamp'], 1308730448.965706) - self.assertEqual(self.stats.statistics_data['Stats']['boot_time'], '1970-01-01T00:00:00Z') + isc.config.create_answer(0, self.const_datetime)) + self.assertEqual(self.stats.statistics_data['Stats']['timestamp'], self.const_timestamp) + self.assertEqual(self.stats.statistics_data['Stats']['boot_time'], self.const_default_datetime) stats.get_timestamp = orig_get_timestamp stats.get_datetime = orig_get_datetime self.stats.mccs.specification = isc.config.module_spec.ModuleSpec( @@ -520,17 +519,17 @@ class TestStats(unittest.TestCase): def test_command_set(self): orig_get_datetime = stats.get_datetime - stats.get_datetime = lambda : '2011-06-22T06:12:38Z' + stats.get_datetime = lambda : self.const_datetime (rcode, value) = isc.config.ccsession.parse_answer( self.stats.command_set(owner='Boss', - data={ 'boot_time' : '2011-06-22T13:15:04Z' })) + data={ 'boot_time' : self.const_datetime })) stats.get_datetime = orig_get_datetime self.assertEqual(rcode, 0) self.assertTrue(value is None) self.assertEqual(self.stats.statistics_data['Boss']['boot_time'], - '2011-06-22T13:15:04Z') + self.const_datetime) self.assertEqual(self.stats.statistics_data['Stats']['last_update_time'], - '2011-06-22T06:12:38Z') + self.const_datetime) self.assertEqual(self.stats.command_set(owner='Stats', data={ 'lname' : 'foo@bar' }), isc.config.create_answer(0, None)) @@ -566,16 +565,27 @@ class TestStats(unittest.TestCase): 
self.assertRaises(stats.StatsError, self.stats.command_set, owner='Stats', data={ 'dummy' : '_xxxx_yyyy_zzz_' }) +class TestOSEnv(unittest.TestCase): def test_osenv(self): """ - test for not having environ "B10_FROM_SOURCE" + test for the environ variable "B10_FROM_SOURCE" + "B10_FROM_SOURCE" is set in Makefile """ - if "B10_FROM_SOURCE" in os.environ: - path = os.environ["B10_FROM_SOURCE"] - os.environ.pop("B10_FROM_SOURCE") - imp.reload(stats) - os.environ["B10_FROM_SOURCE"] = path - imp.reload(stats) + # test case having B10_FROM_SOURCE + self.assertTrue("B10_FROM_SOURCE" in os.environ) + self.assertEqual(stats.SPECFILE_LOCATION, \ + os.environ["B10_FROM_SOURCE"] + os.sep + \ + "src" + os.sep + "bin" + os.sep + "stats" + \ + os.sep + "stats.spec") + # test case not having B10_FROM_SOURCE + path = os.environ["B10_FROM_SOURCE"] + os.environ.pop("B10_FROM_SOURCE") + self.assertFalse("B10_FROM_SOURCE" in os.environ) + # import stats again + imp.reload(stats) + # revert the changes + os.environ["B10_FROM_SOURCE"] = path + imp.reload(stats) def test_main(): unittest.main() From f20be125d667bceea0d940fc5fabf87b2eef86cd Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Tue, 9 Aug 2011 15:57:22 +0900 Subject: [PATCH 153/175] [trac930] revise the entry of ChangeLog for trac928, trac929 and trac930 --- ChangeLog | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/ChangeLog b/ChangeLog index d4cd88de14..3e1efbef88 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,11 +1,8 @@ -xxx. [func] naokikambe - Add statistics category in each module spec file for management of - statistics data schemas by each module. Add get_statistics_spec into - cfgmgr and related codes. show statistics data and data schema by each - module via both bintcl and HTTP/XML interfaces. Change item name in - each statistics data. (Remove prefix "xxx." indicating the module - name.) Add new mock modules for unittests of stats and stats httpd - modules. +xxx. 
[func] naokikambe + Statistics items are specified by each module's spec file. + Stats module can read these through the config manager. Stats + module and stats httpd report statistics data and statistics + schema by each module via both bindctl and HTTP/XML. (Trac #928,#929,#930, git nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn) 278. [doc] jelte From 004afad6ea3fba7c8dd7730428b50fd770daec66 Mon Sep 17 00:00:00 2001 From: Naoki Kambe Date: Mon, 15 Aug 2011 14:59:08 +0900 Subject: [PATCH 154/175] [master] update the ChangeLog entry for trac928, trac929 and trac930 --- ChangeLog | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/ChangeLog b/ChangeLog index 3e1efbef88..94b9a22f48 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,9 +1,9 @@ -xxx. [func] naokikambe +279. [func] naokikambe Statistics items are specified by each module's spec file. Stats module can read these through the config manager. Stats module and stats httpd report statistics data and statistics schema by each module via both bindctl and HTTP/XML. - (Trac #928,#929,#930, git nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn) + (Trac #928,#929,#930, git f20be125d667bceea0d940fc5fabf87b2eef86cd) 278. [doc] jelte Add logging configuration documentation to the guide. 
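The ChangeLog entry above says each module declares its statistics items in its spec file, and the stats module reads them (with their defaults) through the config manager. The earlier test hunks exercise this via get_spec_defaults(), which expands a spec into default statistics data. A minimal sketch of that expansion, assuming an "item_name"/"item_default" spec layout inferred from the test expectations rather than taken from the real stats module:

```python
# Hypothetical sketch: fold a per-module statistics spec into its
# default data, mirroring the get_spec_defaults() behaviour exercised
# by the tests above. The spec item keys ("item_name", "item_type",
# "item_default") are assumptions based on the test expectations.
def get_spec_defaults(spec):
    """Map each statistics item name to its declared default value."""
    if spec is None:
        return {}
    return {item["item_name"]: item.get("item_default") for item in spec}

# A spec like the one the Boss mock would provide in the tests.
boss_spec = [{"item_name": "boot_time",
              "item_type": "string",
              "item_default": "1970-01-01T00:00:00Z"}]

print(get_spec_defaults(boss_spec))  # → {'boot_time': '1970-01-01T00:00:00Z'}
```

This matches the test assertion that a freshly loaded Boss spec yields boot_time == "1970-01-01T00:00:00Z" before any module has pushed real data.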
From 3ce7b09732207eac03998fa5e267672760e475c9 Mon Sep 17 00:00:00 2001 From: Jelte Jansen Date: Mon, 15 Aug 2011 17:24:29 +0200 Subject: [PATCH 155/175] [1063] a couple of style fixes --- src/lib/datasrc/database.cc | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 6afd3dce85..287602ab1f 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -51,7 +51,7 @@ DatabaseClient::findZone(const Name& name) const { ZoneFinderPtr(new Finder(database_, zone.second, name)))); } - // Than super domains + // Then super domains // Start from 1, as 0 is covered above for (size_t i(1); i < name.getLabelCount(); ++i) { isc::dns::Name superdomain(name.split(i)); @@ -276,7 +276,7 @@ DatabaseClient::Finder::getRRset(const isc::dns::Name& name, if (result_rrset) { sig_store.appendSignatures(result_rrset); } - return std::pair(records_found, result_rrset); + return (std::pair(records_found, result_rrset)); } @@ -298,16 +298,17 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, try { // First, do we have any kind of delegation (NS/DNAME) here? 
Name origin(getOrigin()); - size_t originLabelCount(origin.getLabelCount()); - size_t currentLabelCount(name.getLabelCount()); + size_t origin_label_count(origin.getLabelCount()); + size_t current_label_count(name.getLabelCount()); // This is how many labels we remove to get origin - size_t removeLabels(currentLabelCount - originLabelCount); + size_t remove_labels(current_label_count - origin_label_count); + // Now go through all superdomains from origin down - for (int i(removeLabels); i > 0; -- i) { + for (int i(remove_labels); i > 0; --i) { Name superdomain(name.split(i)); // Look if there's NS or DNAME (but ignore the NS in origin) found = getRRset(superdomain, NULL, false, true, - i != removeLabels); + i != remove_labels); if (found.second) { // We found something redirecting somewhere else // (it can be only NS or DNAME here) From b4a1bc9ba28398dbd5fdbe4ee4f118a2faf59efa Mon Sep 17 00:00:00 2001 From: chenzhengzhang Date: Tue, 16 Aug 2011 10:57:17 +0800 Subject: [PATCH 156/175] [trac1114] implement afsdb rdata --- src/lib/dns/Makefile.am | 2 + src/lib/dns/rdata/generic/afsdb_18.cc | 168 ++++++++++++++ src/lib/dns/rdata/generic/afsdb_18.h | 74 ++++++ src/lib/dns/tests/Makefile.am | 1 + src/lib/dns/tests/rdata_afsdb_unittest.cc | 210 ++++++++++++++++++ src/lib/dns/tests/testdata/Makefile.am | 8 + .../tests/testdata/rdata_afsdb_fromWire1.spec | 3 + .../tests/testdata/rdata_afsdb_fromWire2.spec | 6 + .../tests/testdata/rdata_afsdb_fromWire3.spec | 4 + .../tests/testdata/rdata_afsdb_fromWire4.spec | 4 + .../tests/testdata/rdata_afsdb_fromWire5.spec | 4 + .../tests/testdata/rdata_afsdb_toWire1.spec | 4 + .../tests/testdata/rdata_afsdb_toWire2.spec | 8 + src/lib/util/python/gen_wiredata.py.in | 21 ++ 14 files changed, 517 insertions(+) create mode 100644 src/lib/dns/rdata/generic/afsdb_18.cc create mode 100644 src/lib/dns/rdata/generic/afsdb_18.h create mode 100644 src/lib/dns/tests/rdata_afsdb_unittest.cc create mode 100644 
src/lib/dns/tests/testdata/rdata_afsdb_fromWire1.spec create mode 100644 src/lib/dns/tests/testdata/rdata_afsdb_fromWire2.spec create mode 100644 src/lib/dns/tests/testdata/rdata_afsdb_fromWire3.spec create mode 100644 src/lib/dns/tests/testdata/rdata_afsdb_fromWire4.spec create mode 100644 src/lib/dns/tests/testdata/rdata_afsdb_fromWire5.spec create mode 100644 src/lib/dns/tests/testdata/rdata_afsdb_toWire1.spec create mode 100644 src/lib/dns/tests/testdata/rdata_afsdb_toWire2.spec diff --git a/src/lib/dns/Makefile.am b/src/lib/dns/Makefile.am index 4a0173cb17..43737a9cb0 100644 --- a/src/lib/dns/Makefile.am +++ b/src/lib/dns/Makefile.am @@ -51,6 +51,8 @@ EXTRA_DIST += rdata/generic/soa_6.cc EXTRA_DIST += rdata/generic/soa_6.h EXTRA_DIST += rdata/generic/txt_16.cc EXTRA_DIST += rdata/generic/txt_16.h +EXTRA_DIST += rdata/generic/afsdb_18.cc +EXTRA_DIST += rdata/generic/afsdb_18.h EXTRA_DIST += rdata/hs_4/a_1.cc EXTRA_DIST += rdata/hs_4/a_1.h EXTRA_DIST += rdata/in_1/a_1.cc diff --git a/src/lib/dns/rdata/generic/afsdb_18.cc b/src/lib/dns/rdata/generic/afsdb_18.cc new file mode 100644 index 0000000000..0aca23f133 --- /dev/null +++ b/src/lib/dns/rdata/generic/afsdb_18.cc @@ -0,0 +1,168 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +#include +#include + +#include +#include + +#include +#include +#include +#include + +using namespace std; +using namespace isc::util::str; + +// BEGIN_ISC_NAMESPACE +// BEGIN_RDATA_NAMESPACE + +/// \brief Constructor from string. +/// +/// \c afsdb_str must be formatted as follows: +/// \code <subtype> <server name> +/// \endcode +/// where server name field must represent a valid domain name. +/// +/// An example of valid string is: +/// \code "1 server.example.com." \endcode +/// +/// Exceptions +/// +/// \exception InvalidRdataText The number of RDATA fields (must be 2) is +/// incorrect. +/// \exception std::bad_alloc Memory allocation fails. +/// \exception Other The constructor of the \c Name class will throw if the +/// names in the string is invalid. +AFSDB::AFSDB(const std::string& afsdb_str) : + subtype_(0), server_(Name::ROOT_NAME()) +{ + istringstream iss(afsdb_str); + + try { + const uint32_t subtype = tokenToNum(getToken(iss)); + const Name servername(getToken(iss)); + string server; + + if (!iss.eof()) { + isc_throw(InvalidRdataText, "Unexpected input for AFSDB " + "RDATA: " << afsdb_str); + } + + subtype_ = subtype; + server_ = servername; + + } catch (const StringTokenError& ste) { + isc_throw(InvalidRdataText, "Invalid AFSDB text: " << + ste.what() << ": " << afsdb_str); + } +} + +/// \brief Constructor from wire-format data. +/// +/// This constructor doesn't check the validity of the second parameter (rdata +/// length) for parsing. +/// If necessary, the caller will check consistency. +/// +/// \exception std::bad_alloc Memory allocation fails. 
+/// \exception Other The constructor of the \c Name class will throw if the +/// names in the wire is invalid. +AFSDB::AFSDB(InputBuffer& buffer, size_t) : + subtype_(buffer.readUint16()), server_(buffer) +{} + +/// \brief Copy constructor. +/// +/// \exception std::bad_alloc Memory allocation fails in copying internal +/// member variables (this should be very rare). +AFSDB::AFSDB(const AFSDB& other) : + Rdata(), subtype_(other.subtype_), server_(other.server_) +{} + +AFSDB& +AFSDB::operator=(const AFSDB& source) { + subtype_ = source.subtype_; + server_ = source.server_; + + return (*this); +} + +/// \brief Convert the \c AFSDB to a string. +/// +/// The output of this method is formatted as described in the "from string" +/// constructor (\c AFSDB(const std::string&))). +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \return A \c string object that represents the \c AFSDB object. +string +AFSDB::toText() const { + return (lexical_cast<string>(subtype_) + " " + server_.toText()); +} + +/// \brief Render the \c AFSDB in the wire format without name compression. +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \param buffer An output buffer to store the wire data. +void +AFSDB::toWire(OutputBuffer& buffer) const { + buffer.writeUint16(subtype_); + server_.toWire(buffer); +} + +/// \brief Render the \c AFSDB in the wire format with taking into account +/// compression. +/// +/// As specified in RFC3597, TYPE AFSDB is not "well-known", the server +/// field (domain name) will not be compressed. +/// +/// \exception std::bad_alloc Internal resource allocation fails. +/// +/// \param renderer DNS message rendering context that encapsulates the +/// output buffer and name compression information. +void +AFSDB::toWire(AbstractMessageRenderer& renderer) const { + renderer.writeUint16(subtype_); + renderer.writeName(server_, false); +} + +/// \brief Compare two instances of \c AFSDB RDATA. 
+/// +/// See documentation in \c Rdata. +int +AFSDB::compare(const Rdata& other) const { + const AFSDB& other_afsdb = dynamic_cast<const AFSDB&>(other); + if (subtype_ < other_afsdb.subtype_) { + return (-1); + } else if (subtype_ > other_afsdb.subtype_) { + return (1); + } + + return (compareNames(server_, other_afsdb.server_)); +} + +const Name& +AFSDB::getServer() const { + return (server_); +} + +uint16_t +AFSDB::getSubtype() const { + return (subtype_); +} + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE diff --git a/src/lib/dns/rdata/generic/afsdb_18.h b/src/lib/dns/rdata/generic/afsdb_18.h new file mode 100644 index 0000000000..4a4677502c --- /dev/null +++ b/src/lib/dns/rdata/generic/afsdb_18.h @@ -0,0 +1,74 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. +// +// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH +// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, +// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE +// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +// PERFORMANCE OF THIS SOFTWARE. + +// BEGIN_HEADER_GUARD + +#include + +#include + +#include +#include + +// BEGIN_ISC_NAMESPACE + +// BEGIN_COMMON_DECLARATIONS +// END_COMMON_DECLARATIONS + +// BEGIN_RDATA_NAMESPACE + +/// \brief \c rdata::AFSDB class represents the AFSDB RDATA as defined %in +/// RFC1183. +/// +/// This class implements the basic interfaces inherited from the abstract +/// \c rdata::Rdata class, and provides trivial accessors specific to the +/// AFSDB RDATA. 
+class AFSDB : public Rdata { +public: + // BEGIN_COMMON_MEMBERS + // END_COMMON_MEMBERS + + /// \brief Assignment operator. + /// + /// This method never throws an exception. + AFSDB& operator=(const AFSDB& source); + /// + /// Specialized methods + /// + + /// \brief Return the value of the server field. + /// + /// \return A reference to a \c Name class object corresponding to the + /// internal server name. + /// + /// This method never throws an exception. + const Name& getServer() const; + + /// \brief Return the value of the subtype field. + /// + /// This method never throws an exception. + uint16_t getSubtype() const; + +private: + uint16_t subtype_; + Name server_; +}; + +// END_RDATA_NAMESPACE +// END_ISC_NAMESPACE +// END_HEADER_GUARD + +// Local Variables: +// mode: c++ +// End: diff --git a/src/lib/dns/tests/Makefile.am b/src/lib/dns/tests/Makefile.am index bd6fbe2a6e..ab33a17b37 100644 --- a/src/lib/dns/tests/Makefile.am +++ b/src/lib/dns/tests/Makefile.am @@ -32,6 +32,7 @@ run_unittests_SOURCES += rdata_ns_unittest.cc rdata_soa_unittest.cc run_unittests_SOURCES += rdata_txt_unittest.cc rdata_mx_unittest.cc run_unittests_SOURCES += rdata_ptr_unittest.cc rdata_cname_unittest.cc run_unittests_SOURCES += rdata_dname_unittest.cc +run_unittests_SOURCES += rdata_afsdb_unittest.cc run_unittests_SOURCES += rdata_opt_unittest.cc run_unittests_SOURCES += rdata_dnskey_unittest.cc run_unittests_SOURCES += rdata_ds_unittest.cc diff --git a/src/lib/dns/tests/rdata_afsdb_unittest.cc b/src/lib/dns/tests/rdata_afsdb_unittest.cc new file mode 100644 index 0000000000..7df8d83659 --- /dev/null +++ b/src/lib/dns/tests/rdata_afsdb_unittest.cc @@ -0,0 +1,210 @@ +// Copyright (C) 2011 Internet Systems Consortium, Inc. ("ISC") +// +// Permission to use, copy, modify, and/or distribute this software for any +// purpose with or without fee is hereby granted, provided that the above +// copyright notice and this permission notice appear in all copies. 
+//
+// THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH
+// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
+// AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT,
+// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
+// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
+// OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
+// PERFORMANCE OF THIS SOFTWARE.
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#include
+#include
+
+using isc::UnitTestUtil;
+using namespace std;
+using namespace isc::dns;
+using namespace isc::util;
+using namespace isc::dns::rdata;
+
+const char* const afsdb_text = "1 afsdb.example.com.";
+const char* const afsdb_text2 = "0 root.example.com.";
+const char* const too_long_label("012345678901234567890123456789"
+    "0123456789012345678901234567890123");
+
+namespace {
+class Rdata_AFSDB_Test : public RdataTest {
+protected:
+    Rdata_AFSDB_Test() :
+        rdata_afsdb(string(afsdb_text)), rdata_afsdb2(string(afsdb_text2))
+    {}
+
+    const generic::AFSDB rdata_afsdb;
+    const generic::AFSDB rdata_afsdb2;
+    vector<uint8_t> expected_wire;
+};
+
+
+TEST_F(Rdata_AFSDB_Test, createFromText) {
+    EXPECT_EQ(1, rdata_afsdb.getSubtype());
+    EXPECT_EQ(Name("afsdb.example.com."), rdata_afsdb.getServer());
+
+    EXPECT_EQ(0, rdata_afsdb2.getSubtype());
+    EXPECT_EQ(Name("root.example.com."), rdata_afsdb2.getServer());
+}
+
+TEST_F(Rdata_AFSDB_Test, badText) {
+    // subtype is too large
+    EXPECT_THROW(const generic::AFSDB rdata_afsdb("99999999 afsdb.example.com."),
+                 InvalidRdataText);
+    // incomplete text
+    EXPECT_THROW(const generic::AFSDB rdata_afsdb("10"), InvalidRdataText);
+    EXPECT_THROW(const generic::AFSDB rdata_afsdb("SPOON"), InvalidRdataText);
+    EXPECT_THROW(const generic::AFSDB rdata_afsdb("1root.example.com."), InvalidRdataText);
+    // number of fields (must be 2) is incorrect
EXPECT_THROW(const generic::AFSDB rdata_afsdb("10 afsdb. example.com."), + InvalidRdataText); + // bad name + EXPECT_THROW(const generic::AFSDB rdata_afsdb("1 afsdb.example.com." + + string(too_long_label)), TooLongLabel); +} + +TEST_F(Rdata_AFSDB_Test, assignment) { + generic::AFSDB copy((string(afsdb_text2))); + copy = rdata_afsdb; + EXPECT_EQ(0, copy.compare(rdata_afsdb)); + + // Check if the copied data is valid even after the original is deleted + generic::AFSDB* copy2 = new generic::AFSDB(rdata_afsdb); + generic::AFSDB copy3((string(afsdb_text2))); + copy3 = *copy2; + delete copy2; + EXPECT_EQ(0, copy3.compare(rdata_afsdb)); + + // Self assignment + copy = copy; + EXPECT_EQ(0, copy.compare(rdata_afsdb)); +} + +TEST_F(Rdata_AFSDB_Test, createFromWire) { + // uncompressed names + EXPECT_EQ(0, rdata_afsdb.compare( + *rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire1.wire"))); + // compressed name + EXPECT_EQ(0, rdata_afsdb.compare( + *rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire2.wire", 13))); + // RDLENGTH is too short + EXPECT_THROW(rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire3.wire"), + InvalidRdataLength); + // RDLENGTH is too long + EXPECT_THROW(rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire4.wire"), + InvalidRdataLength); + // bogus server name, the error should be detected in the name + // constructor + EXPECT_THROW(rdataFactoryFromFile(RRType::AFSDB(), RRClass::IN(), + "rdata_afsdb_fromWire5.wire"), + DNSMessageFORMERR); +} + +TEST_F(Rdata_AFSDB_Test, toWireBuffer) { + // construct actual data + rdata_afsdb.toWire(obuffer); + + // construct expected data + UnitTestUtil::readWireData("rdata_afsdb_toWire1.wire", expected_wire); + + // then compare them + EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData, + obuffer.getData(), obuffer.getLength(), + &expected_wire[0], expected_wire.size()); + + // clear buffer for the next test + 
+    obuffer.clear();
+
+    // construct actual data
+    Name("example.com.").toWire(obuffer);
+    rdata_afsdb2.toWire(obuffer);
+
+    // construct expected data
+    UnitTestUtil::readWireData("rdata_afsdb_toWire2.wire", expected_wire);
+
+    // then compare them
+    EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData,
+                        obuffer.getData(), obuffer.getLength(),
+                        &expected_wire[0], expected_wire.size());
+}
+
+TEST_F(Rdata_AFSDB_Test, toWireRenderer) {
+    // similar to toWireBuffer, but names in RDATA could be compressed due to
+    // preceding names. Actually they must not be compressed according to
+    // RFC3597, and this test checks that.
+
+    // construct actual data
+    rdata_afsdb.toWire(renderer);
+
+    // construct expected data
+    UnitTestUtil::readWireData("rdata_afsdb_toWire1.wire", expected_wire);
+
+    // then compare them
+    EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData,
+                        renderer.getData(), renderer.getLength(),
+                        &expected_wire[0], expected_wire.size());
+
+    // clear renderer for the next test
+    renderer.clear();
+
+    // construct actual data; write the preceding name via the renderer so
+    // that it is registered for compression and the check is meaningful
+    Name("example.com.").toWire(renderer);
+    rdata_afsdb2.toWire(renderer);
+
+    // construct expected data
+    UnitTestUtil::readWireData("rdata_afsdb_toWire2.wire", expected_wire);
+
+    // then compare them
+    EXPECT_PRED_FORMAT4(UnitTestUtil::matchWireData,
+                        renderer.getData(), renderer.getLength(),
+                        &expected_wire[0], expected_wire.size());
+}
+
+TEST_F(Rdata_AFSDB_Test, toText) {
+    EXPECT_EQ(afsdb_text, rdata_afsdb.toText());
+    EXPECT_EQ(afsdb_text2, rdata_afsdb2.toText());
+}
+
+TEST_F(Rdata_AFSDB_Test, compare) {
+    // check reflexivity
+    EXPECT_EQ(0, rdata_afsdb.compare(rdata_afsdb));
+
+    // name must be compared in case-insensitive manner
+    EXPECT_EQ(0, rdata_afsdb.compare(generic::AFSDB("1 "
+                                                    "AFSDB.example.com.")));
+
+    const generic::AFSDB small1("10 afsdb.example.com");
+    const generic::AFSDB large1("65535 afsdb.example.com");
+    const generic::AFSDB large2("256 afsdb.example.com");
+
+    // confirm these are compared as unsigned values
EXPECT_GT(0, rdata_afsdb.compare(large1)); + EXPECT_LT(0, large1.compare(rdata_afsdb)); + + // confirm these are compared in network byte order + EXPECT_GT(0, small1.compare(large2)); + EXPECT_LT(0, large2.compare(small1)); + + // another AFSDB whose server name is larger than that of rdata_afsdb. + const generic::AFSDB large3("256 zzzzz.example.com"); + EXPECT_GT(0, large2.compare(large3)); + EXPECT_LT(0, large3.compare(large2)); + + // comparison attempt between incompatible RR types should be rejected + EXPECT_THROW(rdata_afsdb.compare(*rdata_nomatch), bad_cast); +} +} diff --git a/src/lib/dns/tests/testdata/Makefile.am b/src/lib/dns/tests/testdata/Makefile.am index 743b5d2418..d93470eed2 100644 --- a/src/lib/dns/tests/testdata/Makefile.am +++ b/src/lib/dns/tests/testdata/Makefile.am @@ -30,6 +30,10 @@ BUILT_SOURCES += rdata_rp_fromWire1.wire rdata_rp_fromWire2.wire BUILT_SOURCES += rdata_rp_fromWire3.wire rdata_rp_fromWire4.wire BUILT_SOURCES += rdata_rp_fromWire5.wire rdata_rp_fromWire6.wire BUILT_SOURCES += rdata_rp_toWire1.wire rdata_rp_toWire2.wire +BUILT_SOURCES += rdata_afsdb_fromWire1.wire rdata_afsdb_fromWire2.wire +BUILT_SOURCES += rdata_afsdb_fromWire3.wire rdata_afsdb_fromWire4.wire +BUILT_SOURCES += rdata_afsdb_fromWire5.wire +BUILT_SOURCES += rdata_afsdb_toWire1.wire rdata_afsdb_toWire2.wire BUILT_SOURCES += rdata_soa_toWireUncompressed.wire BUILT_SOURCES += rdata_txt_fromWire2.wire rdata_txt_fromWire3.wire BUILT_SOURCES += rdata_txt_fromWire4.wire rdata_txt_fromWire5.wire @@ -99,6 +103,10 @@ EXTRA_DIST += rdata_rp_fromWire1.spec rdata_rp_fromWire2.spec EXTRA_DIST += rdata_rp_fromWire3.spec rdata_rp_fromWire4.spec EXTRA_DIST += rdata_rp_fromWire5.spec rdata_rp_fromWire6.spec EXTRA_DIST += rdata_rp_toWire1.spec rdata_rp_toWire2.spec +EXTRA_DIST += rdata_afsdb_fromWire1.spec rdata_afsdb_fromWire2.spec +EXTRA_DIST += rdata_afsdb_fromWire3.spec rdata_afsdb_fromWire4.spec +EXTRA_DIST += rdata_afsdb_fromWire5.spec +EXTRA_DIST += rdata_afsdb_toWire1.spec 
rdata_afsdb_toWire2.spec EXTRA_DIST += rdata_soa_fromWire rdata_soa_toWireUncompressed.spec EXTRA_DIST += rdata_srv_fromWire EXTRA_DIST += rdata_txt_fromWire1 rdata_txt_fromWire2.spec diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire1.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire1.spec new file mode 100644 index 0000000000..f831313827 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire1.spec @@ -0,0 +1,3 @@ +[custom] +sections: afsdb +[afsdb] diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire2.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire2.spec new file mode 100644 index 0000000000..f33e768589 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire2.spec @@ -0,0 +1,6 @@ +[custom] +sections: name:afsdb +[name] +name: example.com +[afsdb] +server: afsdb.ptr=0 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire3.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire3.spec new file mode 100644 index 0000000000..993032f605 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire3.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +rdlen: 3 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire4.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire4.spec new file mode 100644 index 0000000000..37abf134c5 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire4.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +rdlen: 80 diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_fromWire5.spec b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire5.spec new file mode 100644 index 0000000000..0ea79dd173 --- /dev/null +++ b/src/lib/dns/tests/testdata/rdata_afsdb_fromWire5.spec @@ -0,0 +1,4 @@ +[custom] +sections: afsdb +[afsdb] +server: "01234567890123456789012345678901234567890123456789012345678901234" diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_toWire1.spec b/src/lib/dns/tests/testdata/rdata_afsdb_toWire1.spec new file mode 100644 index 0000000000..19464589e1 
--- /dev/null
+++ b/src/lib/dns/tests/testdata/rdata_afsdb_toWire1.spec
@@ -0,0 +1,4 @@
+[custom]
+sections: afsdb
+[afsdb]
+rdlen: -1
diff --git a/src/lib/dns/tests/testdata/rdata_afsdb_toWire2.spec b/src/lib/dns/tests/testdata/rdata_afsdb_toWire2.spec
new file mode 100644
index 0000000000..c80011a488
--- /dev/null
+++ b/src/lib/dns/tests/testdata/rdata_afsdb_toWire2.spec
@@ -0,0 +1,8 @@
+[custom]
+sections: name:afsdb
+[name]
+name: example.com.
+[afsdb]
+subtype: 0
+server: root.example.com
+rdlen: -1
diff --git a/src/lib/util/python/gen_wiredata.py.in b/src/lib/util/python/gen_wiredata.py.in
index 8e1f0798bd..6a69c2915f 100755
--- a/src/lib/util/python/gen_wiredata.py.in
+++ b/src/lib/util/python/gen_wiredata.py.in
@@ -822,6 +822,27 @@ class RP(RR):
         f.write('# MAILBOX=%s TEXT=%s\n' % (self.mailbox, self.text))
         f.write('%s %s\n' % (mailbox_wire, text_wire))
 
+class AFSDB(RR):
+    '''Implements rendering AFSDB RDATA in the test data format.
+
+    Configurable parameters are as follows (see the attribute of the same
+    name for its default value):
+    - subtype (16 bit int): The subtype field.
+    - server (string): The server field.
+      The string must be interpreted as a valid domain name.
+    '''
+    subtype = 1
+    server = 'afsdb.example.com'
+    def dump(self, f):
+        server_wire = encode_name(self.server)
+        if self.rdlen is None:
+            self.rdlen = 2 + len(server_wire) // 2
+        else:
+            self.rdlen = int(self.rdlen)
+        self.dump_header(f, self.rdlen)
+        f.write('# SUBTYPE=%d SERVER=%s\n' % (self.subtype, self.server))
+        f.write('%04x %s\n' % (self.subtype, server_wire))
+
 class NSECBASE(RR):
     '''Implements rendering NSEC/NSEC3 type bitmaps commonly used for
     these RRs.
The NSEC and NSEC3 classes will be inherited from this

From 5de7909a21a077238567b64e489ed5345824b2a0 Mon Sep 17 00:00:00 2001
From: Naoki Kambe
Date: Tue, 16 Aug 2011 14:19:01 +0900
Subject: [PATCH 157/175] [master] Revert trac930 because of failures on
 buildbots:

"[master] update the ChangeLog entry for trac928, trac929 and trac930" 004afad6ea3fba7c8dd7730428b50fd770daec66
"[trac930] revise the entry of ChangeLog for trac928, trac929 and trac930" f20be125d667bceea0d940fc5fabf87b2eef86cd
"[trac930]" fcc707041d663b98c1992cdd1402cc183155d3c0
"[trac930]" da5d5926cb26ca8dbdae119c03687cd3415f6638
"[trac930] refactor unittests" 0314c7bb66b85775dea73c95463eed88e9e286c3
"[trac930] add comments about abstracts of the test scripts in their headers" b8cecbbd905c10d28bcb905def7160d9e406dac4
"[trac930] modify stats.py" 7a31e95e63013a298b449573cc5336bcd64a0419
"[trac930] modify b10-stats_test.py" e18a678b62d03729f065c40650d7183e2f260b22
"[trac930] remove tailing whitespaces." 1d1a87939a010bd16ed23cd817261e9a655bf98f
"[trac930] raise StatsError including errors in the stats spec file" c6948a6df9aeedd3753bc4c5e3a553088cd98f63
"[trac930] rename the function name" db0371fc9e5c7a85ab524ab7bc0b8169b9ba0486
"[trac930] remove a unnecessary x bit from stats_httpd.py.in" e906efc3747f052128eef50bed0107a0d53546c8
"[trac930] modify logging" d86a9dceaddf5a2cee44170e6e677f492df5e0ea
"[trac930] modify the update_modues function" 4c2732cbf0bb7384ed61ab3604855f143a0c6c5d
"[trac930]" aaffb9c83c0fe59d9c7d590c5bea559ed8876269
"[trac930] remove unnecessary a white space" e8a22472e58bfc7df4a661d665152fe4d70454a6
"[trac930] add a test pattern which the set command with a non-existent item" 2c22d334a05ec1e77299a6c55252f1d1c33082af
"[trac930] modify parse_spec function" 8a24b9066537caf373d0cfc11dca855eb6c3e4d9
"[trac930] fix conflicts with trac1021" 7275c59de54593d3baca81345226dda2d3a19c30
"[trac930] add changes because query counter names described in the specfile are changed."
bcf37a11b08922d69d02fa2ea1b280b2fa2c21e0 "[trac930] add the logging when the validation of statistics data fails" a142fa6302e1e0ea2ad1c9faf59d6a70a53a6489 "[trac930] Add unittests to test sumitStatistics with the validation of statistics data and add mock ModuleSpec class" ae8748f77a0261623216b1a11f9d979f555fe892 "[trac930] Add prototypes of validator_typea and registerStatisticsValidator" d0d5a67123b8009e89e84515eee4f93b37ec8497 "[trac930]" a9a976d2a5871f1501018d697d3afd299ceec5da "[trac930] add the helper functions which are used around the registration of the function to validate the statistics data." df9a8f921f0d20bd70c519218335357297bffa7d "[trac930] add new messages into the message file of Auth and Boss" e95625332a20fb50afe43da2db0cab507efe8ebe "[trac930] add statistics validation for bob" 28cad73dff9dae43a38ad7dafbee406c690fb77c "[trac930]" 4de3a5bdf367d87247cb9138f8929ab4798f014e "[trac930] remove unneeded empty TODO comments" aa108cc824539a1d32a4aa2f46f9e58171074a9e "[trac930] add new entry for #928-#930" 691328d91b4c4d15ace467ca47a3c987a9fb52b9 "[trac930] refurbish the unittests for new stats module, new stats httpd module" c06463cf96ea7401325a208af8ba457e661d1cec "[trac930] modify Stats" c074f6e0b72c3facf6b325b17dea1ca13a2788cc "[trac930]" daa1d6dd07292142d3dec5928583b0ab1da89adf "[trac930] update spec file of stats module" e7b4337aeaa760947e8e7906e64077ad7aaadc66 "[trac930] update argument name and argument format of set command in auth module and boss module" 0b235902f38d611606d44661506f32baf266fdda "[trac930] remove description about removing statistics data by stats module" c19a295eb4125b4d2a391de65972271002412258 "[trac930] add a column "Owner" in the table tag" 9261da8717a433cf20218af08d3642fbeffb7d4b "[trac930] remove descriptions about "stats-schema.spec" and add description about new" d4078d52343247b07c47370b497927a3a47a4f9a "[trac930] add utilities and mock-up modules for unittests of" 1aa728ddf691657611680385c920e3a7bd5fee12 "[trac930] remove 
unneeded mockups, fake modules and dummy data" 1768e822df82943f075ebed023b72d225b3b0216 "[trac930] remove unneeded specfile "stats-schema.spec"" 326885a3f98c49a848a67dc48db693b8bcc7b508 --- ChangeLog | 7 - configure.ac | 7 + doc/guide/bind10-guide.html | 30 +- doc/guide/bind10-guide.xml | 30 +- src/bin/auth/auth_messages.mes | 3 - src/bin/auth/auth_srv.cc | 24 - src/bin/auth/statistics.cc | 32 +- src/bin/auth/statistics.h | 20 - src/bin/auth/tests/statistics_unittest.cc | 74 +- src/bin/bind10/bind10_messages.mes | 4 - src/bin/bind10/bind10_src.py.in | 25 +- src/bin/bind10/tests/bind10_test.py.in | 23 +- src/bin/stats/Makefile.am | 4 +- src/bin/stats/b10-stats-httpd.8 | 6 +- src/bin/stats/b10-stats-httpd.xml | 10 +- src/bin/stats/b10-stats.8 | 4 + src/bin/stats/b10-stats.xml | 6 + src/bin/stats/stats-httpd-xsl.tpl | 1 - src/bin/stats/stats-schema.spec | 86 ++ src/bin/stats/stats.py.in | 600 ++++----- src/bin/stats/stats.spec | 75 +- src/bin/stats/stats_httpd.py.in | 230 ++-- src/bin/stats/stats_messages.mes | 21 +- src/bin/stats/tests/Makefile.am | 10 +- src/bin/stats/tests/b10-stats-httpd_test.py | 663 ++++------ src/bin/stats/tests/b10-stats_test.py | 1159 +++++++++--------- src/bin/stats/tests/fake_select.py | 43 + src/bin/stats/tests/fake_socket.py | 70 ++ src/bin/stats/tests/fake_time.py | 47 + src/bin/stats/tests/http/Makefile.am | 6 + src/bin/stats/tests/http/__init__.py | 0 src/bin/stats/tests/http/server.py | 96 ++ src/bin/stats/tests/isc/Makefile.am | 8 + src/bin/stats/tests/isc/__init__.py | 0 src/bin/stats/tests/isc/cc/Makefile.am | 7 + src/bin/stats/tests/isc/cc/__init__.py | 1 + src/bin/stats/tests/isc/cc/session.py | 148 +++ src/bin/stats/tests/isc/config/Makefile.am | 7 + src/bin/stats/tests/isc/config/__init__.py | 1 + src/bin/stats/tests/isc/config/ccsession.py | 249 ++++ src/bin/stats/tests/isc/log/Makefile.am | 7 + src/bin/stats/tests/isc/log/__init__.py | 33 + src/bin/stats/tests/isc/util/Makefile.am | 7 + 
src/bin/stats/tests/isc/util/__init__.py | 0 src/bin/stats/tests/isc/util/process.py | 21 + src/bin/stats/tests/test_utils.py | 291 ----- src/bin/stats/tests/testdata/Makefile.am | 1 + src/bin/stats/tests/testdata/stats_test.spec | 19 + src/bin/tests/Makefile.am | 2 +- tests/system/bindctl/tests.sh | 16 +- 50 files changed, 2265 insertions(+), 1969 deletions(-) create mode 100644 src/bin/stats/stats-schema.spec mode change 100644 => 100755 src/bin/stats/stats_httpd.py.in create mode 100644 src/bin/stats/tests/fake_select.py create mode 100644 src/bin/stats/tests/fake_socket.py create mode 100644 src/bin/stats/tests/fake_time.py create mode 100644 src/bin/stats/tests/http/Makefile.am create mode 100644 src/bin/stats/tests/http/__init__.py create mode 100644 src/bin/stats/tests/http/server.py create mode 100644 src/bin/stats/tests/isc/Makefile.am create mode 100644 src/bin/stats/tests/isc/__init__.py create mode 100644 src/bin/stats/tests/isc/cc/Makefile.am create mode 100644 src/bin/stats/tests/isc/cc/__init__.py create mode 100644 src/bin/stats/tests/isc/cc/session.py create mode 100644 src/bin/stats/tests/isc/config/Makefile.am create mode 100644 src/bin/stats/tests/isc/config/__init__.py create mode 100644 src/bin/stats/tests/isc/config/ccsession.py create mode 100644 src/bin/stats/tests/isc/log/Makefile.am create mode 100644 src/bin/stats/tests/isc/log/__init__.py create mode 100644 src/bin/stats/tests/isc/util/Makefile.am create mode 100644 src/bin/stats/tests/isc/util/__init__.py create mode 100644 src/bin/stats/tests/isc/util/process.py delete mode 100644 src/bin/stats/tests/test_utils.py create mode 100644 src/bin/stats/tests/testdata/Makefile.am create mode 100644 src/bin/stats/tests/testdata/stats_test.spec diff --git a/ChangeLog b/ChangeLog index 94b9a22f48..56bf8e97d7 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,10 +1,3 @@ -279. [func] naokikambe - Statistics items are specified by each module's spec file. 
- Stats module can read these through the config manager. Stats - module and stats httpd report statistics data and statistics - schema by each module via both bindctl and HTTP/XML. - (Trac #928,#929,#930, git f20be125d667bceea0d940fc5fabf87b2eef86cd) - 278. [doc] jelte Add logging configuration documentation to the guide. (Trac #1011, git TODO) diff --git a/configure.ac b/configure.ac index ee990eb412..6e129b6093 100644 --- a/configure.ac +++ b/configure.ac @@ -801,6 +801,13 @@ AC_CONFIG_FILES([Makefile src/bin/zonemgr/tests/Makefile src/bin/stats/Makefile src/bin/stats/tests/Makefile + src/bin/stats/tests/isc/Makefile + src/bin/stats/tests/isc/cc/Makefile + src/bin/stats/tests/isc/config/Makefile + src/bin/stats/tests/isc/util/Makefile + src/bin/stats/tests/isc/log/Makefile + src/bin/stats/tests/testdata/Makefile + src/bin/stats/tests/http/Makefile src/bin/usermgr/Makefile src/bin/tests/Makefile src/lib/Makefile diff --git a/doc/guide/bind10-guide.html b/doc/guide/bind10-guide.html index 4415d42550..5754cf001e 100644 --- a/doc/guide/bind10-guide.html +++ b/doc/guide/bind10-guide.html @@ -664,30 +664,24 @@ This may be a temporary setting until then.

- This stats daemon provides commands to identify if it is - running, show specified or all statistics data, show specified - or all statistics data schema, and set specified statistics - data. + This stats daemon provides commands to identify if it is running, + show specified or all statistics data, set values, remove data, + and reset data. For example, using bindctl:

 > Stats show
 {
-    "Auth": {
-        "queries.tcp": 1749,
-        "queries.udp": 867868
-    },
-    "Boss": {
-        "boot_time": "2011-01-20T16:59:03Z"
-    },
-    "Stats": {
-        "boot_time": "2011-01-20T16:59:05Z",
-        "last_update_time": "2011-01-20T17:04:05Z",
-        "lname": "4d3869d9_a@jreed.example.net",
-        "report_time": "2011-01-20T17:04:06Z",
-        "timestamp": 1295543046.823504
-    }
+    "auth.queries.tcp": 1749,
+    "auth.queries.udp": 867868,
+    "bind10.boot_time": "2011-01-20T16:59:03Z",
+    "report_time": "2011-01-20T17:04:06Z",
+    "stats.boot_time": "2011-01-20T16:59:05Z",
+    "stats.last_update_time": "2011-01-20T17:04:05Z",
+    "stats.lname": "4d3869d9_a@jreed.example.net",
+    "stats.start_time": "2011-01-20T16:59:05Z",
+    "stats.timestamp": 1295543046.823504
 }
        

Chapter 14. Logging

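The hunk above changes the sample `Stats show` output from per-module nested objects to flat dotted keys. A hypothetical sketch of that flattening (the function name and `prefix_map` parameter are illustrative, not the stats module's actual API; note the real mapping is not purely mechanical, since `Boss` becomes the `bind10` prefix and `report_time` carries no prefix at all):

```python
def flatten_stats(nested, prefix_map=None):
    # {"Auth": {"queries.tcp": 1749}} -> {"auth.queries.tcp": 1749}
    # prefix_map lets callers express irregular mappings such as the
    # Boss -> bind10 renaming visible in the sample output above.
    prefix_map = prefix_map or {}
    flat = {}
    for module, items in nested.items():
        prefix = prefix_map.get(module, module.lower())
        for key, value in items.items():
            flat["%s.%s" % (prefix, key)] = value
    return flat
```

With `prefix_map={"Boss": "bind10"}`, the old nested sample collapses into the flat keys shown in the new sample output.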
diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 297400cca6..ef66f3d3fb 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1453,30 +1453,24 @@ then change those defaults with config set Resolver/forward_addresses[0]/address - This stats daemon provides commands to identify if it is - running, show specified or all statistics data, show specified - or all statistics data schema, and set specified statistics - data. + This stats daemon provides commands to identify if it is running, + show specified or all statistics data, set values, remove data, + and reset data. For example, using bindctl: > Stats show { - "Auth": { - "queries.tcp": 1749, - "queries.udp": 867868 - }, - "Boss": { - "boot_time": "2011-01-20T16:59:03Z" - }, - "Stats": { - "boot_time": "2011-01-20T16:59:05Z", - "last_update_time": "2011-01-20T17:04:05Z", - "lname": "4d3869d9_a@jreed.example.net", - "report_time": "2011-01-20T17:04:06Z", - "timestamp": 1295543046.823504 - } + "auth.queries.tcp": 1749, + "auth.queries.udp": 867868, + "bind10.boot_time": "2011-01-20T16:59:03Z", + "report_time": "2011-01-20T17:04:06Z", + "stats.boot_time": "2011-01-20T16:59:05Z", + "stats.last_update_time": "2011-01-20T17:04:05Z", + "stats.lname": "4d3869d9_a@jreed.example.net", + "stats.start_time": "2011-01-20T16:59:05Z", + "stats.timestamp": 1295543046.823504 } diff --git a/src/bin/auth/auth_messages.mes b/src/bin/auth/auth_messages.mes index 1ffa6871ea..9f04b76264 100644 --- a/src/bin/auth/auth_messages.mes +++ b/src/bin/auth/auth_messages.mes @@ -257,7 +257,4 @@ request. The zone manager component has been informed of the request, but has returned an error response (which is included in the message). The NOTIFY request will not be honored. -% AUTH_INVALID_STATISTICS_DATA invalid specification of statistics data specified -An error was encountered when the authoritiative server specified -statistics data which is invalid for the auth specification file. 
diff --git a/src/bin/auth/auth_srv.cc b/src/bin/auth/auth_srv.cc index c9dac88e99..5a3144283a 100644 --- a/src/bin/auth/auth_srv.cc +++ b/src/bin/auth/auth_srv.cc @@ -125,10 +125,6 @@ public: /// The TSIG keyring const shared_ptr* keyring_; - - /// Bind the ModuleSpec object in config_session_ with - /// isc:config::ModuleSpec::validateStatistics. - void registerStatisticsValidator(); private: std::string db_file_; @@ -143,9 +139,6 @@ private: /// Increment query counter void incCounter(const int protocol); - - // validateStatistics - bool validateStatistics(isc::data::ConstElementPtr data) const; }; AuthSrvImpl::AuthSrvImpl(const bool use_cache, @@ -324,7 +317,6 @@ AuthSrv::setXfrinSession(AbstractSession* xfrin_session) { void AuthSrv::setConfigSession(ModuleCCSession* config_session) { impl_->config_session_ = config_session; - impl_->registerStatisticsValidator(); } void @@ -678,22 +670,6 @@ AuthSrvImpl::incCounter(const int protocol) { } } -void -AuthSrvImpl::registerStatisticsValidator() { - counters_.registerStatisticsValidator( - boost::bind(&AuthSrvImpl::validateStatistics, this, _1)); -} - -bool -AuthSrvImpl::validateStatistics(isc::data::ConstElementPtr data) const { - if (config_session_ == NULL) { - return (false); - } - return ( - config_session_->getModuleSpec().validateStatistics( - data, true)); -} - ConstElementPtr AuthSrvImpl::setDbFile(ConstElementPtr config) { ConstElementPtr answer = isc::config::createAnswer(); diff --git a/src/bin/auth/statistics.cc b/src/bin/auth/statistics.cc index e62719f7e2..76e50074fc 100644 --- a/src/bin/auth/statistics.cc +++ b/src/bin/auth/statistics.cc @@ -37,14 +37,11 @@ public: void inc(const AuthCounters::CounterType type); bool submitStatistics() const; void setStatisticsSession(isc::cc::AbstractSession* statistics_session); - void registerStatisticsValidator - (AuthCounters::validator_type validator); // Currently for testing purpose only uint64_t getCounter(const AuthCounters::CounterType type) const; private: 
std::vector counters_; isc::cc::AbstractSession* statistics_session_; - AuthCounters::validator_type validator_; }; AuthCountersImpl::AuthCountersImpl() : @@ -70,25 +67,16 @@ AuthCountersImpl::submitStatistics() const { } std::stringstream statistics_string; statistics_string << "{\"command\": [\"set\"," - << "{ \"owner\": \"Auth\"," - << " \"data\":" - << "{ \"queries.udp\": " + << "{ \"stats_data\": " + << "{ \"auth.queries.udp\": " << counters_.at(AuthCounters::COUNTER_UDP_QUERY) - << ", \"queries.tcp\": " + << ", \"auth.queries.tcp\": " << counters_.at(AuthCounters::COUNTER_TCP_QUERY) << " }" << "}" << "]}"; isc::data::ConstElementPtr statistics_element = isc::data::Element::fromJSON(statistics_string); - // validate the statistics data before send - if (validator_) { - if (!validator_( - statistics_element->get("command")->get(1)->get("data"))) { - LOG_ERROR(auth_logger, AUTH_INVALID_STATISTICS_DATA); - return (false); - } - } try { // group_{send,recv}msg() can throw an exception when encountering // an error, and group_recvmsg() will throw an exception on timeout. @@ -117,13 +105,6 @@ AuthCountersImpl::setStatisticsSession statistics_session_ = statistics_session; } -void -AuthCountersImpl::registerStatisticsValidator - (AuthCounters::validator_type validator) -{ - validator_ = validator; -} - // Currently for testing purpose only uint64_t AuthCountersImpl::getCounter(const AuthCounters::CounterType type) const { @@ -158,10 +139,3 @@ uint64_t AuthCounters::getCounter(const AuthCounters::CounterType type) const { return (impl_->getCounter(type)); } - -void -AuthCounters::registerStatisticsValidator - (AuthCounters::validator_type validator) const -{ - return (impl_->registerStatisticsValidator(validator)); -} diff --git a/src/bin/auth/statistics.h b/src/bin/auth/statistics.h index c930414c65..5bf643656d 100644 --- a/src/bin/auth/statistics.h +++ b/src/bin/auth/statistics.h @@ -131,26 +131,6 @@ public: /// \return the value of the counter specified by \a type. 
/// uint64_t getCounter(const AuthCounters::CounterType type) const; - - /// \brief A type of validation function for the specification in - /// isc::config::ModuleSpec. - /// - /// This type might be useful for not only statistics - /// specificatoin but also for config_data specification and for - /// commnad. - /// - typedef boost::function - validator_type; - - /// \brief Register a function type of the statistics validation - /// function for AuthCounters. - /// - /// This method never throws an exception. - /// - /// \param validator A function type of the validation of - /// statistics specification. - /// - void registerStatisticsValidator(AuthCounters::validator_type validator) const; }; #endif // __STATISTICS_H diff --git a/src/bin/auth/tests/statistics_unittest.cc b/src/bin/auth/tests/statistics_unittest.cc index 98e573b495..9a3dded837 100644 --- a/src/bin/auth/tests/statistics_unittest.cc +++ b/src/bin/auth/tests/statistics_unittest.cc @@ -16,8 +16,6 @@ #include -#include - #include #include @@ -78,13 +76,6 @@ protected: } MockSession statistics_session_; AuthCounters counters; - // no need to be inherited from the original class here. - class MockModuleSpec { - public: - bool validateStatistics(ConstElementPtr, const bool valid) const - { return (valid); } - }; - MockModuleSpec module_spec_; }; void @@ -190,7 +181,7 @@ TEST_F(AuthCountersTest, submitStatisticsWithException) { statistics_session_.setThrowSessionTimeout(false); } -TEST_F(AuthCountersTest, submitStatisticsWithoutValidator) { +TEST_F(AuthCountersTest, submitStatistics) { // Submit statistics data. // Validate if it submits correct data. @@ -210,69 +201,12 @@ TEST_F(AuthCountersTest, submitStatisticsWithoutValidator) { // Command is "set". 
EXPECT_EQ("set", statistics_session_.sent_msg->get("command") ->get(0)->stringValue()); - EXPECT_EQ("Auth", statistics_session_.sent_msg->get("command") - ->get(1)->get("owner")->stringValue()); ConstElementPtr statistics_data = statistics_session_.sent_msg ->get("command")->get(1) - ->get("data"); + ->get("stats_data"); // UDP query counter is 2 and TCP query counter is 1. - EXPECT_EQ(2, statistics_data->get("queries.udp")->intValue()); - EXPECT_EQ(1, statistics_data->get("queries.tcp")->intValue()); + EXPECT_EQ(2, statistics_data->get("auth.queries.udp")->intValue()); + EXPECT_EQ(1, statistics_data->get("auth.queries.tcp")->intValue()); } -TEST_F(AuthCountersTest, submitStatisticsWithValidator) { - - //a validator for the unittest - AuthCounters::validator_type validator; - ConstElementPtr el; - - // Submit statistics data with correct statistics validator. - validator = boost::bind( - &AuthCountersTest::MockModuleSpec::validateStatistics, - &module_spec_, _1, true); - - EXPECT_TRUE(validator(el)); - - // register validator to AuthCounters - counters.registerStatisticsValidator(validator); - - // Counters should be initialized to 0. - EXPECT_EQ(0, counters.getCounter(AuthCounters::COUNTER_UDP_QUERY)); - EXPECT_EQ(0, counters.getCounter(AuthCounters::COUNTER_TCP_QUERY)); - - // UDP query counter is set to 2. - counters.inc(AuthCounters::COUNTER_UDP_QUERY); - counters.inc(AuthCounters::COUNTER_UDP_QUERY); - // TCP query counter is set to 1. - counters.inc(AuthCounters::COUNTER_TCP_QUERY); - - // checks the value returned by submitStatistics - EXPECT_TRUE(counters.submitStatistics()); - - // Destination is "Stats". - EXPECT_EQ("Stats", statistics_session_.msg_destination); - // Command is "set". 
- EXPECT_EQ("set", statistics_session_.sent_msg->get("command") - ->get(0)->stringValue()); - EXPECT_EQ("Auth", statistics_session_.sent_msg->get("command") - ->get(1)->get("owner")->stringValue()); - ConstElementPtr statistics_data = statistics_session_.sent_msg - ->get("command")->get(1) - ->get("data"); - // UDP query counter is 2 and TCP query counter is 1. - EXPECT_EQ(2, statistics_data->get("queries.udp")->intValue()); - EXPECT_EQ(1, statistics_data->get("queries.tcp")->intValue()); - - // Submit statistics data with incorrect statistics validator. - validator = boost::bind( - &AuthCountersTest::MockModuleSpec::validateStatistics, - &module_spec_, _1, false); - - EXPECT_FALSE(validator(el)); - - counters.registerStatisticsValidator(validator); - - // checks the value returned by submitStatistics - EXPECT_FALSE(counters.submitStatistics()); -} } diff --git a/src/bin/bind10/bind10_messages.mes b/src/bin/bind10/bind10_messages.mes index 4debcdb3ec..4bac069098 100644 --- a/src/bin/bind10/bind10_messages.mes +++ b/src/bin/bind10/bind10_messages.mes @@ -198,7 +198,3 @@ the message channel. % BIND10_UNKNOWN_CHILD_PROCESS_ENDED unknown child pid %1 exited An unknown child process has exited. The PID is printed, but no further action will be taken by the boss process. - -% BIND10_INVALID_STATISTICS_DATA invalid specification of statistics data specified -An error was encountered when the boss module specified -statistics data which is invalid for the boss specification file. diff --git a/src/bin/bind10/bind10_src.py.in b/src/bin/bind10/bind10_src.py.in index 3deba6172b..b497f7c922 100755 --- a/src/bin/bind10/bind10_src.py.in +++ b/src/bin/bind10/bind10_src.py.in @@ -85,7 +85,7 @@ isc.util.process.rename(sys.argv[0]) # number, and the overall BIND 10 version number (set in configure.ac). 
VERSION = "bind10 20110223 (BIND 10 @PACKAGE_VERSION@)" -# This is for boot_time of Boss +# This is for bind10.boot_time of stats module _BASETIME = time.gmtime() class RestartSchedule: @@ -318,22 +318,13 @@ class BoB: answer = isc.config.ccsession.create_answer(0) elif command == "sendstats": # send statistics data to the stats daemon immediately - statistics_data = { - 'boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', _BASETIME) - } - valid = self.ccs.get_module_spec().validate_statistics( - True, statistics_data) - if valid: - cmd = isc.config.ccsession.create_command( - 'set', { "owner": "Boss", - "data": statistics_data }) - seq = self.cc_session.group_sendmsg(cmd, 'Stats') - self.cc_session.group_recvmsg(True, seq) - answer = isc.config.ccsession.create_answer(0) - else: - logger.fatal(BIND10_INVALID_STATISTICS_DATA); - answer = isc.config.ccsession.create_answer( - 1, "specified statistics data is invalid") + cmd = isc.config.ccsession.create_command( + 'set', { "stats_data": { + 'bind10.boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', _BASETIME) + }}) + seq = self.cc_session.group_sendmsg(cmd, 'Stats') + self.cc_session.group_recvmsg(True, seq) + answer = isc.config.ccsession.create_answer(0) elif command == "ping": answer = isc.config.ccsession.create_answer(0, "pong") elif command == "show_processes": diff --git a/src/bin/bind10/tests/bind10_test.py.in b/src/bin/bind10/tests/bind10_test.py.in index af7b6f49ef..077190c865 100644 --- a/src/bin/bind10/tests/bind10_test.py.in +++ b/src/bin/bind10/tests/bind10_test.py.in @@ -137,27 +137,9 @@ class TestBoB(unittest.TestCase): def group_sendmsg(self, msg, group): (self.msg, self.group) = (msg, group) def group_recvmsg(self, nonblock, seq): pass - class DummyModuleCCSession(): - module_spec = isc.config.module_spec.ModuleSpec({ - "module_name": "Boss", - "statistics": [ - { - "item_name": "boot_time", - "item_type": "string", - "item_optional": False, - "item_default": "1970-01-01T00:00:00Z", - "item_title": "Boot
time", - "item_description": "A date time when bind10 process starts initially", - "item_format": "date-time" - } - ] - }) - def get_module_spec(self): - return self.module_spec bob = BoB() bob.verbose = True bob.cc_session = DummySession() - bob.ccs = DummyModuleCCSession() # a bad command self.assertEqual(bob.command_handler(-1, None), isc.config.ccsession.create_answer(1, "bad command")) @@ -171,9 +153,8 @@ class TestBoB(unittest.TestCase): self.assertEqual(bob.cc_session.group, "Stats") self.assertEqual(bob.cc_session.msg, isc.config.ccsession.create_command( - "set", { "owner": "Boss", - "data": { - "boot_time": time.strftime("%Y-%m-%dT%H:%M:%SZ", _BASETIME) + 'set', { "stats_data": { + 'bind10.boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', _BASETIME) }})) # "ping" command self.assertEqual(bob.command_handler("ping", None), diff --git a/src/bin/stats/Makefile.am b/src/bin/stats/Makefile.am index 49cadad4c9..e830f65d60 100644 --- a/src/bin/stats/Makefile.am +++ b/src/bin/stats/Makefile.am @@ -5,7 +5,7 @@ pkglibexecdir = $(libexecdir)/@PACKAGE@ pkglibexec_SCRIPTS = b10-stats b10-stats-httpd b10_statsdir = $(pkgdatadir) -b10_stats_DATA = stats.spec stats-httpd.spec +b10_stats_DATA = stats.spec stats-httpd.spec stats-schema.spec b10_stats_DATA += stats-httpd-xml.tpl stats-httpd-xsd.tpl stats-httpd-xsl.tpl pyexec_DATA = stats_messages.py stats_httpd_messages.py @@ -16,7 +16,7 @@ CLEANFILES += stats_httpd_messages.py stats_httpd_messages.pyc man_MANS = b10-stats.8 b10-stats-httpd.8 EXTRA_DIST = $(man_MANS) b10-stats.xml b10-stats-httpd.xml -EXTRA_DIST += stats.spec stats-httpd.spec +EXTRA_DIST += stats.spec stats-httpd.spec stats-schema.spec EXTRA_DIST += stats-httpd-xml.tpl stats-httpd-xsd.tpl stats-httpd-xsl.tpl EXTRA_DIST += stats_messages.mes stats_httpd_messages.mes diff --git a/src/bin/stats/b10-stats-httpd.8 b/src/bin/stats/b10-stats-httpd.8 index 1206e1d791..ed4aafa6c6 100644 --- a/src/bin/stats/b10-stats-httpd.8 +++ b/src/bin/stats/b10-stats-httpd.8 @@ 
-36,7 +36,7 @@ b10-stats-httpd \- BIND 10 HTTP server for HTTP/XML interface of statistics .PP \fBb10\-stats\-httpd\fR -is a standalone HTTP server\&. It is intended for HTTP/XML interface for statistics module\&. This server process runs as a process separated from the process of the BIND 10 Stats daemon (\fBb10\-stats\fR)\&. The server is initially executed by the BIND 10 boss process (\fBbind10\fR) and eventually exited by it\&. The server is intended to be server requests by HTTP clients like web browsers and third\-party modules\&. When the server is asked, it requests BIND 10 statistics data or its schema from +is a standalone HTTP server\&. It is intended for HTTP/XML interface for statistics module\&. This server process runs as a process separated from the process of the BIND 10 Stats daemon (\fBb10\-stats\fR)\&. The server is initially executed by the BIND 10 boss process (\fBbind10\fR) and eventually exited by it\&. The server is intended to be server requests by HTTP clients like web browsers and third\-party modules\&. When the server is asked, it requests BIND 10 statistics data from \fBb10\-stats\fR, and it sends the data back in Python dictionary format and the server converts it into XML format\&. The server sends it to the HTTP client\&. The server can send three types of document, which are XML (Extensible Markup Language), XSD (XML Schema definition) and XSL (Extensible Stylesheet Language)\&. The XML document is the statistics data of BIND 10, The XSD document is the data schema of it, and The XSL document is the style sheet to be showed for the web browsers\&. There is different URL for each document\&. But please note that you would be redirected to the URL of XML document if you request the URL of the root document\&. For example, you would be redirected to http://127\&.0\&.0\&.1:8000/bind10/statistics/xml if you request http://127\&.0\&.0\&.1:8000/\&. 
Please see the manual and the spec file of \fBb10\-stats\fR for more details about the items of BIND 10 statistics\&. The server uses CC session in communication with @@ -66,6 +66,10 @@ bindctl(1)\&. Please see the manual of bindctl(1) about how to configure the settings\&. .PP +/usr/local/share/bind10\-devel/stats\-schema\&.spec +\(em This is a spec file for data schema of BIND 10 statistics\&. This schema cannot be configured via +bindctl(1)\&. +.PP /usr/local/share/bind10\-devel/stats\-httpd\-xml\&.tpl \(em the template file of XML document\&. diff --git a/src/bin/stats/b10-stats-httpd.xml b/src/bin/stats/b10-stats-httpd.xml index c8df9b8a6e..34c704f509 100644 --- a/src/bin/stats/b10-stats-httpd.xml +++ b/src/bin/stats/b10-stats-httpd.xml @@ -57,7 +57,7 @@ by the BIND 10 boss process (bind10) and eventually exited by it. The server is intended to be server requests by HTTP clients like web browsers and third-party modules. When the server is - asked, it requests BIND 10 statistics data or its schema from + asked, it requests BIND 10 statistics data from b10-stats, and it sends the data back in Python dictionary format and the server converts it into XML format. The server sends it to the HTTP client. The server can send three types of document, @@ -112,6 +112,12 @@ of bindctl1 about how to configure the settings. + /usr/local/share/bind10-devel/stats-schema.spec + + — This is a spec file for data schema + of BIND 10 statistics. This schema cannot be configured + via bindctl1. + /usr/local/share/bind10-devel/stats-httpd-xml.tpl @@ -132,7 +138,7 @@ CONFIGURATION AND COMMANDS - The configurable setting in + The configurable setting in stats-httpd.spec is: diff --git a/src/bin/stats/b10-stats.8 b/src/bin/stats/b10-stats.8 index 2c75cbcc0e..f69e4d37fa 100644 --- a/src/bin/stats/b10-stats.8 +++ b/src/bin/stats/b10-stats.8 @@ -66,6 +66,10 @@ switches to verbose mode\&. It sends verbose messages to STDOUT\&. \fBb10\-stats\fR\&.
It contains commands for \fBb10\-stats\fR\&. They can be invoked via bindctl(1)\&. +.PP +/usr/local/share/bind10\-devel/stats\-schema\&.spec +\(em This is a spec file for data schema of BIND 10 statistics\&. This schema cannot be configured via +bindctl(1)\&. .SH "SEE ALSO" .PP diff --git a/src/bin/stats/b10-stats.xml b/src/bin/stats/b10-stats.xml index bd2400a2d5..f0c472dd29 100644 --- a/src/bin/stats/b10-stats.xml +++ b/src/bin/stats/b10-stats.xml @@ -95,6 +95,12 @@ invoked via bindctl1. + /usr/local/share/bind10-devel/stats-schema.spec + + — This is a spec file for data schema + of BIND 10 statistics. This schema cannot be configured + via bindctl1. + diff --git a/src/bin/stats/stats-httpd-xsl.tpl b/src/bin/stats/stats-httpd-xsl.tpl index a1f6406a5a..01ffdc681b 100644 --- a/src/bin/stats/stats-httpd-xsl.tpl +++ b/src/bin/stats/stats-httpd-xsl.tpl @@ -44,7 +44,6 @@ td.title {

BIND 10 Statistics

Owner Title Value
DummyFoo
- diff --git a/src/bin/stats/stats-schema.spec b/src/bin/stats/stats-schema.spec new file mode 100644 index 0000000000..52528657e8 --- /dev/null +++ b/src/bin/stats/stats-schema.spec @@ -0,0 +1,86 @@ +{ + "module_spec": { + "module_name": "Stats", + "module_description": "Statistics data schema", + "config_data": [ + { + "item_name": "report_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "Report time", + "item_description": "A date time when stats module reports", + "item_format": "date-time" + }, + { + "item_name": "bind10.boot_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "bind10.BootTime", + "item_description": "A date time when bind10 process starts initially", + "item_format": "date-time" + }, + { + "item_name": "stats.boot_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "stats.BootTime", + "item_description": "A date time when the stats module starts initially or when the stats module restarts", + "item_format": "date-time" + }, + { + "item_name": "stats.start_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "stats.StartTime", + "item_description": "A date time when the stats module starts collecting data or resetting values last time", + "item_format": "date-time" + }, + { + "item_name": "stats.last_update_time", + "item_type": "string", + "item_optional": false, + "item_default": "1970-01-01T00:00:00Z", + "item_title": "stats.LastUpdateTime", + "item_description": "The latest date time when the stats module receives from other modules like auth server or boss process and so on", + "item_format": "date-time" + }, + { + "item_name": "stats.timestamp", + "item_type": "real", + "item_optional": false, + "item_default": 0.0, + "item_title": "stats.Timestamp", + "item_description": "A current 
time stamp since epoch time (1970-01-01T00:00:00Z)" + }, + { + "item_name": "stats.lname", + "item_type": "string", + "item_optional": false, + "item_default": "", + "item_title": "stats.LocalName", + "item_description": "A localname of stats module given via CC protocol" + }, + { + "item_name": "auth.queries.tcp", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "auth.queries.tcp", + "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially" + }, + { + "item_name": "auth.queries.udp", + "item_type": "integer", + "item_optional": false, + "item_default": 0, + "item_title": "auth.queries.udp", + "item_description": "A number of total query counts which all auth servers receive over UDP since they started initially" + } + ], + "commands": [] + } +} diff --git a/src/bin/stats/stats.py.in b/src/bin/stats/stats.py.in index 9f24c67a9f..ce3d9f4612 100644 --- a/src/bin/stats/stats.py.in +++ b/src/bin/stats/stats.py.in @@ -15,17 +15,16 @@ # NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION # WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
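The new stats-schema.spec above keys every statistics item by a flat, dotted name (`bind10.boot_time`, `auth.queries.udp`, ...), and the accompanying boss and auth changes send those items inside a `{"command": ["set", {"stats_data": {...}}]}` envelope, the same shape the C++ unit test checks via `sent_msg->get("command")->get(0)`. As a standalone sketch of that message shape (the `create_set_command` helper is illustrative only, not ISC's API):

```python
# Sketch of the "set" statistics message the patches converge on:
# create_command(name, args) style envelope carrying a flat "stats_data"
# map keyed by dotted item names from stats-schema.spec.
import time


def create_set_command(stats_data):
    # Mirrors the shape produced by isc.config.ccsession.create_command
    # ('set', {"stats_data": ...}) as used in bind10_src.py.in.
    return {"command": ["set", {"stats_data": stats_data}]}


msg = create_set_command({
    # epoch start formatted like the spec's date-time defaults
    "bind10.boot_time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(0)),
    "auth.queries.udp": 2,
    "auth.queries.tcp": 1,
})
print(msg["command"][0])                                     # "set"
print(msg["command"][1]["stats_data"]["bind10.boot_time"])   # "1970-01-01T00:00:00Z"
```

The flat dotted keys let the stats module hold one dictionary for all senders instead of per-module nested specs, which is exactly what this commit removes from the boss and auth code paths.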
-""" -Statistics daemon in BIND 10 - -""" import sys; sys.path.append ('@@PYTHONPATH@@') import os +import signal +import select from time import time, strftime, gmtime from optparse import OptionParser, OptionValueError +from collections import defaultdict +from isc.config.ccsession import ModuleCCSession, create_answer +from isc.cc import Session, SessionError -import isc -import isc.util.process import isc.log from stats_messages import * @@ -36,140 +35,211 @@ logger = isc.log.Logger("stats") # have #1074 DBG_STATS_MESSAGING = 30 -# This is for boot_time of Stats -_BASETIME = gmtime() - # for setproctitle +import isc.util.process isc.util.process.rename() # If B10_FROM_SOURCE is set in the environment, we use data files # from a directory relative to that, otherwise we use the ones # installed on the system if "B10_FROM_SOURCE" in os.environ: - SPECFILE_LOCATION = os.environ["B10_FROM_SOURCE"] + os.sep + \ - "src" + os.sep + "bin" + os.sep + "stats" + os.sep + "stats.spec" + BASE_LOCATION = os.environ["B10_FROM_SOURCE"] + os.sep + \ + "src" + os.sep + "bin" + os.sep + "stats" else: PREFIX = "@prefix@" DATAROOTDIR = "@datarootdir@" - SPECFILE_LOCATION = "@datadir@" + os.sep + "@PACKAGE@" + os.sep + "stats.spec" - SPECFILE_LOCATION = SPECFILE_LOCATION.replace("${datarootdir}", DATAROOTDIR)\ - .replace("${prefix}", PREFIX) + BASE_LOCATION = "@datadir@" + os.sep + "@PACKAGE@" + BASE_LOCATION = BASE_LOCATION.replace("${datarootdir}", DATAROOTDIR).replace("${prefix}", PREFIX) +SPECFILE_LOCATION = BASE_LOCATION + os.sep + "stats.spec" +SCHEMA_SPECFILE_LOCATION = BASE_LOCATION + os.sep + "stats-schema.spec" -def get_timestamp(): +class Singleton(type): """ - get current timestamp + A abstract class of singleton pattern """ - return time() + # Because of singleton pattern: + # At the beginning of coding, one UNIX domain socket is needed + # for config manager, another socket is needed for stats module, + # then stats module might need two sockets. 
So I adopted the + # singleton pattern because I avoid creating multiple sockets in + # one stats module. But in the initial version stats module + # reports only via bindctl, so just one socket is needed. To use + # the singleton pattern is not important now. :( -def get_datetime(gmt=None): - """ - get current datetime - """ - if not gmt: gmt = gmtime() - return strftime("%Y-%m-%dT%H:%M:%SZ", gmt) + def __init__(self, *args, **kwargs): + type.__init__(self, *args, **kwargs) + self._instances = {} -def get_spec_defaults(spec): - """ - extracts the default values of the items from spec specified in - arg, and returns the dict-type variable which is a set of the item - names and the default values - """ - if type(spec) is not list: return {} - def _get_spec_defaults(spec): - item_type = spec['item_type'] - if item_type == "integer": - return int(spec.get('item_default', 0)) - elif item_type == "real": - return float(spec.get('item_default', 0.0)) - elif item_type == "boolean": - return bool(spec.get('item_default', False)) - elif item_type == "string": - return str(spec.get('item_default', "")) - elif item_type == "list": - return spec.get( - "item_default", - [ _get_spec_defaults(s) for s in spec["list_item_spec"] ]) - elif item_type == "map": - return spec.get( - "item_default", - dict([ (s["item_name"], _get_spec_defaults(s)) for s in spec["map_item_spec"] ]) ) - else: - return spec.get("item_default", None) - return dict([ (s['item_name'], _get_spec_defaults(s)) for s in spec ]) + def __call__(self, *args, **kwargs): + if args not in self._instances: + self._instances[args]={} + kw = tuple(kwargs.items()) + if kw not in self._instances[args]: + self._instances[args][kw] = type.__call__(self, *args, **kwargs) + return self._instances[args][kw] class Callback(): """ A Callback handler class """ - def __init__(self, command=None, args=(), kwargs={}): - self.command = command + def __init__(self, name=None, callback=None, args=(), kwargs={}): + self.name = name + 
self.callback = callback self.args = args self.kwargs = kwargs def __call__(self, *args, **kwargs): - if not args: args = self.args - if not kwargs: kwargs = self.kwargs - if self.command: return self.command(*args, **kwargs) + if not args: + args = self.args + if not kwargs: + kwargs = self.kwargs + if self.callback: + return self.callback(*args, **kwargs) -class StatsError(Exception): - """Exception class for Stats class""" - pass +class Subject(): + """ + An abstract subject class of observer pattern + """ + # Because of observer pattern: + # In the initial release, I'm also sure that observer pattern + # isn't definitely needed because the interface between gathering + # and reporting statistics data is single. However in the future + # release, the interfaces may be multiple, that is, multiple + # listeners may be needed. For example, one interface, which + # stats module has, is for between ''config manager'' and stats + # module, another interface is for between ''HTTP server'' and + # stats module, and one more interface is for between ''SNMP + # server'' and stats module. So by considering that stats module + # needs multiple interfaces in the future release, I adopted the + # observer pattern in stats module. But I don't have concrete + # ideas in case of multiple listener currently.
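The `Subject`/`Listener` classes this hunk introduces are a conventional observer pattern: the subject broadcasts event names, and each listener dispatches them to registered callbacks. As a standalone sketch of that wiring (names simplified, independent of the isc libraries in the actual patch):

```python
# Minimal observer-pattern sketch mirroring the stats module rewrite:
# a Subject notifies attached Listeners by event name, and a Listener
# reacts only to events it registered a callback for.

class Subject:
    def __init__(self):
        self._listeners = []

    def attach(self, listener):
        if listener not in self._listeners:
            self._listeners.append(listener)

    def notify(self, event):
        # Broadcast an event name to every attached listener.
        for listener in self._listeners:
            listener.update(event)


class Listener:
    def __init__(self, subject):
        subject.attach(self)
        self.events = {}

    def add_event(self, name, callback):
        self.events[name] = callback

    def update(self, name):
        # Dispatch only events this listener registered for;
        # unknown events are silently ignored.
        if name in self.events:
            return self.events[name]()


subject = Subject()
listener = Listener(subject)
log = []
listener.add_event('start', lambda: log.append('started'))
listener.add_event('stop', lambda: log.append('stopped'))
subject.notify('start')   # triggers the 'start' callback
subject.notify('check')   # no handler registered: ignored
subject.notify('stop')
print(log)                # ['started', 'stopped']
```

In the patch itself, `SessionSubject.start/stop/check` call `notify()` with those same event names and `CCSessionListener` registers a `Callback` per event, so the main loop only ever drives the subject.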
-class Stats: - """ - Main class of stats module - """ def __init__(self): + self._listeners = [] + + def attach(self, listener): + if not listener in self._listeners: + self._listeners.append(listener) + + def detach(self, listener): + try: + self._listeners.remove(listener) + except ValueError: + pass + + def notify(self, event, modifier=None): + for listener in self._listeners: + if modifier != listener: + listener.update(event) + +class Listener(): + """ + An abstract listener class of observer pattern + """ + def __init__(self, subject): + self.subject = subject + self.subject.attach(self) + self.events = {} + + def update(self, name): + if name in self.events: + callback = self.events[name] + return callback() + + def add_event(self, event): + self.events[event.name]=event + +class SessionSubject(Subject, metaclass=Singleton): + """ + A concrete subject class which creates CC session object + """ + def __init__(self, session=None): + Subject.__init__(self) + self.session=session self.running = False + + def start(self): + self.running = True + self.notify('start') + + def stop(self): + self.running = False + self.notify('stop') + + def check(self): + self.notify('check') + +class CCSessionListener(Listener): + """ + A concrete listener class which creates SessionSubject object and + ModuleCCSession object + """ + def __init__(self, subject): + Listener.__init__(self, subject) + self.session = subject.session + self.boot_time = get_datetime() + # create ModuleCCSession object - self.mccs = isc.config.ModuleCCSession(SPECFILE_LOCATION, - self.config_handler, - self.command_handler) - self.cc_session = self.mccs._session - # get module spec - self.module_name = self.mccs.get_module_spec().get_module_name() - self.modules = {} - self.statistics_data = {} + self.cc_session = ModuleCCSession(SPECFILE_LOCATION, + self.config_handler, + self.command_handler, + self.session) + + self.session = self.subject.session = self.cc_session._session + + # initialize internal
data + self.stats_spec = isc.config.module_spec_from_file(SCHEMA_SPECFILE_LOCATION).get_config_spec() + self.stats_data = self.initialize_data(self.stats_spec) + + # add event handler invoked via SessionSubject object + self.add_event(Callback('start', self.start)) + self.add_event(Callback('stop', self.stop)) + self.add_event(Callback('check', self.check)) + # don't add 'command_' suffix to the special commands in + # order to prevent executing internal command via bindctl + # get commands spec - self.commands_spec = self.mccs.get_module_spec().get_commands_spec() + self.commands_spec = self.cc_session.get_module_spec().get_commands_spec() + # add event handler related command_handler of ModuleCCSession - self.callbacks = {} + # invoked via bindctl for cmd in self.commands_spec: - # add prefix "command_" - name = "command_" + cmd["command_name"] try: + # add prefix "command_" + name = "command_" + cmd["command_name"] callback = getattr(self, name) - kwargs = get_spec_defaults(cmd["command_args"]) - self.callbacks[name] = Callback(command=callback, kwargs=kwargs) - except AttributeError: - raise StatsError(STATS_UNKNOWN_COMMAND_IN_SPEC, cmd["command_name"]) - self.mccs.start() + kwargs = self.initialize_data(cmd["command_args"]) + self.add_event(Callback(name=name, callback=callback, args=(), kwargs=kwargs)) + except AttributeError as ae: + logger.error(STATS_UNKNOWN_COMMAND_IN_SPEC, cmd["command_name"]) def start(self): """ - Start stats module + start the cc channel """ - self.running = True - logger.info(STATS_STARTING) - + # set initial value + self.stats_data['stats.boot_time'] = self.boot_time + self.stats_data['stats.start_time'] = get_datetime() + self.stats_data['stats.last_update_time'] = get_datetime() + self.stats_data['stats.lname'] = self.session.lname + self.cc_session.start() # request Bob to send statistics data logger.debug(DBG_STATS_MESSAGING, STATS_SEND_REQUEST_BOSS) cmd = isc.config.ccsession.create_command("sendstats", None) - seq =
self.cc_session.group_sendmsg(cmd, 'Boss') - self.cc_session.group_recvmsg(True, seq) + seq = self.session.group_sendmsg(cmd, 'Boss') + self.session.group_recvmsg(True, seq) - # initialized Statistics data - errors = self.update_statistics_data( - self.module_name, - lname=self.cc_session.lname, - boot_time=get_datetime(_BASETIME) - ) - if errors: - raise StatsError("stats spec file is incorrect: " - + ", ".join(errors)) + def stop(self): + """ + stop the cc channel + """ + return self.cc_session.close() - while self.running: - self.mccs.check_command(False) + def check(self): + """ + check the cc channel + """ + return self.cc_session.check_command(False) def config_handler(self, new_config): """ @@ -177,222 +247,174 @@ class Stats: """ logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_NEW_CONFIG, new_config) - # do nothing currently - return isc.config.create_answer(0) - def command_handler(self, command, kwargs): + # do nothing currently + return create_answer(0) + + def command_handler(self, command, *args, **kwargs): """ handle commands from the cc channel """ + # add 'command_' suffix in order to execute the command via bindctl name = 'command_' + command - if name in self.callbacks: - callback = self.callbacks[name] - if kwargs: - return callback(**kwargs) - else: - return callback() + + if name in self.events: + event = self.events[name] + return event(*args, **kwargs) else: - logger.error(STATS_RECEIVED_UNKNOWN_COMMAND, command) - return isc.config.create_answer(1, "Unknown command: '"+str(command)+"'") + return self.command_unknown(command, args) - def update_modules(self): - """ - updates information of each module. This method gets each - module's information from the config manager and sets it into - self.modules. If its getting from the config manager fails, it - raises StatsError.
- """ - modules = {} - seq = self.cc_session.group_sendmsg( - isc.config.ccsession.create_command( - isc.config.ccsession.COMMAND_GET_STATISTICS_SPEC), - 'ConfigManager') - (answer, env) = self.cc_session.group_recvmsg(False, seq) - if answer: - (rcode, value) = isc.config.ccsession.parse_answer(answer) - if rcode == 0: - for mod in value: - spec = { "module_name" : mod } - if value[mod] and type(value[mod]) is list: - spec["statistics"] = value[mod] - modules[mod] = isc.config.module_spec.ModuleSpec(spec) - else: - raise StatsError("Updating module spec fails: " + str(value)) - modules[self.module_name] = self.mccs.get_module_spec() - self.modules = modules - - def get_statistics_data(self, owner=None, name=None): - """ - returns statistics data which stats module has of each - module. If it can't find specified statistics data, it raises - StatsError. - """ - self.update_statistics_data() - if owner and name: - try: - return self.statistics_data[owner][name] - except KeyError: - pass - elif owner: - try: - return self.statistics_data[owner] - except KeyError: - pass - elif name: - pass - else: - return self.statistics_data - raise StatsError("No statistics data found: " - + "owner: " + str(owner) + ", " - + "name: " + str(name)) - - def update_statistics_data(self, owner=None, **data): - """ - change statistics date of specified module into specified - data. It updates information of each module first, and it - updates statistics data. If specified data is invalid for - statistics spec of specified owner, it returns a list of error - messeges. If there is no error or if neither owner nor data is - specified in args, it returns None. 
- """ - self.update_modules() - statistics_data = {} - for (name, module) in self.modules.items(): - value = get_spec_defaults(module.get_statistics_spec()) - if module.validate_statistics(True, value): - statistics_data[name] = value - for (name, value) in self.statistics_data.items(): - if name in statistics_data: - statistics_data[name].update(value) - else: - statistics_data[name] = value - self.statistics_data = statistics_data - if owner and data: - errors = [] - try: - if self.modules[owner].validate_statistics(False, data, errors): - self.statistics_data[owner].update(data) - return - except KeyError: - errors.append("unknown module name: " + str(owner)) - return errors - - def command_status(self): - """ - handle status command - """ - logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_STATUS_COMMAND) - return isc.config.create_answer( - 0, "Stats is up. (PID " + str(os.getpid()) + ")") - - def command_shutdown(self): + def command_shutdown(self, args): """ handle shutdown command """ logger.info(STATS_RECEIVED_SHUTDOWN_COMMAND) - self.running = False - return isc.config.create_answer(0) + self.subject.running = False + return create_answer(0) - def command_show(self, owner=None, name=None): - """ - handle show command - """ - if owner or name: - logger.debug(DBG_STATS_MESSAGING, - STATS_RECEIVED_SHOW_NAME_COMMAND, - str(owner)+", "+str(name)) - else: - logger.debug(DBG_STATS_MESSAGING, - STATS_RECEIVED_SHOW_ALL_COMMAND) - errors = self.update_statistics_data( - self.module_name, - timestamp=get_timestamp(), - report_time=get_datetime() - ) - if errors: - raise StatsError("stats spec file is incorrect: " - + ", ".join(errors)) - try: - return isc.config.create_answer( - 0, self.get_statistics_data(owner, name)) - except StatsError: - return isc.config.create_answer( - 1, "specified arguments are incorrect: " \ - + "owner: " + str(owner) + ", name: " + str(name)) - - def command_showschema(self, owner=None, name=None): - """ - handle show command - """ - if 
owner or name: - logger.debug(DBG_STATS_MESSAGING, - STATS_RECEIVED_SHOWSCHEMA_NAME_COMMAND, - str(owner)+", "+str(name)) - else: - logger.debug(DBG_STATS_MESSAGING, - STATS_RECEIVED_SHOWSCHEMA_ALL_COMMAND) - self.update_modules() - schema = {} - schema_byname = {} - for mod in self.modules: - spec = self.modules[mod].get_statistics_spec() - schema_byname[mod] = {} - if spec: - schema[mod] = spec - for item in spec: - schema_byname[mod][item['item_name']] = item - if owner: - try: - if name: - return isc.config.create_answer(0, schema_byname[owner][name]) - else: - return isc.config.create_answer(0, schema[owner]) - except KeyError: - pass - else: - if name: - return isc.config.create_answer(1, "module name is not specified") - else: - return isc.config.create_answer(0, schema) - return isc.config.create_answer( - 1, "specified arguments are incorrect: " \ - + "owner: " + str(owner) + ", name: " + str(name)) - - def command_set(self, owner, data): + def command_set(self, args, stats_data={}): """ handle set command """ - errors = self.update_statistics_data(owner, **data) - if errors: - return isc.config.create_answer( - 1, "errors while setting statistics data: " \ - + ", ".join(errors)) - errors = self.update_statistics_data( - self.module_name, last_update_time=get_datetime() ) - if errors: - raise StatsError("stats spec file is incorrect: " - + ", ".join(errors)) - return isc.config.create_answer(0) + # 'args' must be dictionary type + self.stats_data.update(args['stats_data']) -if __name__ == "__main__": + # overwrite "stats.LastUpdateTime" + self.stats_data['stats.last_update_time'] = get_datetime() + + return create_answer(0) + + def command_remove(self, args, stats_item_name=''): + """ + handle remove command + """ + + # 'args' must be dictionary type + if args and args['stats_item_name'] in self.stats_data: + stats_item_name = args['stats_item_name'] + + logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_REMOVE_COMMAND, + stats_item_name) + + # just remove 
one item + self.stats_data.pop(stats_item_name) + + return create_answer(0) + + def command_show(self, args, stats_item_name=''): + """ + handle show command + """ + + # always overwrite 'report_time' and 'stats.timestamp' + # if "show" command invoked + self.stats_data['report_time'] = get_datetime() + self.stats_data['stats.timestamp'] = get_timestamp() + + # if with args + if args and args['stats_item_name'] in self.stats_data: + stats_item_name = args['stats_item_name'] + logger.debug(DBG_STATS_MESSAGING, + STATS_RECEIVED_SHOW_NAME_COMMAND, + stats_item_name) + return create_answer(0, {stats_item_name: self.stats_data[stats_item_name]}) + + logger.debug(DBG_STATS_MESSAGING, + STATS_RECEIVED_SHOW_ALL_COMMAND) + return create_answer(0, self.stats_data) + + def command_reset(self, args): + """ + handle reset command + """ + logger.debug(DBG_STATS_MESSAGING, + STATS_RECEIVED_RESET_COMMAND) + + # re-initialize internal variables + self.stats_data = self.initialize_data(self.stats_spec) + + # reset initial value + self.stats_data['stats.boot_time'] = self.boot_time + self.stats_data['stats.start_time'] = get_datetime() + self.stats_data['stats.last_update_time'] = get_datetime() + self.stats_data['stats.lname'] = self.session.lname + + return create_answer(0) + + def command_status(self, args): + """ + handle status command + """ + logger.debug(DBG_STATS_MESSAGING, STATS_RECEIVED_STATUS_COMMAND) + # just return "I'm alive." 
+ return create_answer(0, "I'm alive.") + + def command_unknown(self, command, args): + """ + handle an unknown command + """ + logger.error(STATS_RECEIVED_UNKNOWN_COMMAND, command) + return create_answer(1, "Unknown command: '"+str(command)+"'") + + + def initialize_data(self, spec): + """ + initialize stats data + """ + def __get_init_val(spec): + if spec['item_type'] == 'null': + return None + elif spec['item_type'] == 'boolean': + return bool(spec.get('item_default', False)) + elif spec['item_type'] == 'string': + return str(spec.get('item_default', '')) + elif spec['item_type'] in set(['number', 'integer']): + return int(spec.get('item_default', 0)) + elif spec['item_type'] in set(['float', 'double', 'real']): + return float(spec.get('item_default', 0.0)) + elif spec['item_type'] in set(['list', 'array']): + return spec.get('item_default', + [ __get_init_val(s) for s in spec['list_item_spec'] ]) + elif spec['item_type'] in set(['map', 'object']): + return spec.get('item_default', + dict([ (s['item_name'], __get_init_val(s)) for s in spec['map_item_spec'] ]) ) + else: + return spec.get('item_default') + return dict([ (s['item_name'], __get_init_val(s)) for s in spec ]) + +def get_timestamp(): + """ + get current timestamp + """ + return time() + +def get_datetime(): + """ + get current datetime + """ + return strftime("%Y-%m-%dT%H:%M:%SZ", gmtime()) + +def main(session=None): try: parser = OptionParser() - parser.add_option( - "-v", "--verbose", dest="verbose", action="store_true", - help="display more about what is going on") + parser.add_option("-v", "--verbose", dest="verbose", action="store_true", + help="display more about what is going on") (options, args) = parser.parse_args() if options.verbose: isc.log.init("b10-stats", "DEBUG", 99) - stats = Stats() - stats.start() + subject = SessionSubject(session=session) + listener = CCSessionListener(subject) + subject.start() + while subject.running: + subject.check() + subject.stop() + except OptionValueError 
as ove: logger.fatal(STATS_BAD_OPTION_VALUE, ove) - sys.exit(1) except SessionError as se: logger.fatal(STATS_CC_SESSION_ERROR, se) - sys.exit(1) - except StatsError as se: - logger.fatal(STATS_START_ERROR, se) - sys.exit(1) except KeyboardInterrupt as kie: logger.info(STATS_STOPPED_BY_KEYBOARD) + +if __name__ == "__main__": + main() diff --git a/src/bin/stats/stats.spec b/src/bin/stats/stats.spec index e716b62279..635eb486a1 100644 --- a/src/bin/stats/stats.spec +++ b/src/bin/stats/stats.spec @@ -6,51 +6,18 @@ "commands": [ { "command_name": "status", - "command_description": "Show status of the stats daemon", - "command_args": [] - }, - { - "command_name": "shutdown", - "command_description": "Shut down the stats module", + "command_description": "identify whether stats module is alive or not", "command_args": [] }, { "command_name": "show", - "command_description": "Show the specified/all statistics data", + "command_description": "show the specified/all statistics data", "command_args": [ { - "item_name": "owner", + "item_name": "stats_item_name", "item_type": "string", "item_optional": true, - "item_default": "", - "item_description": "module name of the owner of the statistics data" - }, - { - "item_name": "name", - "item_type": "string", - "item_optional": true, - "item_default": "", - "item_description": "statistics item name of the owner" - } - ] - }, - { - "command_name": "showschema", - "command_description": "show the specified/all statistics shema", - "command_args": [ - { - "item_name": "owner", - "item_type": "string", - "item_optional": true, - "item_default": "", - "item_description": "module name of the owner of the statistics data" - }, - { - "item_name": "name", - "item_type": "string", - "item_optional": true, - "item_default": "", - "item_description": "statistics item name of the owner" + "item_default": "" } ] }, @@ -59,21 +26,35 @@ "command_description": "set the value of specified name in statistics data", "command_args": [ { - 
"item_name": "owner", - "item_type": "string", - "item_optional": false, - "item_default": "", - "item_description": "module name of the owner of the statistics data" - }, - { - "item_name": "data", + "item_name": "stats_data", "item_type": "map", "item_optional": false, "item_default": {}, - "item_description": "statistics data set of the owner", "map_item_spec": [] } ] + }, + { + "command_name": "remove", + "command_description": "remove the specified name from statistics data", + "command_args": [ + { + "item_name": "stats_item_name", + "item_type": "string", + "item_optional": false, + "item_default": "" + } + ] + }, + { + "command_name": "reset", + "command_description": "reset all statistics data to default values except for several constant names", + "command_args": [] + }, + { + "command_name": "shutdown", + "command_description": "Shut down the stats module", + "command_args": [] } ], "statistics": [ @@ -119,7 +100,7 @@ "item_default": "", "item_title": "Local Name", "item_description": "A localname of stats module given via CC protocol" - } + } ] } } diff --git a/src/bin/stats/stats_httpd.py.in b/src/bin/stats/stats_httpd.py.in old mode 100644 new mode 100755 index f8a09e5610..74298cf288 --- a/src/bin/stats/stats_httpd.py.in +++ b/src/bin/stats/stats_httpd.py.in @@ -57,6 +57,7 @@ else: BASE_LOCATION = "@datadir@" + os.sep + "@PACKAGE@" BASE_LOCATION = BASE_LOCATION.replace("${datarootdir}", DATAROOTDIR).replace("${prefix}", PREFIX) SPECFILE_LOCATION = BASE_LOCATION + os.sep + "stats-httpd.spec" +SCHEMA_SPECFILE_LOCATION = BASE_LOCATION + os.sep + "stats-schema.spec" XML_TEMPLATE_LOCATION = BASE_LOCATION + os.sep + "stats-httpd-xml.tpl" XSD_TEMPLATE_LOCATION = BASE_LOCATION + os.sep + "stats-httpd-xsd.tpl" XSL_TEMPLATE_LOCATION = BASE_LOCATION + os.sep + "stats-httpd-xsl.tpl" @@ -68,7 +69,7 @@ XSD_URL_PATH = '/bind10/statistics/xsd' XSL_URL_PATH = '/bind10/statistics/xsl' # TODO: This should be considered later. 
XSD_NAMESPACE = 'http://bind10.isc.org' + XSD_URL_PATH -DEFAULT_CONFIG = dict(version=0, listen_on=[('127.0.0.1', 8000)]) +DEFAULT_CONFIG = dict(listen_on=[('127.0.0.1', 8000)]) # Assign this process name isc.util.process.rename() @@ -160,6 +161,8 @@ class StatsHttpd: self.httpd = [] self.open_mccs() self.load_config() + self.load_templates() + self.open_httpd() def open_mccs(self): """Opens a ModuleCCSession object""" @@ -168,6 +171,10 @@ class StatsHttpd: self.mccs = isc.config.ModuleCCSession( SPECFILE_LOCATION, self.config_handler, self.command_handler) self.cc_session = self.mccs._session + # read spec file of stats module and subscribe 'Stats' + self.stats_module_spec = isc.config.module_spec_from_file(SCHEMA_SPECFILE_LOCATION) + self.stats_config_spec = self.stats_module_spec.get_config_spec() + self.stats_module_name = self.stats_module_spec.get_module_name() def close_mccs(self): """Closes a ModuleCCSession object""" @@ -201,41 +208,45 @@ class StatsHttpd: for addr in self.http_addrs: self.httpd.append(self._open_httpd(addr)) - def _open_httpd(self, server_address): - httpd = None + def _open_httpd(self, server_address, address_family=None): try: - # get address family for the server_address before - # creating HttpServer object - address_family = socket.getaddrinfo(*server_address)[0][0] - HttpServer.address_family = address_family + # try IPv6 at first + if address_family is not None: + HttpServer.address_family = address_family + elif socket.has_ipv6: + HttpServer.address_family = socket.AF_INET6 httpd = HttpServer( server_address, HttpHandler, self.xml_handler, self.xsd_handler, self.xsl_handler, self.write_log) - logger.info(STATHTTPD_STARTED, server_address[0], - server_address[1]) - return httpd except (socket.gaierror, socket.error, OverflowError, TypeError) as err: - if httpd: - httpd.server_close() - raise HttpServerError( - "Invalid address %s, port %s: %s: %s" % - (server_address[0], server_address[1], - err.__class__.__name__, err)) + # try 
IPv4 next + if HttpServer.address_family == socket.AF_INET6: + httpd = self._open_httpd(server_address, socket.AF_INET) + else: + raise HttpServerError( + "Invalid address %s, port %s: %s: %s" % + (server_address[0], server_address[1], + err.__class__.__name__, err)) + else: + logger.info(STATHTTPD_STARTED, server_address[0], + server_address[1]) + return httpd def close_httpd(self): """Closes sockets for HTTP""" - while len(self.httpd)>0: - ht = self.httpd.pop() + if len(self.httpd) == 0: + return + for ht in self.httpd: logger.info(STATHTTPD_CLOSING, ht.server_address[0], ht.server_address[1]) ht.server_close() + self.httpd = [] def start(self): """Starts StatsHttpd objects to run. Waiting for client requests by using select.select functions""" - self.open_httpd() self.mccs.start() self.running = True while self.running: @@ -299,9 +310,9 @@ class StatsHttpd: except HttpServerError as err: logger.error(STATHTTPD_SERVER_ERROR, err) # restore old config - self.load_config(old_config) - self.open_httpd() - return isc.config.ccsession.create_answer(1, str(err)) + self.config_handler(old_config) + return isc.config.ccsession.create_answer( + 1, "[b10-stats-httpd] %s" % err) else: return isc.config.ccsession.create_answer(0) @@ -330,7 +341,8 @@ class StatsHttpd: the data which obtains from it""" try: seq = self.cc_session.group_sendmsg( - isc.config.ccsession.create_command('show'), 'Stats') + isc.config.ccsession.create_command('show'), + self.stats_module_name) (answer, env) = self.cc_session.group_recvmsg(False, seq) if answer: (rcode, value) = isc.config.ccsession.parse_answer(answer) @@ -345,82 +357,34 @@ class StatsHttpd: raise StatsHttpdError("Stats module: %s" % str(value)) def get_stats_spec(self): - """Requests statistics data to the Stats daemon and returns - the data which obtains from it""" - try: - seq = self.cc_session.group_sendmsg( - isc.config.ccsession.create_command('showschema'), 'Stats') - (answer, env) = self.cc_session.group_recvmsg(False, seq) - 
if answer: - (rcode, value) = isc.config.ccsession.parse_answer(answer) - if rcode == 0: - return value - else: - raise StatsHttpdError("Stats module: %s" % str(value)) - except (isc.cc.session.SessionTimeout, - isc.cc.session.SessionError) as err: - raise StatsHttpdError("%s: %s" % - (err.__class__.__name__, err)) + """Just returns spec data""" + return self.stats_config_spec - def xml_handler(self): - """Handler which requests to Stats daemon to obtain statistics - data and returns the body of XML document""" - xml_list=[] - for (mod, spec) in self.get_stats_data().items(): - if not spec: continue - elem1 = xml.etree.ElementTree.Element(str(mod)) - for (k, v) in spec.items(): - elem2 = xml.etree.ElementTree.Element(str(k)) - elem2.text = str(v) - elem1.append(elem2) - # The coding conversion is tricky. xml..tostring() of Python 3.2 - # returns bytes (not string) regardless of the coding, while - # tostring() of Python 3.1 returns a string. To support both - # cases transparently, we first make sure tostring() returns - # bytes by specifying utf-8 and then convert the result to a - # plain string (code below assume it). - xml_list.append( - str(xml.etree.ElementTree.tostring(elem1, encoding='utf-8'), - encoding='us-ascii')) - xml_string = "".join(xml_list) - self.xml_body = self.open_template(XML_TEMPLATE_LOCATION).substitute( - xml_string=xml_string, - xsd_namespace=XSD_NAMESPACE, - xsd_url_path=XSD_URL_PATH, - xsl_url_path=XSL_URL_PATH) - assert self.xml_body is not None - return self.xml_body - - def xsd_handler(self): - """Handler which just returns the body of XSD document""" + def load_templates(self): + """Sets up the bodies of the XSD and XSL documents to be served to + HTTP clients. 
Before that, it also creates XML tag structures by + using the xml.etree.ElementTree.Element class and substitutes + concrete strings for the parameters embedded in the string.Template + object.""" # for XSD xsd_root = xml.etree.ElementTree.Element("all") # started with "all" tag - for (mod, spec) in self.get_stats_spec().items(): - if not spec: continue - alltag = xml.etree.ElementTree.Element("all") - for item in spec: - element = xml.etree.ElementTree.Element( - "element", - dict( name=item["item_name"], - type=item["item_type"] if item["item_type"].lower() != 'real' else 'float', - minOccurs="1", - maxOccurs="1" ), - ) - annotation = xml.etree.ElementTree.Element("annotation") - appinfo = xml.etree.ElementTree.Element("appinfo") - documentation = xml.etree.ElementTree.Element("documentation") - appinfo.text = item["item_title"] - documentation.text = item["item_description"] - annotation.append(appinfo) - annotation.append(documentation) - element.append(annotation) - alltag.append(element) - - complextype = xml.etree.ElementTree.Element("complexType") - complextype.append(alltag) - mod_element = xml.etree.ElementTree.Element("element", { "name" : mod }) - mod_element.append(complextype) - xsd_root.append(mod_element) + for item in self.get_stats_spec(): + element = xml.etree.ElementTree.Element( + "element", + dict( name=item["item_name"], + type=item["item_type"] if item["item_type"].lower() != 'real' else 'float', + minOccurs="1", + maxOccurs="1" ), + ) + annotation = xml.etree.ElementTree.Element("annotation") + appinfo = xml.etree.ElementTree.Element("appinfo") + documentation = xml.etree.ElementTree.Element("documentation") + appinfo.text = item["item_title"] + documentation.text = item["item_description"] + annotation.append(appinfo) + annotation.append(documentation) + element.append(annotation) + xsd_root.append(element) # The coding conversion is tricky. 
xml..tostring() of Python 3.2 # returns bytes (not string) regardless of the coding, while # tostring() of Python 3.1 returns a string. To support both @@ -434,33 +398,25 @@ class StatsHttpd: xsd_namespace=XSD_NAMESPACE ) assert self.xsd_body is not None - return self.xsd_body - def xsl_handler(self): - """Handler which just returns the body of XSL document""" # for XSL xsd_root = xml.etree.ElementTree.Element( "xsl:template", dict(match="*")) # started with xml:template tag - for (mod, spec) in self.get_stats_spec().items(): - if not spec: continue - for item in spec: - tr = xml.etree.ElementTree.Element("tr") - td0 = xml.etree.ElementTree.Element("td") - td0.text = str(mod) - td1 = xml.etree.ElementTree.Element( - "td", { "class" : "title", - "title" : item["item_description"] }) - td1.text = item["item_title"] - td2 = xml.etree.ElementTree.Element("td") - xsl_valueof = xml.etree.ElementTree.Element( - "xsl:value-of", - dict(select=mod+'/'+item["item_name"])) - td2.append(xsl_valueof) - tr.append(td0) - tr.append(td1) - tr.append(td2) - xsd_root.append(tr) + for item in self.get_stats_spec(): + tr = xml.etree.ElementTree.Element("tr") + td1 = xml.etree.ElementTree.Element( + "td", { "class" : "title", + "title" : item["item_description"] }) + td1.text = item["item_title"] + td2 = xml.etree.ElementTree.Element("td") + xsl_valueof = xml.etree.ElementTree.Element( + "xsl:value-of", + dict(select=item["item_name"])) + td2.append(xsl_valueof) + tr.append(td1) + tr.append(td2) + xsd_root.append(tr) # The coding conversion is tricky. xml..tostring() of Python 3.2 # returns bytes (not string) regardless of the coding, while # tostring() of Python 3.1 returns a string. 
To support both @@ -473,15 +429,47 @@ class StatsHttpd: xsl_string=xsl_string, xsd_namespace=XSD_NAMESPACE) assert self.xsl_body is not None + + def xml_handler(self): + """Handler which requests to Stats daemon to obtain statistics + data and returns the body of XML document""" + xml_list=[] + for (k, v) in self.get_stats_data().items(): + (k, v) = (str(k), str(v)) + elem = xml.etree.ElementTree.Element(k) + elem.text = v + # The coding conversion is tricky. xml..tostring() of Python 3.2 + # returns bytes (not string) regardless of the coding, while + # tostring() of Python 3.1 returns a string. To support both + # cases transparently, we first make sure tostring() returns + # bytes by specifying utf-8 and then convert the result to a + # plain string (code below assume it). + xml_list.append( + str(xml.etree.ElementTree.tostring(elem, encoding='utf-8'), + encoding='us-ascii')) + xml_string = "".join(xml_list) + self.xml_body = self.open_template(XML_TEMPLATE_LOCATION).substitute( + xml_string=xml_string, + xsd_namespace=XSD_NAMESPACE, + xsd_url_path=XSD_URL_PATH, + xsl_url_path=XSL_URL_PATH) + assert self.xml_body is not None + return self.xml_body + + def xsd_handler(self): + """Handler which just returns the body of XSD document""" + return self.xsd_body + + def xsl_handler(self): + """Handler which just returns the body of XSL document""" return self.xsl_body def open_template(self, file_name): """It opens a template file, and it loads all lines to a string variable and returns string. Template object includes the variable. 
Limitation of a file size isn't needed there.""" - f = open(file_name, 'r') - lines = "".join(f.readlines()) - f.close() + lines = "".join( + open(file_name, 'r').readlines()) assert lines is not None return string.Template(lines) diff --git a/src/bin/stats/stats_messages.mes b/src/bin/stats/stats_messages.mes index cfffb3adb8..9ad07cf493 100644 --- a/src/bin/stats/stats_messages.mes +++ b/src/bin/stats/stats_messages.mes @@ -28,6 +28,16 @@ control bus. A likely problem is that the message bus daemon This debug message is printed when the stats module has received a configuration update from the configuration manager. +% STATS_RECEIVED_REMOVE_COMMAND received command to remove %1 +A remove command for the given name was sent to the stats module, and +the given statistics value will now be removed. It will not appear in +statistics reports until it appears in a statistics update from a +module again. + +% STATS_RECEIVED_RESET_COMMAND received command to reset all statistics +The stats module received a command to clear all collected statistics. +The data is cleared until it receives an update from the modules again. + % STATS_RECEIVED_SHOW_ALL_COMMAND received command to show all statistics The stats module received a command to show all statistics that it has collected. @@ -62,15 +72,4 @@ installation problem, where the specification file stats.spec is from a different version of BIND 10 than the stats module itself. Please check your installation. -% STATS_STARTING starting -The stats module will be now starting. -% STATS_RECEIVED_SHOWSCHEMA_ALL_COMMAND received command to show all statistics schema -The stats module received a command to show all statistics schemas of all modules. - -% STATS_RECEIVED_SHOWSCHEMA_NAME_COMMAND received command to show statistics schema for %1 -The stats module received a command to show the specified statistics schema of the specified module. 
- -% STATS_START_ERROR stats module error: %1 -An internal error occurred while starting the stats module. The stats -module will be now shutting down. diff --git a/src/bin/stats/tests/Makefile.am b/src/bin/stats/tests/Makefile.am index 368e90c700..dad6c48bbc 100644 --- a/src/bin/stats/tests/Makefile.am +++ b/src/bin/stats/tests/Makefile.am @@ -1,7 +1,8 @@ +SUBDIRS = isc http testdata PYCOVERAGE_RUN = @PYCOVERAGE_RUN@ PYTESTS = b10-stats_test.py b10-stats-httpd_test.py -EXTRA_DIST = $(PYTESTS) test_utils.py -CLEANFILES = test_utils.pyc +EXTRA_DIST = $(PYTESTS) fake_time.py fake_socket.py fake_select.py +CLEANFILES = fake_time.pyc fake_socket.pyc fake_select.pyc # If necessary (rare cases), explicitly specify paths to dynamic libraries # required by loadable python modules. @@ -13,16 +14,15 @@ endif # test using command-line arguments, so use check-local target instead of TESTS check-local: if ENABLE_PYTHON_COVERAGE - touch $(abs_top_srcdir)/.coverage + touch $(abs_top_srcdir)/.coverage rm -f .coverage ${LN_S} $(abs_top_srcdir)/.coverage .coverage endif for pytest in $(PYTESTS) ; do \ echo Running test: $$pytest ; \ $(LIBRARY_PATH_PLACEHOLDER) \ - env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/bin/stats:$(abs_top_builddir)/src/bin/stats/tests:$(abs_top_builddir)/src/bin/msgq:$(abs_top_builddir)/src/lib/python/isc/config \ + env PYTHONPATH=$(abs_top_srcdir)/src/lib/python:$(abs_top_builddir)/src/lib/python:$(abs_top_builddir)/src/bin/stats:$(abs_top_builddir)/src/bin/stats/tests \ B10_FROM_SOURCE=$(abs_top_srcdir) \ - CONFIG_TESTDATA_PATH=$(abs_top_srcdir)/src/lib/config/tests/testdata \ $(PYCOVERAGE_RUN) $(abs_srcdir)/$$pytest || exit ; \ done diff --git a/src/bin/stats/tests/b10-stats-httpd_test.py b/src/bin/stats/tests/b10-stats-httpd_test.py index 8c84277930..6d72dc2f38 100644 --- a/src/bin/stats/tests/b10-stats-httpd_test.py +++ b/src/bin/stats/tests/b10-stats-httpd_test.py @@ -13,251 +13,147 @@ # 
NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION # WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -""" -In each of these tests we start several virtual components. They are -not the real components, no external processes are started. They are -just simple mock objects running each in its own thread and pretending -to be bind10 modules. This helps testing the stats http server in a -close to real environment. -""" - import unittest import os -import imp -import socket -import errno -import select +import http.server import string -import time -import threading -import http.client -import xml.etree.ElementTree +import fake_select +import imp +import sys +import fake_socket + +import isc.cc -import isc import stats_httpd -import stats -from test_utils import BaseModules, ThreadingServerManager, MyStats, MyStatsHttpd, TIMEOUT_SEC - -# set test name for logger -isc.log.init("b10-stats-httpd_test") +stats_httpd.socket = fake_socket +stats_httpd.select = fake_select DUMMY_DATA = { - 'Boss' : { - "boot_time": "2011-03-04T11:59:06Z" - }, - 'Auth' : { - "queries.tcp": 2, - "queries.udp": 3 - }, - 'Stats' : { - "report_time": "2011-03-04T11:59:19Z", - "boot_time": "2011-03-04T11:59:06Z", - "last_update_time": "2011-03-04T11:59:07Z", - "lname": "4d70d40a_c@host", - "timestamp": 1299239959.560846 - } + "auth.queries.tcp": 10000, + "auth.queries.udp": 12000, + "bind10.boot_time": "2011-03-04T11:59:05Z", + "report_time": "2011-03-04T11:59:19Z", + "stats.boot_time": "2011-03-04T11:59:06Z", + "stats.last_update_time": "2011-03-04T11:59:07Z", + "stats.lname": "4d70d40a_c@host", + "stats.start_time": "2011-03-04T11:59:06Z", + "stats.timestamp": 1299239959.560846 } +def push_answer(stats_httpd): + stats_httpd.cc_session.group_sendmsg( + { 'result': + [ 0, DUMMY_DATA ] }, "Stats") + +def pull_query(stats_httpd): + (msg, env) = stats_httpd.cc_session.group_recvmsg() + if 'result' in msg: + (ret, arg) = isc.config.ccsession.parse_answer(msg) + else: + (ret, arg) = 
isc.config.ccsession.parse_command(msg) + return (ret, arg, env) + class TestHttpHandler(unittest.TestCase): """Tests for HttpHandler class""" def setUp(self): - self.base = BaseModules() - self.stats_server = ThreadingServerManager(MyStats) - self.stats = self.stats_server.server - self.stats_server.run() - - def tearDown(self): - self.stats_server.shutdown() - self.base.shutdown() + self.stats_httpd = stats_httpd.StatsHttpd() + self.assertTrue(type(self.stats_httpd.httpd) is list) + self.httpd = self.stats_httpd.httpd def test_do_GET(self): - (address, port) = ('127.0.0.1', 65450) - statshttpd_server = ThreadingServerManager(MyStatsHttpd) - self.stats_httpd = statshttpd_server.server - self.stats_httpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) - self.assertTrue(type(self.stats_httpd.httpd) is list) - self.assertEqual(len(self.stats_httpd.httpd), 0) - statshttpd_server.run() - client = http.client.HTTPConnection(address, port) - client._http_vsn_str = 'HTTP/1.0\n' - client.connect() + for ht in self.httpd: + self._test_do_GET(ht._handler) + + def _test_do_GET(self, handler): # URL is '/bind10/statistics/xml' - client.putrequest('GET', stats_httpd.XML_URL_PATH) - client.endheaders() - response = client.getresponse() - self.assertEqual(response.getheader("Content-type"), "text/xml") - self.assertTrue(int(response.getheader("Content-Length")) > 0) - self.assertEqual(response.status, 200) - root = xml.etree.ElementTree.parse(response).getroot() - self.assertTrue(root.tag.find('stats_data') > 0) - for (k,v) in root.attrib.items(): - if k.find('schemaLocation') > 0: - self.assertEqual(v, stats_httpd.XSD_NAMESPACE + ' ' + stats_httpd.XSD_URL_PATH) - for mod in DUMMY_DATA: - for (item, value) in DUMMY_DATA[mod].items(): - self.assertIsNotNone(root.find(mod + '/' + item)) + handler.path = stats_httpd.XML_URL_PATH + push_answer(self.stats_httpd) + handler.do_GET() + (ret, arg, env) = pull_query(self.stats_httpd) + self.assertEqual(ret, "show") + 
self.assertIsNone(arg) + self.assertTrue('group' in env) + self.assertEqual(env['group'], 'Stats') + self.assertEqual(handler.response.code, 200) + self.assertEqual(handler.response.headers["Content-type"], "text/xml") + self.assertTrue(handler.response.headers["Content-Length"] > 0) + self.assertTrue(handler.response.wrote_headers) + self.assertTrue(handler.response.body.find(stats_httpd.XSD_NAMESPACE)>0) + self.assertTrue(handler.response.body.find(stats_httpd.XSD_URL_PATH)>0) + for (k, v) in DUMMY_DATA.items(): + self.assertTrue(handler.response.body.find(str(k))>0) + self.assertTrue(handler.response.body.find(str(v))>0) # URL is '/bind10/statitics/xsd' - client.putrequest('GET', stats_httpd.XSD_URL_PATH) - client.endheaders() - response = client.getresponse() - self.assertEqual(response.getheader("Content-type"), "text/xml") - self.assertTrue(int(response.getheader("Content-Length")) > 0) - self.assertEqual(response.status, 200) - root = xml.etree.ElementTree.parse(response).getroot() - url_xmlschema = '{http://www.w3.org/2001/XMLSchema}' - tags = [ url_xmlschema + t for t in [ 'element', 'complexType', 'all', 'element' ] ] - xsdpath = '/'.join(tags) - self.assertTrue(root.tag.find('schema') > 0) - self.assertTrue(hasattr(root, 'attrib')) - self.assertTrue('targetNamespace' in root.attrib) - self.assertEqual(root.attrib['targetNamespace'], - stats_httpd.XSD_NAMESPACE) - for elm in root.findall(xsdpath): - self.assertIsNotNone(elm.attrib['name']) - self.assertTrue(elm.attrib['name'] in DUMMY_DATA) + handler.path = stats_httpd.XSD_URL_PATH + handler.do_GET() + self.assertEqual(handler.response.code, 200) + self.assertEqual(handler.response.headers["Content-type"], "text/xml") + self.assertTrue(handler.response.headers["Content-Length"] > 0) + self.assertTrue(handler.response.wrote_headers) + self.assertTrue(handler.response.body.find(stats_httpd.XSD_NAMESPACE)>0) + for (k, v) in DUMMY_DATA.items(): + self.assertTrue(handler.response.body.find(str(k))>0) # URL is 
'/bind10/statitics/xsl' - client.putrequest('GET', stats_httpd.XSL_URL_PATH) - client.endheaders() - response = client.getresponse() - self.assertEqual(response.getheader("Content-type"), "text/xml") - self.assertTrue(int(response.getheader("Content-Length")) > 0) - self.assertEqual(response.status, 200) - root = xml.etree.ElementTree.parse(response).getroot() - url_trans = '{http://www.w3.org/1999/XSL/Transform}' - url_xhtml = '{http://www.w3.org/1999/xhtml}' - xslpath = url_trans + 'template/' + url_xhtml + 'tr' - self.assertEqual(root.tag, url_trans + 'stylesheet') - for tr in root.findall(xslpath): - tds = tr.findall(url_xhtml + 'td') - self.assertIsNotNone(tds) - self.assertEqual(type(tds), list) - self.assertTrue(len(tds) > 2) - self.assertTrue(hasattr(tds[0], 'text')) - self.assertTrue(tds[0].text in DUMMY_DATA) - valueof = tds[2].find(url_trans + 'value-of') - self.assertIsNotNone(valueof) - self.assertTrue(hasattr(valueof, 'attrib')) - self.assertIsNotNone(valueof.attrib) - self.assertTrue('select' in valueof.attrib) - self.assertTrue(valueof.attrib['select'] in \ - [ tds[0].text+'/'+item for item in DUMMY_DATA[tds[0].text].keys() ]) + handler.path = stats_httpd.XSL_URL_PATH + handler.do_GET() + self.assertEqual(handler.response.code, 200) + self.assertEqual(handler.response.headers["Content-type"], "text/xml") + self.assertTrue(handler.response.headers["Content-Length"] > 0) + self.assertTrue(handler.response.wrote_headers) + self.assertTrue(handler.response.body.find(stats_httpd.XSD_NAMESPACE)>0) + for (k, v) in DUMMY_DATA.items(): + self.assertTrue(handler.response.body.find(str(k))>0) # 302 redirect - client._http_vsn_str = 'HTTP/1.1' - client.putrequest('GET', '/') - client.putheader('Host', address) - client.endheaders() - response = client.getresponse() - self.assertEqual(response.status, 302) - self.assertEqual(response.getheader('Location'), - "http://%s:%d%s" % (address, port, stats_httpd.XML_URL_PATH)) + handler.path = '/' + handler.headers = 
{'Host': 'my.host.domain'} + handler.do_GET() + self.assertEqual(handler.response.code, 302) + self.assertEqual(handler.response.headers["Location"], + "http://my.host.domain%s" % stats_httpd.XML_URL_PATH) - # # 404 NotFound - client._http_vsn_str = 'HTTP/1.0' - client.putrequest('GET', '/path/to/foo/bar') - client.endheaders() - response = client.getresponse() - self.assertEqual(response.status, 404) + # 404 NotFound + handler.path = '/path/to/foo/bar' + handler.headers = {} + handler.do_GET() + self.assertEqual(handler.response.code, 404) - client.close() - statshttpd_server.shutdown() - - def test_do_GET_failed1(self): # failure case(connection with Stats is down) - (address, port) = ('127.0.0.1', 65451) - statshttpd_server = ThreadingServerManager(MyStatsHttpd) - statshttpd = statshttpd_server.server - statshttpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) - statshttpd_server.run() - self.assertTrue(self.stats_server.server.running) - self.stats_server.shutdown() - self.assertFalse(self.stats_server.server.running) - statshttpd.cc_session.set_timeout(milliseconds=TIMEOUT_SEC/1000) - client = http.client.HTTPConnection(address, port) - client.connect() + handler.path = stats_httpd.XML_URL_PATH + push_answer(self.stats_httpd) + self.assertFalse(self.stats_httpd.cc_session._socket._closed) + self.stats_httpd.cc_session._socket._closed = True + handler.do_GET() + self.stats_httpd.cc_session._socket._closed = False + self.assertEqual(handler.response.code, 500) + self.stats_httpd.cc_session._clear_queues() - # request XML - client.putrequest('GET', stats_httpd.XML_URL_PATH) - client.endheaders() - response = client.getresponse() - self.assertEqual(response.status, 500) - - # request XSD - client.putrequest('GET', stats_httpd.XSD_URL_PATH) - client.endheaders() - response = client.getresponse() - self.assertEqual(response.status, 500) - - # request XSL - client.putrequest('GET', stats_httpd.XSL_URL_PATH) - client.endheaders() - response = 
client.getresponse() - self.assertEqual(response.status, 500) - - client.close() - statshttpd_server.shutdown() - - def test_do_GET_failed2(self): - # failure case(connection with Stats is down) - (address, port) = ('127.0.0.1', 65452) - statshttpd_server = ThreadingServerManager(MyStatsHttpd) - self.stats_httpd = statshttpd_server.server - self.stats_httpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) - statshttpd_server.run() - self.stats.mccs.set_command_handler( - lambda cmd, args: \ - isc.config.ccsession.create_answer(1, "I have an error.") - ) - client = http.client.HTTPConnection(address, port) - client.connect() - - # request XML - client.putrequest('GET', stats_httpd.XML_URL_PATH) - client.endheaders() - response = client.getresponse() - self.assertEqual(response.status, 500) - - # request XSD - client.putrequest('GET', stats_httpd.XSD_URL_PATH) - client.endheaders() - response = client.getresponse() - self.assertEqual(response.status, 500) - - # request XSL - client.putrequest('GET', stats_httpd.XSL_URL_PATH) - client.endheaders() - response = client.getresponse() - self.assertEqual(response.status, 500) - - client.close() - statshttpd_server.shutdown() + # failure case(Stats module returns err) + handler.path = stats_httpd.XML_URL_PATH + self.stats_httpd.cc_session.group_sendmsg( + { 'result': [ 1, "I have an error." 
] }, "Stats") + self.assertFalse(self.stats_httpd.cc_session._socket._closed) + self.stats_httpd.cc_session._socket._closed = False + handler.do_GET() + self.assertEqual(handler.response.code, 500) + self.stats_httpd.cc_session._clear_queues() def test_do_HEAD(self): - (address, port) = ('127.0.0.1', 65453) - statshttpd_server = ThreadingServerManager(MyStatsHttpd) - self.stats_httpd = statshttpd_server.server - self.stats_httpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) - statshttpd_server.run() - client = http.client.HTTPConnection(address, port) - client.connect() - client.putrequest('HEAD', stats_httpd.XML_URL_PATH) - client.endheaders() - response = client.getresponse() - self.assertEqual(response.status, 200) + for ht in self.httpd: + self._test_do_HEAD(ht._handler) - client.putrequest('HEAD', '/path/to/foo/bar') - client.endheaders() - response = client.getresponse() - self.assertEqual(response.status, 404) - client.close() - statshttpd_server.shutdown() + def _test_do_HEAD(self, handler): + handler.path = '/path/to/foo/bar' + handler.do_HEAD() + self.assertEqual(handler.response.code, 404) class TestHttpServerError(unittest.TestCase): """Tests for HttpServerError exception""" + def test_raises(self): try: raise stats_httpd.HttpServerError('Nothing') @@ -266,16 +162,17 @@ class TestHttpServerError(unittest.TestCase): class TestHttpServer(unittest.TestCase): """Tests for HttpServer class""" - def setUp(self): - self.base = BaseModules() - - def tearDown(self): - self.base.shutdown() def test_httpserver(self): - statshttpd = stats_httpd.StatsHttpd() - self.assertEqual(type(statshttpd.httpd), list) - self.assertEqual(len(statshttpd.httpd), 0) + self.stats_httpd = stats_httpd.StatsHttpd() + for ht in self.stats_httpd.httpd: + self.assertTrue(ht.server_address in self.stats_httpd.http_addrs) + self.assertEqual(ht.xml_handler, self.stats_httpd.xml_handler) + self.assertEqual(ht.xsd_handler, self.stats_httpd.xsd_handler) + 
self.assertEqual(ht.xsl_handler, self.stats_httpd.xsl_handler) + self.assertEqual(ht.log_writer, self.stats_httpd.write_log) + self.assertTrue(isinstance(ht._handler, stats_httpd.HttpHandler)) + self.assertTrue(isinstance(ht.socket, fake_socket.socket)) class TestStatsHttpdError(unittest.TestCase): """Tests for StatsHttpdError exception""" @@ -290,173 +187,130 @@ class TestStatsHttpd(unittest.TestCase): """Tests for StatsHttpd class""" def setUp(self): - self.base = BaseModules() - self.stats_server = ThreadingServerManager(MyStats) - self.stats = self.stats_server.server - self.stats_server.run() + fake_socket._CLOSED = False + fake_socket.has_ipv6 = True self.stats_httpd = stats_httpd.StatsHttpd() - # checking IPv6 enabled on this platform - self.ipv6_enabled = True - try: - sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) - sock.bind(("::1",8000)) - sock.close() - except socket.error: - self.ipv6_enabled = False - def tearDown(self): self.stats_httpd.stop() - self.stats_server.shutdown() - self.base.shutdown() def test_init(self): - self.assertEqual(self.stats_httpd.running, False) - self.assertEqual(self.stats_httpd.poll_intval, 0.5) - self.assertEqual(self.stats_httpd.httpd, []) - self.assertEqual(type(self.stats_httpd.mccs), isc.config.ModuleCCSession) - self.assertEqual(type(self.stats_httpd.cc_session), isc.cc.Session) - self.assertEqual(len(self.stats_httpd.config), 2) - self.assertTrue('listen_on' in self.stats_httpd.config) - self.assertEqual(len(self.stats_httpd.config['listen_on']), 1) - self.assertTrue('address' in self.stats_httpd.config['listen_on'][0]) - self.assertTrue('port' in self.stats_httpd.config['listen_on'][0]) - self.assertTrue(('127.0.0.1', 8000) in set(self.stats_httpd.http_addrs)) - - def test_openclose_mccs(self): - statshttpd = stats_httpd.StatsHttpd() - statshttpd.close_mccs() - self.assertEqual(statshttpd.mccs, None) - statshttpd.open_mccs() - self.assertIsNotNone(statshttpd.mccs) - statshttpd.mccs = None - 
self.assertEqual(statshttpd.mccs, None) - self.assertEqual(statshttpd.close_mccs(), None) + self.assertFalse(self.stats_httpd.mccs.get_socket()._closed) + self.assertEqual(self.stats_httpd.mccs.get_socket().fileno(), + id(self.stats_httpd.mccs.get_socket())) + for ht in self.stats_httpd.httpd: + self.assertFalse(ht.socket._closed) + self.assertEqual(ht.socket.fileno(), id(ht.socket)) + fake_socket._CLOSED = True + self.assertRaises(isc.cc.session.SessionError, + stats_httpd.StatsHttpd) + fake_socket._CLOSED = False def test_mccs(self): - self.assertIsNotNone(self.stats_httpd.mccs.get_socket()) + self.stats_httpd.open_mccs() self.assertTrue( - isinstance(self.stats_httpd.mccs.get_socket(), socket.socket)) + isinstance(self.stats_httpd.mccs.get_socket(), fake_socket.socket)) self.assertTrue( isinstance(self.stats_httpd.cc_session, isc.cc.session.Session)) - self.statistics_spec = self.stats_httpd.get_stats_spec() - for mod in DUMMY_DATA: - self.assertTrue(mod in self.statistics_spec) - for cfg in self.statistics_spec[mod]: - self.assertTrue('item_name' in cfg) - self.assertTrue(cfg['item_name'] in DUMMY_DATA[mod]) - self.assertTrue(len(self.statistics_spec[mod]), len(DUMMY_DATA[mod])) - self.stats_httpd.close_mccs() - self.assertIsNone(self.stats_httpd.mccs) + self.assertTrue( + isinstance(self.stats_httpd.stats_module_spec, isc.config.ModuleSpec)) + for cfg in self.stats_httpd.stats_config_spec: + self.assertTrue('item_name' in cfg) + self.assertTrue(cfg['item_name'] in DUMMY_DATA) + self.assertTrue(len(self.stats_httpd.stats_config_spec), len(DUMMY_DATA)) + + def test_load_config(self): + self.stats_httpd.load_config() + self.assertTrue(('127.0.0.1', 8000) in set(self.stats_httpd.http_addrs)) def test_httpd(self): # dual stack (addresses is ipv4 and ipv6) - if self.ipv6_enabled: - self.assertTrue(('127.0.0.1', 8000) in set(self.stats_httpd.http_addrs)) - self.stats_httpd.http_addrs = [ ('::1', 8000), ('127.0.0.1', 8000) ] - self.assertTrue( - 
stats_httpd.HttpServer.address_family in set([socket.AF_INET, socket.AF_INET6])) - self.stats_httpd.open_httpd() - for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, socket.socket)) - self.stats_httpd.close_httpd() + fake_socket.has_ipv6 = True + self.assertTrue(('127.0.0.1', 8000) in set(self.stats_httpd.http_addrs)) + self.stats_httpd.http_addrs = [ ('::1', 8000), ('127.0.0.1', 8000) ] + self.assertTrue( + stats_httpd.HttpServer.address_family in set([fake_socket.AF_INET, fake_socket.AF_INET6])) + self.stats_httpd.open_httpd() + for ht in self.stats_httpd.httpd: + self.assertTrue(isinstance(ht.socket, fake_socket.socket)) + self.stats_httpd.close_httpd() # dual stack (address is ipv6) - if self.ipv6_enabled: - self.stats_httpd.http_addrs = [ ('::1', 8000) ] - self.stats_httpd.open_httpd() - for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, socket.socket)) - self.stats_httpd.close_httpd() + fake_socket.has_ipv6 = True + self.stats_httpd.http_addrs = [ ('::1', 8000) ] + self.stats_httpd.open_httpd() + for ht in self.stats_httpd.httpd: + self.assertTrue(isinstance(ht.socket, fake_socket.socket)) + self.stats_httpd.close_httpd() # dual stack (address is ipv4) - if self.ipv6_enabled: - self.stats_httpd.http_addrs = [ ('127.0.0.1', 8000) ] - self.stats_httpd.open_httpd() - for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, socket.socket)) - self.stats_httpd.close_httpd() + fake_socket.has_ipv6 = True + self.stats_httpd.http_addrs = [ ('127.0.0.1', 8000) ] + self.stats_httpd.open_httpd() + for ht in self.stats_httpd.httpd: + self.assertTrue(isinstance(ht.socket, fake_socket.socket)) + self.stats_httpd.close_httpd() # only-ipv4 single stack - if not self.ipv6_enabled: - self.stats_httpd.http_addrs = [ ('127.0.0.1', 8000) ] - self.stats_httpd.open_httpd() - for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, socket.socket)) - self.stats_httpd.close_httpd() + fake_socket.has_ipv6 = 
False + self.stats_httpd.http_addrs = [ ('127.0.0.1', 8000) ] + self.stats_httpd.open_httpd() + for ht in self.stats_httpd.httpd: + self.assertTrue(isinstance(ht.socket, fake_socket.socket)) + self.stats_httpd.close_httpd() # only-ipv4 single stack (force set ipv6 ) - if not self.ipv6_enabled: - self.stats_httpd.http_addrs = [ ('::1', 8000) ] - self.assertRaises(stats_httpd.HttpServerError, - self.stats_httpd.open_httpd) + fake_socket.has_ipv6 = False + self.stats_httpd.http_addrs = [ ('::1', 8000) ] + self.assertRaises(stats_httpd.HttpServerError, + self.stats_httpd.open_httpd) # hostname self.stats_httpd.http_addrs = [ ('localhost', 8000) ] self.stats_httpd.open_httpd() for ht in self.stats_httpd.httpd: - self.assertTrue(isinstance(ht.socket, socket.socket)) + self.assertTrue(isinstance(ht.socket, fake_socket.socket)) self.stats_httpd.close_httpd() self.stats_httpd.http_addrs = [ ('my.host.domain', 8000) ] - self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) - self.assertEqual(type(self.stats_httpd.httpd), list) - self.assertEqual(len(self.stats_httpd.httpd), 0) + self.stats_httpd.open_httpd() + for ht in self.stats_httpd.httpd: + self.assertTrue(isinstance(ht.socket, fake_socket.socket)) self.stats_httpd.close_httpd() # over flow of port number self.stats_httpd.http_addrs = [ ('', 80000) ] self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) - # negative self.stats_httpd.http_addrs = [ ('', -8000) ] self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) - # alphabet self.stats_httpd.http_addrs = [ ('', 'ABCDE') ] self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) - # Address already in use - self.statshttpd_server = ThreadingServerManager(MyStatsHttpd) - self.statshttpd_server.server.load_config({'listen_on' : [{ 'address': '127.0.0.1', 'port' : 65454 }]}) - self.statshttpd_server.run() - self.stats_httpd.load_config({'listen_on' : [{ 'address': '127.0.0.1', 'port' : 
65454 }]}) - self.assertRaises(stats_httpd.HttpServerError, self.stats_httpd.open_httpd) - self.statshttpd_server.shutdown() - - def test_running(self): - self.assertFalse(self.stats_httpd.running) - self.statshttpd_server = ThreadingServerManager(MyStatsHttpd) - self.stats_httpd = self.statshttpd_server.server - self.stats_httpd.load_config({'listen_on' : [{ 'address': '127.0.0.1', 'port' : 65455 }]}) - self.statshttpd_server.run() - self.assertTrue(self.stats_httpd.running) - self.statshttpd_server.shutdown() - self.assertFalse(self.stats_httpd.running) - - # failure case + def test_start(self): + self.stats_httpd.cc_session.group_sendmsg( + { 'command': [ "shutdown" ] }, "StatsHttpd") + self.stats_httpd.start() self.stats_httpd = stats_httpd.StatsHttpd() - self.stats_httpd.cc_session.close() self.assertRaises( - isc.cc.session.SessionError, self.stats_httpd.start) + fake_select.error, self.stats_httpd.start) - def test_select_failure(self): - def raise_select_except(*args): - raise select.error('dummy error') - def raise_select_except_with_errno(*args): - raise select.error(errno.EINTR) - (address, port) = ('127.0.0.1', 65456) - stats_httpd.select.select = raise_select_except - statshttpd = stats_httpd.StatsHttpd() - statshttpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) - self.assertRaises(select.error, statshttpd.start) - statshttpd.stop() - stats_httpd.select.select = raise_select_except_with_errno - statshttpd_server = ThreadingServerManager(MyStatsHttpd) - statshttpd = statshttpd_server.server - statshttpd.load_config({'listen_on' : [{ 'address': address, 'port' : port }]}) - statshttpd_server.run() - statshttpd_server.shutdown() + def test_stop(self): + # success case + fake_socket._CLOSED = False + self.stats_httpd.stop() + self.assertFalse(self.stats_httpd.running) + self.assertIsNone(self.stats_httpd.mccs) + for ht in self.stats_httpd.httpd: + self.assertTrue(ht.socket._closed) + 
self.assertTrue(self.stats_httpd.cc_session._socket._closed) + # failure case + self.stats_httpd.cc_session._socket._closed = False + self.stats_httpd.open_mccs() + self.stats_httpd.cc_session._socket._closed = True + self.stats_httpd.stop() # No exception is raised + self.stats_httpd.cc_session._socket._closed = False def test_open_template(self): # successful conditions @@ -509,40 +363,38 @@ class TestStatsHttpd(unittest.TestCase): self.assertEqual( self.stats_httpd.config_handler(dict(_UNKNOWN_KEY_=None)), isc.config.ccsession.create_answer( - 1, "Unknown known config: _UNKNOWN_KEY_")) - + 1, "Unknown known config: _UNKNOWN_KEY_")) self.assertEqual( self.stats_httpd.config_handler( - dict(listen_on=[dict(address="127.0.0.1",port=8000)])), + dict(listen_on=[dict(address="::2",port=8000)])), isc.config.ccsession.create_answer(0)) self.assertTrue("listen_on" in self.stats_httpd.config) for addr in self.stats_httpd.config["listen_on"]: self.assertTrue("address" in addr) self.assertTrue("port" in addr) - self.assertTrue(addr["address"] == "127.0.0.1") + self.assertTrue(addr["address"] == "::2") self.assertTrue(addr["port"] == 8000) - if self.ipv6_enabled: - self.assertEqual( - self.stats_httpd.config_handler( - dict(listen_on=[dict(address="::1",port=8000)])), - isc.config.ccsession.create_answer(0)) - self.assertTrue("listen_on" in self.stats_httpd.config) - for addr in self.stats_httpd.config["listen_on"]: - self.assertTrue("address" in addr) - self.assertTrue("port" in addr) - self.assertTrue(addr["address"] == "::1") - self.assertTrue(addr["port"] == 8000) - self.assertEqual( self.stats_httpd.config_handler( - dict(listen_on=[dict(address="127.0.0.1",port=54321)])), + dict(listen_on=[dict(address="::1",port=80)])), isc.config.ccsession.create_answer(0)) self.assertTrue("listen_on" in self.stats_httpd.config) for addr in self.stats_httpd.config["listen_on"]: self.assertTrue("address" in addr) self.assertTrue("port" in addr) - self.assertTrue(addr["address"] == 
"127.0.0.1") + self.assertTrue(addr["address"] == "::1") + self.assertTrue(addr["port"] == 80) + + self.assertEqual( + self.stats_httpd.config_handler( + dict(listen_on=[dict(address="1.2.3.4",port=54321)])), + isc.config.ccsession.create_answer(0)) + self.assertTrue("listen_on" in self.stats_httpd.config) + for addr in self.stats_httpd.config["listen_on"]: + self.assertTrue("address" in addr) + self.assertTrue("port" in addr) + self.assertTrue(addr["address"] == "1.2.3.4") self.assertTrue(addr["port"] == 54321) (ret, arg) = isc.config.ccsession.parse_answer( self.stats_httpd.config_handler( @@ -552,11 +404,10 @@ class TestStatsHttpd(unittest.TestCase): def test_xml_handler(self): orig_get_stats_data = stats_httpd.StatsHttpd.get_stats_data - stats_httpd.StatsHttpd.get_stats_data = lambda x: \ - { 'Dummy' : { 'foo':'bar' } } + stats_httpd.StatsHttpd.get_stats_data = lambda x: {'foo':'bar'} xml_body1 = stats_httpd.StatsHttpd().open_template( stats_httpd.XML_TEMPLATE_LOCATION).substitute( - xml_string='bar', + xml_string='bar', xsd_namespace=stats_httpd.XSD_NAMESPACE, xsd_url_path=stats_httpd.XSD_URL_PATH, xsl_url_path=stats_httpd.XSL_URL_PATH) @@ -564,8 +415,7 @@ class TestStatsHttpd(unittest.TestCase): self.assertEqual(type(xml_body1), str) self.assertEqual(type(xml_body2), str) self.assertEqual(xml_body1, xml_body2) - stats_httpd.StatsHttpd.get_stats_data = lambda x: \ - { 'Dummy' : {'bar':'foo'} } + stats_httpd.StatsHttpd.get_stats_data = lambda x: {'bar':'foo'} xml_body2 = stats_httpd.StatsHttpd().xml_handler() self.assertNotEqual(xml_body1, xml_body2) stats_httpd.StatsHttpd.get_stats_data = orig_get_stats_data @@ -573,41 +423,35 @@ class TestStatsHttpd(unittest.TestCase): def test_xsd_handler(self): orig_get_stats_spec = stats_httpd.StatsHttpd.get_stats_spec stats_httpd.StatsHttpd.get_stats_spec = lambda x: \ - { "Dummy" : - [{ - "item_name": "foo", - "item_type": "string", - "item_optional": False, - "item_default": "bar", - "item_description": "foo is bar", - 
"item_title": "Foo" - }] - } + [{ + "item_name": "foo", + "item_type": "string", + "item_optional": False, + "item_default": "bar", + "item_description": "foo is bar", + "item_title": "Foo" + }] xsd_body1 = stats_httpd.StatsHttpd().open_template( stats_httpd.XSD_TEMPLATE_LOCATION).substitute( - xsd_string=\ - '' \ + xsd_string='' \ + '' \ + 'Foo' \ + 'foo is bar' \ - + '' \ - + '', + + '', xsd_namespace=stats_httpd.XSD_NAMESPACE) xsd_body2 = stats_httpd.StatsHttpd().xsd_handler() self.assertEqual(type(xsd_body1), str) self.assertEqual(type(xsd_body2), str) self.assertEqual(xsd_body1, xsd_body2) stats_httpd.StatsHttpd.get_stats_spec = lambda x: \ - { "Dummy" : - [{ - "item_name": "bar", - "item_type": "string", - "item_optional": False, - "item_default": "foo", - "item_description": "bar is foo", - "item_title": "bar" - }] - } + [{ + "item_name": "bar", + "item_type": "string", + "item_optional": False, + "item_default": "foo", + "item_description": "bar is foo", + "item_title": "bar" + }] xsd_body2 = stats_httpd.StatsHttpd().xsd_handler() self.assertNotEqual(xsd_body1, xsd_body2) stats_httpd.StatsHttpd.get_stats_spec = orig_get_stats_spec @@ -615,22 +459,19 @@ class TestStatsHttpd(unittest.TestCase): def test_xsl_handler(self): orig_get_stats_spec = stats_httpd.StatsHttpd.get_stats_spec stats_httpd.StatsHttpd.get_stats_spec = lambda x: \ - { "Dummy" : - [{ - "item_name": "foo", - "item_type": "string", - "item_optional": False, - "item_default": "bar", - "item_description": "foo is bar", - "item_title": "Foo" - }] - } + [{ + "item_name": "foo", + "item_type": "string", + "item_optional": False, + "item_default": "bar", + "item_description": "foo is bar", + "item_title": "Foo" + }] xsl_body1 = stats_httpd.StatsHttpd().open_template( stats_httpd.XSL_TEMPLATE_LOCATION).substitute( xsl_string='' \ - + '' \ + '' \ - + '' \ + + '' \ + '', xsd_namespace=stats_httpd.XSD_NAMESPACE) xsl_body2 = stats_httpd.StatsHttpd().xsl_handler() @@ -638,16 +479,14 @@ class 
TestStatsHttpd(unittest.TestCase): self.assertEqual(type(xsl_body2), str) self.assertEqual(xsl_body1, xsl_body2) stats_httpd.StatsHttpd.get_stats_spec = lambda x: \ - { "Dummy" : - [{ - "item_name": "bar", - "item_type": "string", - "item_optional": False, - "item_default": "foo", - "item_description": "bar is foo", - "item_title": "bar" - }] - } + [{ + "item_name": "bar", + "item_type": "string", + "item_optional": False, + "item_default": "foo", + "item_description": "bar is foo", + "item_title": "bar" + }] xsl_body2 = stats_httpd.StatsHttpd().xsl_handler() self.assertNotEqual(xsl_body1, xsl_body2) stats_httpd.StatsHttpd.get_stats_spec = orig_get_stats_spec @@ -661,6 +500,8 @@ class TestStatsHttpd(unittest.TestCase): imp.reload(stats_httpd) os.environ["B10_FROM_SOURCE"] = tmppath imp.reload(stats_httpd) + stats_httpd.socket = fake_socket + stats_httpd.select = fake_select if __name__ == "__main__": unittest.main() diff --git a/src/bin/stats/tests/b10-stats_test.py b/src/bin/stats/tests/b10-stats_test.py index 7cf4f7ede0..a42c81d136 100644 --- a/src/bin/stats/tests/b10-stats_test.py +++ b/src/bin/stats/tests/b10-stats_test.py @@ -13,582 +13,649 @@ # NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION # WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -""" -In each of these tests we start several virtual components. They are -not the real components, no external processes are started. They are -just simple mock objects running each in its own thread and pretending -to be bind10 modules. This helps testing the stats module in a close -to real environment. 
-""" - -import unittest +# +# Tests for the stats module +# import os -import threading -import io +import sys import time +import unittest import imp - +from isc.cc.session import Session, SessionError +from isc.config.ccsession import ModuleCCSession, ModuleCCSessionError +from fake_time import time, strftime, gmtime import stats -import isc.cc.session -from test_utils import BaseModules, ThreadingServerManager, MyStats, send_command, TIMEOUT_SEC +stats.time = time +stats.strftime = strftime +stats.gmtime = gmtime +from stats import SessionSubject, CCSessionListener, get_timestamp, get_datetime +from fake_time import _TEST_TIME_SECS, _TEST_TIME_STRF -# set test name for logger -isc.log.init("b10-stats_test") - -class TestUtilties(unittest.TestCase): - items = [ - { 'item_name': 'test_int1', 'item_type': 'integer', 'item_default': 12345 }, - { 'item_name': 'test_real1', 'item_type': 'real', 'item_default': 12345.6789 }, - { 'item_name': 'test_bool1', 'item_type': 'boolean', 'item_default': True }, - { 'item_name': 'test_str1', 'item_type': 'string', 'item_default': 'ABCD' }, - { 'item_name': 'test_list1', 'item_type': 'list', 'item_default': [1,2,3], - 'list_item_spec' : [ { 'item_name': 'one', 'item_type': 'integer' }, - { 'item_name': 'two', 'item_type': 'integer' }, - { 'item_name': 'three', 'item_type': 'integer' } ] }, - { 'item_name': 'test_map1', 'item_type': 'map', 'item_default': {'a':1,'b':2,'c':3}, - 'map_item_spec' : [ { 'item_name': 'a', 'item_type': 'integer'}, - { 'item_name': 'b', 'item_type': 'integer'}, - { 'item_name': 'c', 'item_type': 'integer'} ] }, - { 'item_name': 'test_int2', 'item_type': 'integer' }, - { 'item_name': 'test_real2', 'item_type': 'real' }, - { 'item_name': 'test_bool2', 'item_type': 'boolean' }, - { 'item_name': 'test_str2', 'item_type': 'string' }, - { 'item_name': 'test_list2', 'item_type': 'list', - 'list_item_spec' : [ { 'item_name': 'one', 'item_type': 'integer' }, - { 'item_name': 'two', 'item_type': 'integer' }, - { 
'item_name': 'three', 'item_type': 'integer' } ] }, - { 'item_name': 'test_map2', 'item_type': 'map', - 'map_item_spec' : [ { 'item_name': 'A', 'item_type': 'integer'}, - { 'item_name': 'B', 'item_type': 'integer'}, - { 'item_name': 'C', 'item_type': 'integer'} ] }, - { 'item_name': 'test_none', 'item_type': 'none' } - ] - - def setUp(self): - self.const_timestamp = 1308730448.965706 - self.const_timetuple = (2011, 6, 22, 8, 14, 8, 2, 173, 0) - self.const_datetime = '2011-06-22T08:14:08Z' - stats.time = lambda : self.const_timestamp - stats.gmtime = lambda : self.const_timetuple - - def test_get_spec_defaults(self): - self.assertEqual( - stats.get_spec_defaults(self.items), { - 'test_int1' : 12345 , - 'test_real1' : 12345.6789 , - 'test_bool1' : True , - 'test_str1' : 'ABCD' , - 'test_list1' : [1,2,3] , - 'test_map1' : {'a':1,'b':2,'c':3}, - 'test_int2' : 0 , - 'test_real2' : 0.0, - 'test_bool2' : False, - 'test_str2' : "", - 'test_list2' : [0,0,0], - 'test_map2' : { 'A' : 0, 'B' : 0, 'C' : 0 }, - 'test_none' : None }) - self.assertEqual(stats.get_spec_defaults(None), {}) - self.assertRaises(KeyError, stats.get_spec_defaults, [{'item_name':'Foo'}]) - - def test_get_timestamp(self): - self.assertEqual(stats.get_timestamp(), self.const_timestamp) - - def test_get_datetime(self): - self.assertEqual(stats.get_datetime(), self.const_datetime) - self.assertNotEqual(stats.get_datetime( - (2011, 6, 22, 8, 23, 40, 2, 173, 0)), self.const_datetime) - -class TestCallback(unittest.TestCase): - def setUp(self): - self.dummy_func = lambda *x, **y : (x, y) - self.dummy_args = (1,2,3) - self.dummy_kwargs = {'a':1,'b':2,'c':3} - self.cback1 = stats.Callback( - command=self.dummy_func, - args=self.dummy_args, - kwargs=self.dummy_kwargs - ) - self.cback2 = stats.Callback( - args=self.dummy_args, - kwargs=self.dummy_kwargs - ) - self.cback3 = stats.Callback( - command=self.dummy_func, - kwargs=self.dummy_kwargs - ) - self.cback4 = stats.Callback( - command=self.dummy_func, - 
args=self.dummy_args - ) - - def test_init(self): - self.assertEqual((self.cback1.command, self.cback1.args, self.cback1.kwargs), - (self.dummy_func, self.dummy_args, self.dummy_kwargs)) - self.assertEqual((self.cback2.command, self.cback2.args, self.cback2.kwargs), - (None, self.dummy_args, self.dummy_kwargs)) - self.assertEqual((self.cback3.command, self.cback3.args, self.cback3.kwargs), - (self.dummy_func, (), self.dummy_kwargs)) - self.assertEqual((self.cback4.command, self.cback4.args, self.cback4.kwargs), - (self.dummy_func, self.dummy_args, {})) - - def test_call(self): - self.assertEqual(self.cback1(), (self.dummy_args, self.dummy_kwargs)) - self.assertEqual(self.cback1(100, 200), ((100, 200), self.dummy_kwargs)) - self.assertEqual(self.cback1(a=100, b=200), (self.dummy_args, {'a':100, 'b':200})) - self.assertEqual(self.cback2(), None) - self.assertEqual(self.cback3(), ((), self.dummy_kwargs)) - self.assertEqual(self.cback3(100, 200), ((100, 200), self.dummy_kwargs)) - self.assertEqual(self.cback3(a=100, b=200), ((), {'a':100, 'b':200})) - self.assertEqual(self.cback4(), (self.dummy_args, {})) - self.assertEqual(self.cback4(100, 200), ((100, 200), {})) - self.assertEqual(self.cback4(a=100, b=200), (self.dummy_args, {'a':100, 'b':200})) +if "B10_FROM_SOURCE" in os.environ: + TEST_SPECFILE_LOCATION = os.environ["B10_FROM_SOURCE"] +\ + "/src/bin/stats/tests/testdata/stats_test.spec" +else: + TEST_SPECFILE_LOCATION = "./testdata/stats_test.spec" class TestStats(unittest.TestCase): + def setUp(self): - self.base = BaseModules() - self.stats = stats.Stats() - self.const_timestamp = 1308730448.965706 - self.const_datetime = '2011-06-22T08:14:08Z' - self.const_default_datetime = '1970-01-01T00:00:00Z' + self.session = Session() + self.subject = SessionSubject(session=self.session) + self.listener = CCSessionListener(self.subject) + self.stats_spec = self.listener.cc_session.get_module_spec().get_config_spec() + self.module_name = 
self.listener.cc_session.get_module_spec().get_module_name() + self.stats_data = { + 'report_time' : get_datetime(), + 'bind10.boot_time' : "1970-01-01T00:00:00Z", + 'stats.timestamp' : get_timestamp(), + 'stats.lname' : self.session.lname, + 'auth.queries.tcp': 0, + 'auth.queries.udp': 0, + "stats.boot_time": get_datetime(), + "stats.start_time": get_datetime(), + "stats.last_update_time": get_datetime() + } + # check starting + self.assertFalse(self.subject.running) + self.subject.start() + self.assertTrue(self.subject.running) + self.assertEqual(len(self.session.message_queue), 0) + self.assertEqual(self.module_name, 'Stats') def tearDown(self): - self.base.shutdown() + # check closing + self.subject.stop() + self.assertFalse(self.subject.running) + self.subject.detach(self.listener) + self.listener.stop() + self.session.close() - def test_init(self): - self.assertEqual(self.stats.module_name, 'Stats') - self.assertFalse(self.stats.running) - self.assertTrue('command_show' in self.stats.callbacks) - self.assertTrue('command_status' in self.stats.callbacks) - self.assertTrue('command_shutdown' in self.stats.callbacks) - self.assertTrue('command_show' in self.stats.callbacks) - self.assertTrue('command_showschema' in self.stats.callbacks) - self.assertTrue('command_set' in self.stats.callbacks) + def test_local_func(self): + """ + Test for local function + + """ + # test for result_ok + self.assertEqual(type(result_ok()), dict) + self.assertEqual(result_ok(), {'result': [0]}) + self.assertEqual(result_ok(1), {'result': [1]}) + self.assertEqual(result_ok(0,'OK'), {'result': [0, 'OK']}) + self.assertEqual(result_ok(1,'Not good'), {'result': [1, 'Not good']}) + self.assertEqual(result_ok(None,"It's None"), {'result': [None, "It's None"]}) + self.assertNotEqual(result_ok(), {'RESULT': [0]}) - def test_init_undefcmd(self): - spec_str = """\ -{ - "module_spec": { - "module_name": "Stats", - "module_description": "Stats daemon", - "config_data": [], - "commands": [ - { - 
"command_name": "_undef_command_", - "command_description": "a undefined command in stats", - "command_args": [] - } - ], - "statistics": [] - } -} -""" - orig_spec_location = stats.SPECFILE_LOCATION - stats.SPECFILE_LOCATION = io.StringIO(spec_str) - self.assertRaises(stats.StatsError, stats.Stats) - stats.SPECFILE_LOCATION = orig_spec_location + # test for get_timestamp + self.assertEqual(get_timestamp(), _TEST_TIME_SECS) - def test_start(self): - # start without err - statsserver = ThreadingServerManager(MyStats) - statsd = statsserver.server - self.assertFalse(statsd.running) - statsserver.run() - self.assertTrue(statsd.running) - statsserver.shutdown() - self.assertFalse(statsd.running) + # test for get_datetime + self.assertEqual(get_datetime(), _TEST_TIME_STRF) - # start with err - statsd = stats.Stats() - statsd.update_statistics_data = lambda x,**y: ['an error'] - self.assertRaises(stats.StatsError, statsd.start) + def test_show_command(self): + """ + Test for show command + + """ + # test show command without arg + self.session.group_sendmsg({"command": [ "show", None ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + # ignore under 0.9 seconds + self.assertEqual(result_ok(0, self.stats_data), result_data) + self.assertEqual(len(self.session.message_queue), 0) - def test_handlers(self): - # config_handler - self.assertEqual(self.stats.config_handler({'foo':'bar'}), - isc.config.create_answer(0)) + # test show command with arg + self.session.group_sendmsg({"command": [ "show", {"stats_item_name": "stats.lname"}]}, "Stats") + self.assertEqual(len(self.subject.session.message_queue), 1) + self.subject.check() + result_data = self.subject.session.get_message("Stats", None) + self.assertEqual(result_ok(0, {'stats.lname': self.stats_data['stats.lname']}), + result_data) + self.assertEqual(len(self.subject.session.message_queue), 0) - # command_handler - statsserver 
= ThreadingServerManager(MyStats) - statsserver.run() + # test show command with arg which has wrong name + self.session.group_sendmsg({"command": [ "show", {"stats_item_name": "stats.dummy"}]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + # ignore under 0.9 seconds + self.assertEqual(result_ok(0, self.stats_data), result_data) + self.assertEqual(len(self.session.message_queue), 0) + + def test_set_command(self): + """ + Test for set command + + """ + # test set command + self.stats_data['auth.queries.udp'] = 54321 + self.assertEqual(self.stats_data['auth.queries.udp'], 54321) + self.assertEqual(self.stats_data['auth.queries.tcp'], 0) + self.session.group_sendmsg({ "command": [ + "set", { + 'stats_data': {'auth.queries.udp': 54321 } + } ] }, + "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + # test show command + self.session.group_sendmsg({"command": [ "show", None ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertEqual(result_ok(0, self.stats_data), result_data) + self.assertEqual(len(self.session.message_queue), 0) + + # test set command 2 + self.stats_data['auth.queries.udp'] = 0 + self.assertEqual(self.stats_data['auth.queries.udp'], 0) + self.assertEqual(self.stats_data['auth.queries.tcp'], 0) + self.session.group_sendmsg({ "command": [ "set", {'stats_data': {'auth.queries.udp': 0}} ]}, + "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + # test show command 2 + self.session.group_sendmsg({"command": [ "show", None 
]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertEqual(result_ok(0, self.stats_data), result_data) + self.assertEqual(len(self.session.message_queue), 0) + + # test set command 3 + self.stats_data['auth.queries.tcp'] = 54322 + self.assertEqual(self.stats_data['auth.queries.udp'], 0) + self.assertEqual(self.stats_data['auth.queries.tcp'], 54322) + self.session.group_sendmsg({ "command": [ + "set", { + 'stats_data': {'auth.queries.tcp': 54322 } + } ] }, + "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + # test show command 3 + self.session.group_sendmsg({"command": [ "show", None ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertEqual(result_ok(0, self.stats_data), result_data) + self.assertEqual(len(self.session.message_queue), 0) + + def test_remove_command(self): + """ + Test for remove command + + """ + self.session.group_sendmsg({"command": + [ "remove", {"stats_item_name": 'bind10.boot_time' }]}, + "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + self.assertEqual(self.stats_data.pop('bind10.boot_time'), "1970-01-01T00:00:00Z") + self.assertFalse('bind10.boot_time' in self.stats_data) + + # test show command with arg + self.session.group_sendmsg({"command": + [ "show", {"stats_item_name": 'bind10.boot_time'}]}, + "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertFalse('bind10.boot_time' in 
result_data['result'][1]) + self.assertEqual(result_ok(0, self.stats_data), result_data) + self.assertEqual(len(self.session.message_queue), 0) + + def test_reset_command(self): + """ + Test for reset command + + """ + self.session.group_sendmsg({"command": [ "reset" ] }, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + # test show command + self.session.group_sendmsg({"command": [ "show" ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertEqual(result_ok(0, self.stats_data), result_data) + self.assertEqual(len(self.session.message_queue), 0) + + def test_status_command(self): + """ + Test for status command + + """ + self.session.group_sendmsg({"command": [ "status" ] }, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(0, "I'm alive."), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + def test_unknown_command(self): + """ + Test for unknown command + + """ + self.session.group_sendmsg({"command": [ "hoge", None ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(1, "Unknown command: 'hoge'"), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + def test_shutdown_command(self): + """ + Test for shutdown command + + """ + self.session.group_sendmsg({"command": [ "shutdown", None ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.assertTrue(self.subject.running) + self.subject.check() + self.assertFalse(self.subject.running) + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + 
self.assertEqual(len(self.session.message_queue), 0) + + + def test_some_commands(self): + """ + Test for some commands in a row + + """ + # test set command + self.stats_data['bind10.boot_time'] = '2010-08-02T14:47:56Z' + self.assertEqual(self.stats_data['bind10.boot_time'], '2010-08-02T14:47:56Z') + self.session.group_sendmsg({ "command": [ + "set", { + 'stats_data': {'bind10.boot_time': '2010-08-02T14:47:56Z' } + }]}, + "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + # check its value + self.session.group_sendmsg({ "command": [ + "show", { 'stats_item_name': 'bind10.boot_time' } + ] }, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertEqual(result_ok(0, {'bind10.boot_time': '2010-08-02T14:47:56Z'}), + result_data) + self.assertEqual(result_ok(0, {'bind10.boot_time': self.stats_data['bind10.boot_time']}), + result_data) + self.assertEqual(len(self.session.message_queue), 0) + + # test set command 2nd + self.stats_data['auth.queries.udp'] = 98765 + self.assertEqual(self.stats_data['auth.queries.udp'], 98765) + self.session.group_sendmsg({ "command": [ + "set", { 'stats_data': { + 'auth.queries.udp': + self.stats_data['auth.queries.udp'] + } } + ] }, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + # check its value + self.session.group_sendmsg({"command": [ + "show", {'stats_item_name': 'auth.queries.udp'} + ] }, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertEqual(result_ok(0, {'auth.queries.udp': 98765}), + 
result_data) + self.assertEqual(result_ok(0, {'auth.queries.udp': self.stats_data['auth.queries.udp']}), + result_data) + self.assertEqual(len(self.session.message_queue), 0) + + # test set command 3 + self.stats_data['auth.queries.tcp'] = 4321 + self.session.group_sendmsg({"command": [ + "set", + {'stats_data': {'auth.queries.tcp': 4321 }} ]}, + "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + # check value + self.session.group_sendmsg({"command": [ "show", {'stats_item_name': 'auth.queries.tcp'} ]}, + "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertEqual(result_ok(0, {'auth.queries.tcp': 4321}), + result_data) + self.assertEqual(result_ok(0, {'auth.queries.tcp': self.stats_data['auth.queries.tcp']}), + result_data) + self.assertEqual(len(self.session.message_queue), 0) + + self.session.group_sendmsg({"command": [ "show", {'stats_item_name': 'auth.queries.udp'} ]}, + "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertEqual(result_ok(0, {'auth.queries.udp': 98765}), + result_data) + self.assertEqual(result_ok(0, {'auth.queries.udp': self.stats_data['auth.queries.udp']}), + result_data) + self.assertEqual(len(self.session.message_queue), 0) + + # test set command 4 + self.stats_data['auth.queries.tcp'] = 67890 + self.session.group_sendmsg({"command": [ + "set", {'stats_data': {'auth.queries.tcp': 67890 }} ]}, + "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + # test show command for all values + 
self.session.group_sendmsg({"command": [ "show", None ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertEqual(result_ok(0, self.stats_data), result_data) + self.assertEqual(len(self.session.message_queue), 0) + + def test_some_commands2(self): + """ + Test for some commands in a row using list-type value + + """ + self.stats_data['listtype'] = [1, 2, 3] + self.assertEqual(self.stats_data['listtype'], [1, 2, 3]) + self.session.group_sendmsg({ "command": [ + "set", {'stats_data': {'listtype': [1, 2, 3] }} + ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + # check its value + self.session.group_sendmsg({ "command": [ + "show", { 'stats_item_name': 'listtype'} + ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + self.assertEqual(result_ok(0, {'listtype': [1, 2, 3]}), + result_data) + self.assertEqual(result_ok(0, {'listtype': self.stats_data['listtype']}), + result_data) + self.assertEqual(len(self.session.message_queue), 0) + + # test set list-type value + self.assertEqual(self.stats_data['listtype'], [1, 2, 3]) + self.session.group_sendmsg({"command": [ + "set", {'stats_data': {'listtype': [3, 2, 1, 0] }} + ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + self.assertEqual(len(self.session.message_queue), 0) + + # check its value + self.session.group_sendmsg({ "command": [ + "show", { 'stats_item_name': 'listtype' } + ] }, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + result_data = self.session.get_message("Stats", None) + 
self.assertEqual(result_ok(0, {'listtype': [3, 2, 1, 0]}),
+                         result_data)
+        self.assertEqual(len(self.session.message_queue), 0)
+
+    def test_some_commands3(self):
+        """
+        Test for some commands in a row using dictionary-type value
+
+        """
+        self.stats_data['dicttype'] = {"a": 1, "b": 2, "c": 3}
+        self.assertEqual(self.stats_data['dicttype'], {"a": 1, "b": 2, "c": 3})
+        self.session.group_sendmsg({ "command": [
+                    "set", {
+                        'stats_data': {'dicttype': {"a": 1, "b": 2, "c": 3} }
+                        }]},
+                                   "Stats")
+        self.assertEqual(len(self.session.message_queue), 1)
+        self.subject.check()
+        self.assertEqual(result_ok(),
+                         self.session.get_message("Stats", None))
+        self.assertEqual(len(self.session.message_queue), 0)
+
+        # check its value
+        self.session.group_sendmsg({ "command": [ "show", { 'stats_item_name': 'dicttype' } ]}, "Stats")
+        self.assertEqual(len(self.session.message_queue), 1)
+        self.subject.check()
+        result_data = self.session.get_message("Stats", None)
+        self.assertEqual(result_ok(0, {'dicttype': {"a": 1, "b": 2, "c": 3}}),
+                         result_data)
+        self.assertEqual(result_ok(0, {'dicttype': self.stats_data['dicttype']}),
+                         result_data)
+        self.assertEqual(len(self.session.message_queue), 0)
+
+        # test set dict-type value
+        self.assertEqual(self.stats_data['dicttype'], {"a": 1, "b": 2, "c": 3})
+        self.session.group_sendmsg({"command": [
+                    "set", {'stats_data': {'dicttype': {"a": 3, "b": 2, "c": 1, "d": 0} }} ]},
+                                   "Stats")
+        self.assertEqual(len(self.session.message_queue), 1)
+        self.subject.check()
+        self.assertEqual(result_ok(),
+                         self.session.get_message("Stats", None))
+        self.assertEqual(len(self.session.message_queue), 0)
+
+        # check its value
+        self.session.group_sendmsg({ "command": [ "show", { 'stats_item_name': 'dicttype' }]}, "Stats")
+        self.assertEqual(len(self.session.message_queue), 1)
+        self.subject.check()
+        result_data = self.session.get_message("Stats", None)
+        self.assertEqual(result_ok(0, {'dicttype': {"a": 3, "b": 2, "c": 1, "d": 0} }),
+                         result_data)
+
self.assertEqual(len(self.session.message_queue), 0) + + def test_config_update(self): + """ + Test for config update + + """ + # test show command without arg + self.session.group_sendmsg({"command": [ "config_update", {"x-version":999} ]}, "Stats") + self.assertEqual(len(self.session.message_queue), 1) + self.subject.check() + self.assertEqual(result_ok(), + self.session.get_message("Stats", None)) + + def test_for_boss(self): + last_queue = self.session.old_message_queue.pop() self.assertEqual( - send_command( - 'show', 'Stats', - params={ 'owner' : 'Boss', - 'name' : 'boot_time' }), - (0, self.const_datetime)) + last_queue.msg, {'command': ['sendstats']}) self.assertEqual( - send_command( - 'set', 'Stats', - params={ 'owner' : 'Boss', - 'data' : { 'boot_time' : self.const_datetime } }), - (0, None)) - self.assertEqual( - send_command( - 'show', 'Stats', - params={ 'owner' : 'Boss', - 'name' : 'boot_time' }), - (0, self.const_datetime)) - self.assertEqual( - send_command('status', 'Stats'), - (0, "Stats is up. 
(PID " + str(os.getpid()) + ")")) + last_queue.env['group'], 'Boss') - (rcode, value) = send_command('show', 'Stats') - self.assertEqual(rcode, 0) - self.assertEqual(len(value), 3) - self.assertTrue('Boss' in value) - self.assertTrue('Stats' in value) - self.assertTrue('Auth' in value) - self.assertEqual(len(value['Stats']), 5) - self.assertEqual(len(value['Boss']), 1) - self.assertTrue('boot_time' in value['Boss']) - self.assertEqual(value['Boss']['boot_time'], self.const_datetime) - self.assertTrue('report_time' in value['Stats']) - self.assertTrue('boot_time' in value['Stats']) - self.assertTrue('last_update_time' in value['Stats']) - self.assertTrue('timestamp' in value['Stats']) - self.assertTrue('lname' in value['Stats']) - (rcode, value) = send_command('showschema', 'Stats') - self.assertEqual(rcode, 0) - self.assertEqual(len(value), 3) - self.assertTrue('Boss' in value) - self.assertTrue('Stats' in value) - self.assertTrue('Auth' in value) - self.assertEqual(len(value['Stats']), 5) - self.assertEqual(len(value['Boss']), 1) - for item in value['Boss']: - self.assertTrue(len(item) == 7) - self.assertTrue('item_name' in item) - self.assertTrue('item_type' in item) - self.assertTrue('item_optional' in item) - self.assertTrue('item_default' in item) - self.assertTrue('item_title' in item) - self.assertTrue('item_description' in item) - self.assertTrue('item_format' in item) - for item in value['Stats']: - self.assertTrue(len(item) == 6 or len(item) == 7) - self.assertTrue('item_name' in item) - self.assertTrue('item_type' in item) - self.assertTrue('item_optional' in item) - self.assertTrue('item_default' in item) - self.assertTrue('item_title' in item) - self.assertTrue('item_description' in item) - if len(item) == 7: - self.assertTrue('item_format' in item) +class TestStats2(unittest.TestCase): - self.assertEqual( - send_command('__UNKNOWN__', 'Stats'), - (1, "Unknown command: '__UNKNOWN__'")) + def setUp(self): + self.session = Session() + self.subject = 
SessionSubject(session=self.session) + self.listener = CCSessionListener(self.subject) + self.module_name = self.listener.cc_session.get_module_spec().get_module_name() + # check starting + self.assertFalse(self.subject.running) + self.subject.start() + self.assertTrue(self.subject.running) + self.assertEqual(len(self.session.message_queue), 0) + self.assertEqual(self.module_name, 'Stats') - statsserver.shutdown() + def tearDown(self): + # check closing + self.subject.stop() + self.assertFalse(self.subject.running) + self.subject.detach(self.listener) + self.listener.stop() - def test_update_modules(self): - self.assertEqual(len(self.stats.modules), 0) - self.stats.update_modules() - self.assertTrue('Stats' in self.stats.modules) - self.assertTrue('Boss' in self.stats.modules) - self.assertFalse('Dummy' in self.stats.modules) - my_statistics_data = stats.get_spec_defaults(self.stats.modules['Stats'].get_statistics_spec()) - self.assertTrue('report_time' in my_statistics_data) - self.assertTrue('boot_time' in my_statistics_data) - self.assertTrue('last_update_time' in my_statistics_data) - self.assertTrue('timestamp' in my_statistics_data) - self.assertTrue('lname' in my_statistics_data) - self.assertEqual(my_statistics_data['report_time'], self.const_default_datetime) - self.assertEqual(my_statistics_data['boot_time'], self.const_default_datetime) - self.assertEqual(my_statistics_data['last_update_time'], self.const_default_datetime) - self.assertEqual(my_statistics_data['timestamp'], 0.0) - self.assertEqual(my_statistics_data['lname'], "") - my_statistics_data = stats.get_spec_defaults(self.stats.modules['Boss'].get_statistics_spec()) - self.assertTrue('boot_time' in my_statistics_data) - self.assertEqual(my_statistics_data['boot_time'], self.const_default_datetime) - orig_parse_answer = stats.isc.config.ccsession.parse_answer - stats.isc.config.ccsession.parse_answer = lambda x: (99, 'error') - self.assertRaises(stats.StatsError, self.stats.update_modules) - 
stats.isc.config.ccsession.parse_answer = orig_parse_answer + def test_specfile(self): + """ + Test for specfile + + """ + if "B10_FROM_SOURCE" in os.environ: + self.assertEqual(stats.SPECFILE_LOCATION, + os.environ["B10_FROM_SOURCE"] + os.sep + \ + "src" + os.sep + "bin" + os.sep + "stats" + \ + os.sep + "stats.spec") + self.assertEqual(stats.SCHEMA_SPECFILE_LOCATION, + os.environ["B10_FROM_SOURCE"] + os.sep + \ + "src" + os.sep + "bin" + os.sep + "stats" + \ + os.sep + "stats-schema.spec") + imp.reload(stats) + # change path of SPECFILE_LOCATION + stats.SPECFILE_LOCATION = TEST_SPECFILE_LOCATION + stats.SCHEMA_SPECFILE_LOCATION = TEST_SPECFILE_LOCATION + self.assertEqual(stats.SPECFILE_LOCATION, TEST_SPECFILE_LOCATION) + self.subject = stats.SessionSubject(session=self.session) + self.session = self.subject.session + self.listener = stats.CCSessionListener(self.subject) - def test_get_statistics_data(self): - my_statistics_data = self.stats.get_statistics_data() - self.assertTrue('Stats' in my_statistics_data) - self.assertTrue('Boss' in my_statistics_data) - my_statistics_data = self.stats.get_statistics_data(owner='Stats') - self.assertTrue('report_time' in my_statistics_data) - self.assertTrue('boot_time' in my_statistics_data) - self.assertTrue('last_update_time' in my_statistics_data) - self.assertTrue('timestamp' in my_statistics_data) - self.assertTrue('lname' in my_statistics_data) - self.assertRaises(stats.StatsError, self.stats.get_statistics_data, owner='Foo') - my_statistics_data = self.stats.get_statistics_data(owner='Stats') - self.assertTrue('boot_time' in my_statistics_data) - my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='report_time') - self.assertEqual(my_statistics_data, self.const_default_datetime) - my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='boot_time') - self.assertEqual(my_statistics_data, self.const_default_datetime) - my_statistics_data = 
self.stats.get_statistics_data(owner='Stats', name='last_update_time') - self.assertEqual(my_statistics_data, self.const_default_datetime) - my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='timestamp') - self.assertEqual(my_statistics_data, 0.0) - my_statistics_data = self.stats.get_statistics_data(owner='Stats', name='lname') - self.assertEqual(my_statistics_data, '') - self.assertRaises(stats.StatsError, self.stats.get_statistics_data, - owner='Stats', name='Bar') - self.assertRaises(stats.StatsError, self.stats.get_statistics_data, - owner='Foo', name='Bar') - self.assertRaises(stats.StatsError, self.stats.get_statistics_data, - name='Bar') + self.assertEqual(self.listener.stats_spec, []) + self.assertEqual(self.listener.stats_data, {}) - def test_update_statistics_data(self): - self.stats.update_statistics_data(owner='Stats', lname='foo@bar') - self.assertTrue('Stats' in self.stats.statistics_data) - my_statistics_data = self.stats.statistics_data['Stats'] - self.assertEqual(my_statistics_data['lname'], 'foo@bar') - self.stats.update_statistics_data(owner='Stats', last_update_time=self.const_datetime) - self.assertTrue('Stats' in self.stats.statistics_data) - my_statistics_data = self.stats.statistics_data['Stats'] - self.assertEqual(my_statistics_data['last_update_time'], self.const_datetime) - self.assertEqual(self.stats.update_statistics_data(owner='Stats', lname=0.0), - ['0.0 should be a string']) - self.assertEqual(self.stats.update_statistics_data(owner='Dummy', foo='bar'), - ['unknown module name: Dummy']) + self.assertEqual(self.listener.commands_spec, [ + { + "command_name": "status", + "command_description": "identify whether stats module is alive or not", + "command_args": [] + }, + { + "command_name": "the_dummy", + "command_description": "this is for testing", + "command_args": [] + }]) - def test_commands(self): - # status - self.assertEqual(self.stats.command_status(), - isc.config.create_answer( - 0, "Stats is up. 
(PID " + str(os.getpid()) + ")")) + def test_func_initialize_data(self): + """ + Test for initialize_data function + + """ + # prepare for sample data set + stats_spec = [ + { + "item_name": "none_sample", + "item_type": "null", + "item_default": "None" + }, + { + "item_name": "boolean_sample", + "item_type": "boolean", + "item_default": True + }, + { + "item_name": "string_sample", + "item_type": "string", + "item_default": "A something" + }, + { + "item_name": "int_sample", + "item_type": "integer", + "item_default": 9999999 + }, + { + "item_name": "real_sample", + "item_type": "real", + "item_default": 0.0009 + }, + { + "item_name": "list_sample", + "item_type": "list", + "item_default": [0, 1, 2, 3, 4], + "list_item_spec": [] + }, + { + "item_name": "map_sample", + "item_type": "map", + "item_default": {'name':'value'}, + "map_item_spec": [] + }, + { + "item_name": "other_sample", + "item_type": "__unknown__", + "item_default": "__unknown__" + } + ] + # data for comparison + stats_data = { + 'none_sample': None, + 'boolean_sample': True, + 'string_sample': 'A something', + 'int_sample': 9999999, + 'real_sample': 0.0009, + 'list_sample': [0, 1, 2, 3, 4], + 'map_sample': {'name':'value'}, + 'other_sample': '__unknown__' + } + self.assertEqual(self.listener.initialize_data(stats_spec), stats_data) - # shutdown - self.stats.running = True - self.assertEqual(self.stats.command_shutdown(), - isc.config.create_answer(0)) - self.assertFalse(self.stats.running) + def test_func_main(self): + # explicitly make failed + self.session.close() + stats.main(session=self.session) - def test_command_show(self): - self.assertEqual(self.stats.command_show(owner='Foo', name=None), - isc.config.create_answer( - 1, "specified arguments are incorrect: owner: Foo, name: None")) - self.assertEqual(self.stats.command_show(owner='Foo', name='_bar_'), - isc.config.create_answer( - 1, "specified arguments are incorrect: owner: Foo, name: _bar_")) - 
self.assertEqual(self.stats.command_show(owner='Foo', name='bar'), - isc.config.create_answer( - 1, "specified arguments are incorrect: owner: Foo, name: bar")) - self.assertEqual(self.stats.command_show(owner='Auth'), - isc.config.create_answer( - 0, {'queries.tcp': 0, 'queries.udp': 0})) - self.assertEqual(self.stats.command_show(owner='Auth', name='queries.udp'), - isc.config.create_answer( - 0, 0)) - orig_get_timestamp = stats.get_timestamp - orig_get_datetime = stats.get_datetime - stats.get_timestamp = lambda : self.const_timestamp - stats.get_datetime = lambda : self.const_datetime - self.assertEqual(stats.get_timestamp(), self.const_timestamp) - self.assertEqual(stats.get_datetime(), self.const_datetime) - self.assertEqual(self.stats.command_show(owner='Stats', name='report_time'), \ - isc.config.create_answer(0, self.const_datetime)) - self.assertEqual(self.stats.statistics_data['Stats']['timestamp'], self.const_timestamp) - self.assertEqual(self.stats.statistics_data['Stats']['boot_time'], self.const_default_datetime) - stats.get_timestamp = orig_get_timestamp - stats.get_datetime = orig_get_datetime - self.stats.mccs.specification = isc.config.module_spec.ModuleSpec( - { "module_name": self.stats.module_name, - "statistics": [] } ) - self.assertRaises( - stats.StatsError, self.stats.command_show, owner='Foo', name='bar') - - def test_command_showchema(self): - (rcode, value) = isc.config.ccsession.parse_answer( - self.stats.command_showschema()) - self.assertEqual(rcode, 0) - self.assertEqual(len(value), 3) - self.assertTrue('Stats' in value) - self.assertTrue('Boss' in value) - self.assertTrue('Auth' in value) - self.assertFalse('__Dummy__' in value) - schema = value['Stats'] - self.assertEqual(len(schema), 5) - for item in schema: - self.assertTrue(len(item) == 6 or len(item) == 7) - self.assertTrue('item_name' in item) - self.assertTrue('item_type' in item) - self.assertTrue('item_optional' in item) - self.assertTrue('item_default' in item) - 
self.assertTrue('item_title' in item) - self.assertTrue('item_description' in item) - if len(item) == 7: - self.assertTrue('item_format' in item) - - schema = value['Boss'] - self.assertEqual(len(schema), 1) - for item in schema: - self.assertTrue(len(item) == 7) - self.assertTrue('item_name' in item) - self.assertTrue('item_type' in item) - self.assertTrue('item_optional' in item) - self.assertTrue('item_default' in item) - self.assertTrue('item_title' in item) - self.assertTrue('item_description' in item) - self.assertTrue('item_format' in item) - - schema = value['Auth'] - self.assertEqual(len(schema), 2) - for item in schema: - self.assertTrue(len(item) == 6) - self.assertTrue('item_name' in item) - self.assertTrue('item_type' in item) - self.assertTrue('item_optional' in item) - self.assertTrue('item_default' in item) - self.assertTrue('item_title' in item) - self.assertTrue('item_description' in item) - - (rcode, value) = isc.config.ccsession.parse_answer( - self.stats.command_showschema(owner='Stats')) - self.assertEqual(rcode, 0) - self.assertFalse('Stats' in value) - self.assertFalse('Boss' in value) - self.assertFalse('Auth' in value) - for item in value: - self.assertTrue(len(item) == 6 or len(item) == 7) - self.assertTrue('item_name' in item) - self.assertTrue('item_type' in item) - self.assertTrue('item_optional' in item) - self.assertTrue('item_default' in item) - self.assertTrue('item_title' in item) - self.assertTrue('item_description' in item) - if len(item) == 7: - self.assertTrue('item_format' in item) - - (rcode, value) = isc.config.ccsession.parse_answer( - self.stats.command_showschema(owner='Stats', name='report_time')) - self.assertEqual(rcode, 0) - self.assertFalse('Stats' in value) - self.assertFalse('Boss' in value) - self.assertFalse('Auth' in value) - self.assertTrue(len(value) == 7) - self.assertTrue('item_name' in value) - self.assertTrue('item_type' in value) - self.assertTrue('item_optional' in value) - 
self.assertTrue('item_default' in value) - self.assertTrue('item_title' in value) - self.assertTrue('item_description' in value) - self.assertTrue('item_format' in value) - self.assertEqual(value['item_name'], 'report_time') - self.assertEqual(value['item_format'], 'date-time') - - self.assertEqual(self.stats.command_showschema(owner='Foo'), - isc.config.create_answer( - 1, "specified arguments are incorrect: owner: Foo, name: None")) - self.assertEqual(self.stats.command_showschema(owner='Foo', name='bar'), - isc.config.create_answer( - 1, "specified arguments are incorrect: owner: Foo, name: bar")) - self.assertEqual(self.stats.command_showschema(owner='Auth'), - isc.config.create_answer( - 0, [{ - "item_default": 0, - "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially", - "item_name": "queries.tcp", - "item_optional": False, - "item_title": "Queries TCP", - "item_type": "integer" - }, - { - "item_default": 0, - "item_description": "A number of total query counts which all auth servers receive over UDP since they started initially", - "item_name": "queries.udp", - "item_optional": False, - "item_title": "Queries UDP", - "item_type": "integer" - }])) - self.assertEqual(self.stats.command_showschema(owner='Auth', name='queries.tcp'), - isc.config.create_answer( - 0, { - "item_default": 0, - "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially", - "item_name": "queries.tcp", - "item_optional": False, - "item_title": "Queries TCP", - "item_type": "integer" - })) - - self.assertEqual(self.stats.command_showschema(owner='Stats', name='bar'), - isc.config.create_answer( - 1, "specified arguments are incorrect: owner: Stats, name: bar")) - self.assertEqual(self.stats.command_showschema(name='bar'), - isc.config.create_answer( - 1, "module name is not specified")) - - def test_command_set(self): - orig_get_datetime = stats.get_datetime 
- stats.get_datetime = lambda : self.const_datetime - (rcode, value) = isc.config.ccsession.parse_answer( - self.stats.command_set(owner='Boss', - data={ 'boot_time' : self.const_datetime })) - stats.get_datetime = orig_get_datetime - self.assertEqual(rcode, 0) - self.assertTrue(value is None) - self.assertEqual(self.stats.statistics_data['Boss']['boot_time'], - self.const_datetime) - self.assertEqual(self.stats.statistics_data['Stats']['last_update_time'], - self.const_datetime) - self.assertEqual(self.stats.command_set(owner='Stats', - data={ 'lname' : 'foo@bar' }), - isc.config.create_answer(0, None)) - self.stats.statistics_data['Stats'] = {} - self.stats.mccs.specification = isc.config.module_spec.ModuleSpec( - { "module_name": self.stats.module_name, - "statistics": [] } ) - self.assertEqual(self.stats.command_set(owner='Stats', - data={ 'lname' : '_foo_@_bar_' }), - isc.config.create_answer( - 1, - "errors while setting statistics data: unknown item lname")) - self.stats.statistics_data['Stats'] = {} - self.stats.mccs.specification = isc.config.module_spec.ModuleSpec( - { "module_name": self.stats.module_name } ) - self.assertEqual(self.stats.command_set(owner='Stats', - data={ 'lname' : '_foo_@_bar_' }), - isc.config.create_answer( - 1, - "errors while setting statistics data: No statistics specification")) - self.stats.statistics_data['Stats'] = {} - self.stats.mccs.specification = isc.config.module_spec.ModuleSpec( - { "module_name": self.stats.module_name, - "statistics": [ - { - "item_name": "dummy", - "item_type": "string", - "item_optional": False, - "item_default": "", - "item_title": "Local Name", - "item_description": "brabra" - } ] } ) - self.assertRaises(stats.StatsError, - self.stats.command_set, owner='Stats', data={ 'dummy' : '_xxxx_yyyy_zzz_' }) - -class TestOSEnv(unittest.TestCase): def test_osenv(self): """ - test for the environ variable "B10_FROM_SOURCE" - "B10_FROM_SOURCE" is set in Makefile + test for not having environ 
"B10_FROM_SOURCE"
         """
-        # test case having B10_FROM_SOURCE
-        self.assertTrue("B10_FROM_SOURCE" in os.environ)
-        self.assertEqual(stats.SPECFILE_LOCATION, \
-                         os.environ["B10_FROM_SOURCE"] + os.sep + \
-                         "src" + os.sep + "bin" + os.sep + "stats" + \
-                         os.sep + "stats.spec")
-        # test case not having B10_FROM_SOURCE
-        path = os.environ["B10_FROM_SOURCE"]
-        os.environ.pop("B10_FROM_SOURCE")
-        self.assertFalse("B10_FROM_SOURCE" in os.environ)
-        # import stats again
-        imp.reload(stats)
-        # revert the changes
-        os.environ["B10_FROM_SOURCE"] = path
-        imp.reload(stats)
+        if "B10_FROM_SOURCE" in os.environ:
+            path = os.environ["B10_FROM_SOURCE"]
+            os.environ.pop("B10_FROM_SOURCE")
+            imp.reload(stats)
+            os.environ["B10_FROM_SOURCE"] = path
+            imp.reload(stats)
 
-def test_main():
-    unittest.main()
+def result_ok(*args):
+    if args:
+        return { 'result': list(args) }
+    else:
+        return { 'result': [ 0 ] }
 
 if __name__ == "__main__":
-    test_main()
+    unittest.main()
diff --git a/src/bin/stats/tests/fake_select.py b/src/bin/stats/tests/fake_select.py
new file mode 100644
index 0000000000..ca0ca82619
--- /dev/null
+++ b/src/bin/stats/tests/fake_select.py
@@ -0,0 +1,43 @@
+# Copyright (C) 2011 Internet Systems Consortium.
+#
+# Permission to use, copy, modify, and distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
+# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ +""" +A mock-up module of select + +*** NOTE *** +It is only for testing stats_httpd module and not reusable for +external module. +""" + +import fake_socket +import errno + +class error(Exception): + pass + +def select(rlst, wlst, xlst, timeout): + if type(timeout) != int and type(timeout) != float: + raise TypeError("Error: %s must be integer or float" + % timeout.__class__.__name__) + for s in rlst + wlst + xlst: + if type(s) != fake_socket.socket: + raise TypeError("Error: %s must be a dummy socket" + % s.__class__.__name__) + s._called = s._called + 1 + if s._called > 3: + raise error("Something is happened!") + elif s._called > 2: + raise error(errno.EINTR) + return (rlst, wlst, xlst) diff --git a/src/bin/stats/tests/fake_socket.py b/src/bin/stats/tests/fake_socket.py new file mode 100644 index 0000000000..4e3a4581a5 --- /dev/null +++ b/src/bin/stats/tests/fake_socket.py @@ -0,0 +1,70 @@ +# Copyright (C) 2011 Internet Systems Consortium. +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +""" +A mock-up module of socket + +*** NOTE *** +It is only for testing stats_httpd module and not reusable for +external module. 
+"""
+
+import re
+
+AF_INET = 'AF_INET'
+AF_INET6 = 'AF_INET6'
+_ADDRFAMILY = AF_INET
+has_ipv6 = True
+_CLOSED = False
+
+class gaierror(Exception):
+    pass
+
+class error(Exception):
+    pass
+
+class socket:
+
+    def __init__(self, family=None):
+        if family is None:
+            self.address_family = _ADDRFAMILY
+        else:
+            self.address_family = family
+        self._closed = _CLOSED
+        if self._closed:
+            raise error('socket is already closed!')
+        self._called = 0
+
+    def close(self):
+        self._closed = True
+
+    def fileno(self):
+        return id(self)
+
+    def bind(self, server_class):
+        (self.server_address, self.server_port) = server_class
+        if self.address_family not in set([AF_INET, AF_INET6]):
+            raise error("Address family not supported by protocol: %s" % self.address_family)
+        if self.address_family == AF_INET6 and not has_ipv6:
+            raise error("Address family not supported in this machine: %s has_ipv6: %s"
+                        % (self.address_family, str(has_ipv6)))
+        if self.address_family == AF_INET and re.search(':', self.server_address) is not None:
+            raise gaierror("Address family for hostname not supported: %s %s" % (self.server_address, self.address_family))
+        if self.address_family == AF_INET6 and re.search(':', self.server_address) is None:
+            raise error("Cannot assign requested address: %s" % str(self.server_address))
+        if type(self.server_port) is not int:
+            raise TypeError("an integer is required: %s" % str(self.server_port))
+        if self.server_port < 0 or self.server_port > 65535:
+            raise OverflowError("port number must be 0-65535: %s" % str(self.server_port))
diff --git a/src/bin/stats/tests/fake_time.py b/src/bin/stats/tests/fake_time.py
new file mode 100644
index 0000000000..65e02371d6
--- /dev/null
+++ b/src/bin/stats/tests/fake_time.py
@@ -0,0 +1,47 @@
+# Copyright (C) 2010 Internet Systems Consortium.
+#
+# Permission to use, copy, modify, and distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
+# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+__version__ = "$Revision$"
+
+# This is a dummy replacement for the standard time module.
+# It is for testing use only; the other methods of the time module
+# are not implemented. (This module does not overload the real
+# time module.)
+
+# These variables are constants used as example values.
+_TEST_TIME_SECS = 1283364938.229088
+_TEST_TIME_STRF = '2010-09-01T18:15:38Z'
+
+def time():
+    """
+    This is a dummy time() method against time.time()
+    """
+    # return float constant value
+    return _TEST_TIME_SECS
+
+def gmtime():
+    """
+    This is a dummy gmtime() method against time.gmtime()
+    """
+    # always return nothing
+    return None
+
+def strftime(*arg):
+    """
+    This is a dummy strftime() method against time.strftime()
+    """
+    return _TEST_TIME_STRF
+
+
diff --git a/src/bin/stats/tests/http/Makefile.am b/src/bin/stats/tests/http/Makefile.am
new file mode 100644
index 0000000000..79263a98b4
--- /dev/null
+++ b/src/bin/stats/tests/http/Makefile.am
@@ -0,0 +1,6 @@
+EXTRA_DIST = __init__.py server.py
+CLEANFILES = __init__.pyc server.pyc
+CLEANDIRS = __pycache__
+
+clean-local:
+	rm -rf $(CLEANDIRS)
diff --git a/src/bin/stats/tests/http/__init__.py b/src/bin/stats/tests/http/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/bin/stats/tests/http/server.py b/src/bin/stats/tests/http/server.py
new file mode 100644
index 0000000000..70ed6faa30
--- /dev/null
+++ b/src/bin/stats/tests/http/server.py
@@ -0,0 +1,96 @@
+# Copyright (C) 2011 Internet Systems Consortium.
+#
+# Permission to use, copy, modify, and distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
+# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""
+A mock-up module of http.server
+
+*** NOTE ***
+It is only for testing the stats_httpd module and is not reusable by
+external modules.
+"""
+
+import fake_socket
+
+class DummyHttpResponse:
+    def __init__(self, path):
+        self.path = path
+        self.headers={}
+        self.log = ""
+
+    def _write_log(self, msg):
+        self.log = self.log + msg
+
+class HTTPServer:
+    """
+    A mock-up class of http.server.HTTPServer
+    """
+    address_family = fake_socket.AF_INET
+    def __init__(self, server_class, handler_class):
+        self.socket = fake_socket.socket(self.address_family)
+        self.server_class = server_class
+        self.socket.bind(self.server_class)
+        self._handler = handler_class(None, None, self)
+
+    def handle_request(self):
+        pass
+
+    def server_close(self):
+        self.socket.close()
+
+class BaseHTTPRequestHandler:
+    """
+    A mock-up class of http.server.BaseHTTPRequestHandler
+    """
+
+    def __init__(self, request, client_address, server):
+        self.path = "/path/to"
+        self.headers = {}
+        self.server = server
+        self.response = DummyHttpResponse(path=self.path)
+        self.response.write = self._write
+        self.wfile = self.response
+
+    def send_response(self, code=0):
+        if self.path != self.response.path:
+            self.response = DummyHttpResponse(path=self.path)
+        self.response.code = code
+
+    def send_header(self, key, value):
+        if self.path != self.response.path:
+            self.response = DummyHttpResponse(path=self.path)
+        self.response.headers[key] = value
+
+    def end_headers(self):
+        if self.path != self.response.path:
+            self.response = DummyHttpResponse(path=self.path)
+        self.response.wrote_headers = True
+
+    def send_error(self, code, message=None):
+        if self.path != self.response.path:
+            self.response = DummyHttpResponse(path=self.path)
+        self.response.code = code
+        self.response.body = message
+
+    def address_string(self):
+        return 'dummyhost'
+
+    def log_date_time_string(self):
+        return '[DD/MM/YYYY HH:MI:SS]'
+
+    def _write(self, obj):
+        if self.path != self.response.path:
+            self.response = DummyHttpResponse(path=self.path)
+        self.response.body = obj.decode()
+
diff --git a/src/bin/stats/tests/isc/Makefile.am b/src/bin/stats/tests/isc/Makefile.am
new file mode 100644
index 0000000000..d31395d404
--- /dev/null
+++ b/src/bin/stats/tests/isc/Makefile.am
@@ -0,0 +1,8 @@
+SUBDIRS = cc config util log
+EXTRA_DIST = __init__.py
+CLEANFILES = __init__.pyc
+
+CLEANDIRS = __pycache__
+
+clean-local:
+	rm -rf $(CLEANDIRS)
diff --git a/src/bin/stats/tests/isc/__init__.py b/src/bin/stats/tests/isc/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/bin/stats/tests/isc/cc/Makefile.am b/src/bin/stats/tests/isc/cc/Makefile.am
new file mode 100644
index 0000000000..67323b5f1b
--- /dev/null
+++ b/src/bin/stats/tests/isc/cc/Makefile.am
@@ -0,0 +1,7 @@
+EXTRA_DIST = __init__.py session.py
+CLEANFILES = __init__.pyc session.pyc
+
+CLEANDIRS = __pycache__
+
+clean-local:
+	rm -rf $(CLEANDIRS)
diff --git a/src/bin/stats/tests/isc/cc/__init__.py b/src/bin/stats/tests/isc/cc/__init__.py
new file mode 100644
index 0000000000..9a3eaf6185
--- /dev/null
+++ b/src/bin/stats/tests/isc/cc/__init__.py
@@ -0,0 +1 @@
+from isc.cc.session import *
diff --git a/src/bin/stats/tests/isc/cc/session.py b/src/bin/stats/tests/isc/cc/session.py
new file mode 100644
index 0000000000..e16d6a9abc
--- /dev/null
+++ b/src/bin/stats/tests/isc/cc/session.py
@@ -0,0 +1,148 @@
+# Copyright (C) 2010,2011 Internet Systems Consortium.
+#
+# Permission to use, copy, modify, and distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
+# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""
+A mock-up module of isc.cc.session
+
+*** NOTE ***
+It is only for testing the stats_httpd module and is not reusable by
+external modules.
+"""
+
+import sys
+import fake_socket
+
+# set a dummy lname
+_TEST_LNAME = '123abc@xxxx'
+
+class Queue():
+    def __init__(self, msg=None, env={}):
+        self.msg = msg
+        self.env = env
+
+    def dump(self):
+        return { 'msg': self.msg, 'env': self.env }
+
+class SessionError(Exception):
+    pass
+
+class SessionTimeout(Exception):
+    pass
+
+class Session:
+    def __init__(self, socket_file=None, verbose=False):
+        self._lname = _TEST_LNAME
+        self.message_queue = []
+        self.old_message_queue = []
+        try:
+            self._socket = fake_socket.socket()
+        except fake_socket.error as se:
+            raise SessionError(se)
+        self.verbose = verbose
+
+    @property
+    def lname(self):
+        return self._lname
+
+    def close(self):
+        self._socket.close()
+
+    def _clear_queues(self):
+        while len(self.message_queue) > 0:
+            self.dequeue()
+
+    def _next_sequence(self, que=None):
+        return len(self.message_queue)
+
+    def enqueue(self, msg=None, env={}):
+        if self._socket._closed:
+            raise SessionError("Session has been closed.")
+        seq = self._next_sequence()
+        env.update({"seq": 0}) # fixed here
+        que = Queue(msg=msg, env=env)
+        self.message_queue.append(que)
+        if self.verbose:
+            sys.stdout.write("[Session] enqueue: " + str(que.dump()) + "\n")
+        return seq
+
+    def dequeue(self):
+        if self._socket._closed:
+            raise SessionError("Session has been closed.")
+        que = None
+        try:
+            que = self.message_queue.pop(0) # always pop at index 0
+            self.old_message_queue.append(que)
+        except IndexError:
+            que = Queue()
+        if self.verbose:
+            sys.stdout.write("[Session] dequeue: " + str(que.dump()) + "\n")
+        return que
+
+    def get_queue(self, seq=None):
+        if self._socket._closed:
+            raise SessionError("Session has been closed.")
+        if seq is None:
+            seq = len(self.message_queue) - 1
+        que = None
+        try:
+            que = self.message_queue[seq]
+        except IndexError:
+            que = Queue()
+        if self.verbose:
+            sys.stdout.write("[Session] get_queue: " + str(que.dump()) + "\n")
+        return que
+
+    def group_sendmsg(self, msg, group, instance="*", to="*"):
+        return self.enqueue(msg=msg, env={
+                "type": "send",
+                "from": self._lname,
+                "to": to,
+                "group": group,
+                "instance": instance })
+
+    def group_recvmsg(self, nonblock=True, seq=0):
+        que = self.dequeue()
+        return que.msg, que.env
+
+    def group_reply(self, routing, msg):
+        return self.enqueue(msg=msg, env={
+                "type": "send",
+                "from": self._lname,
+                "to": routing["from"],
+                "group": routing["group"],
+                "instance": routing["instance"],
+                "reply": routing["seq"] })
+
+    def get_message(self, group, to='*'):
+        if self._socket._closed:
+            raise SessionError("Session has been closed.")
+        que = Queue()
+        for q in self.message_queue:
+            if q.env['group'] == group:
+                self.message_queue.remove(q)
+                self.old_message_queue.append(q)
+                que = q
+        if self.verbose:
+            sys.stdout.write("[Session] get_message: " + str(que.dump()) + "\n")
+        return que.msg
+
+    def group_subscribe(self, group, instance = "*"):
+        if self._socket._closed:
+            raise SessionError("Session has been closed.")
+
+    def group_unsubscribe(self, group, instance = "*"):
+        if self._socket._closed:
+            raise SessionError("Session has been closed.")
diff --git a/src/bin/stats/tests/isc/config/Makefile.am b/src/bin/stats/tests/isc/config/Makefile.am
new file mode 100644
index 0000000000..ffbecdae03
--- /dev/null
+++ b/src/bin/stats/tests/isc/config/Makefile.am
@@ -0,0 +1,7 @@
+EXTRA_DIST = __init__.py ccsession.py
+CLEANFILES = __init__.pyc ccsession.pyc
+
+CLEANDIRS = __pycache__
+
+clean-local:
+	rm -rf $(CLEANDIRS)
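The mock `Session` above replaces BIND 10's real message bus with a plain in-memory FIFO list: `group_sendmsg` appends a message plus routing envelope, and `group_recvmsg` pops from the front. A minimal sketch of the same pattern (class and field names here are simplified illustrations, not the BIND 10 API itself):

```python
# Minimal sketch of the in-memory queue pattern used by the mock Session.
# MiniSession is a hypothetical name; only the enqueue/dequeue idea is
# taken from the patch above.
class MiniSession:
    def __init__(self):
        self.queue = []

    def group_sendmsg(self, msg, group):
        # Enqueue instead of writing to a real msgq socket.
        self.queue.append({"msg": msg, "env": {"group": group}})
        return len(self.queue) - 1      # sequence number

    def group_recvmsg(self):
        # Dequeue in FIFO order, like the mock's dequeue().
        if not self.queue:
            return None, None
        item = self.queue.pop(0)
        return item["msg"], item["env"]

s = MiniSession()
s.group_sendmsg({"command": ["status"]}, "Stats")
msg, env = s.group_recvmsg()
print(msg["command"][0], env["group"])
```

Because the queue is just a list, a unit test can inspect exactly what a module "sent" without any running `b10-msgq` process.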
diff --git a/src/bin/stats/tests/isc/config/__init__.py b/src/bin/stats/tests/isc/config/__init__.py
new file mode 100644
index 0000000000..4c49e956aa
--- /dev/null
+++ b/src/bin/stats/tests/isc/config/__init__.py
@@ -0,0 +1 @@
+from isc.config.ccsession import *
diff --git a/src/bin/stats/tests/isc/config/ccsession.py b/src/bin/stats/tests/isc/config/ccsession.py
new file mode 100644
index 0000000000..50f7c1b163
--- /dev/null
+++ b/src/bin/stats/tests/isc/config/ccsession.py
@@ -0,0 +1,249 @@
+# Copyright (C) 2010,2011 Internet Systems Consortium.
+#
+# Permission to use, copy, modify, and distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
+# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""
+A mock-up module of isc.config.ccsession
+
+*** NOTE ***
+It is only for testing the stats_httpd module and is not reusable by
+external modules.
+"""
+
+import json
+import os
+import time
+from isc.cc.session import Session
+
+COMMAND_CONFIG_UPDATE = "config_update"
+
+def parse_answer(msg):
+    assert 'result' in msg
+    try:
+        return msg['result'][0], msg['result'][1]
+    except IndexError:
+        return msg['result'][0], None
+
+def create_answer(rcode, arg = None):
+    if arg is None:
+        return { 'result': [ rcode ] }
+    else:
+        return { 'result': [ rcode, arg ] }
+
+def parse_command(msg):
+    assert 'command' in msg
+    try:
+        return msg['command'][0], msg['command'][1]
+    except IndexError:
+        return msg['command'][0], None
+
+def create_command(command_name, params = None):
+    if params is None:
+        return {"command": [command_name]}
+    else:
+        return {"command": [command_name, params]}
+
+def module_spec_from_file(spec_file, check = True):
+    try:
+        file = open(spec_file)
+        json_str = file.read()
+        module_spec = json.loads(json_str)
+        file.close()
+        return ModuleSpec(module_spec['module_spec'], check)
+    except IOError as ioe:
+        raise ModuleSpecError("JSON read error: " + str(ioe))
+    except ValueError as ve:
+        raise ModuleSpecError("JSON parse error: " + str(ve))
+    except KeyError as err:
+        raise ModuleSpecError("Data definition has no module_spec element")
+
+class ModuleSpecError(Exception):
+    pass
+
+class ModuleSpec:
+    def __init__(self, module_spec, check = True):
+        # check only config_data for testing
+        if check and "config_data" in module_spec:
+            _check_config_spec(module_spec["config_data"])
+        self._module_spec = module_spec
+
+    def get_config_spec(self):
+        return self._module_spec['config_data']
+
+    def get_commands_spec(self):
+        return self._module_spec['commands']
+
+    def get_module_name(self):
+        return self._module_spec['module_name']
+
+def _check_config_spec(config_data):
+    # config data is a list of items represented by dicts that contain
+    # things like "item_name", depending on the type they can have
+    # specific subitems
+    """Checks a list that contains the configuration part of the
Raises a ModuleSpecError if there is a + problem.""" + if type(config_data) != list: + raise ModuleSpecError("config_data is of type " + str(type(config_data)) + ", not a list of items") + for config_item in config_data: + _check_item_spec(config_item) + +def _check_item_spec(config_item): + """Checks the dict that defines one config item + (i.e. containing "item_name", "item_type", etc. + Raises a ModuleSpecError if there is an error""" + if type(config_item) != dict: + raise ModuleSpecError("item spec not a dict") + if "item_name" not in config_item: + raise ModuleSpecError("no item_name in config item") + if type(config_item["item_name"]) != str: + raise ModuleSpecError("item_name is not a string: " + str(config_item["item_name"])) + item_name = config_item["item_name"] + if "item_type" not in config_item: + raise ModuleSpecError("no item_type in config item") + item_type = config_item["item_type"] + if type(item_type) != str: + raise ModuleSpecError("item_type in " + item_name + " is not a string: " + str(type(item_type))) + if item_type not in ["integer", "real", "boolean", "string", "list", "map", "any"]: + raise ModuleSpecError("unknown item_type in " + item_name + ": " + item_type) + if "item_optional" in config_item: + if type(config_item["item_optional"]) != bool: + raise ModuleSpecError("item_default in " + item_name + " is not a boolean") + if not config_item["item_optional"] and "item_default" not in config_item: + raise ModuleSpecError("no default value for non-optional item " + item_name) + else: + raise ModuleSpecError("item_optional not in item " + item_name) + if "item_default" in config_item: + item_default = config_item["item_default"] + if (item_type == "integer" and type(item_default) != int) or \ + (item_type == "real" and type(item_default) != float) or \ + (item_type == "boolean" and type(item_default) != bool) or \ + (item_type == "string" and type(item_default) != str) or \ + (item_type == "list" and type(item_default) != list) or \ + 
+           (item_type == "map" and type(item_default) != dict):
+            raise ModuleSpecError("Wrong type for item_default in " + item_name)
+    # TODO: once we have check_type, run the item default through that with the list|map_item_spec
+    if item_type == "list":
+        if "list_item_spec" not in config_item:
+            raise ModuleSpecError("no list_item_spec in list item " + item_name)
+        if type(config_item["list_item_spec"]) != dict:
+            raise ModuleSpecError("list_item_spec in " + item_name + " is not a dict")
+        _check_item_spec(config_item["list_item_spec"])
+    if item_type == "map":
+        if "map_item_spec" not in config_item:
+            raise ModuleSpecError("no map_item_spec in map item " + item_name)
+        if type(config_item["map_item_spec"]) != list:
+            raise ModuleSpecError("map_item_spec in " + item_name + " is not a list")
+        for map_item in config_item["map_item_spec"]:
+            if type(map_item) != dict:
+                raise ModuleSpecError("map_item_spec element is not a dict")
+            _check_item_spec(map_item)
+    if 'item_format' in config_item and 'item_default' in config_item:
+        item_format = config_item["item_format"]
+        item_default = config_item["item_default"]
+        if not _check_format(item_default, item_format):
+            raise ModuleSpecError(
+                "Wrong format for " + str(item_default) + " in " + str(item_name))
+
+def _check_format(value, format_name):
+    """Check if the specified value and format are correct. Return True
+    if they are correct."""
+    # TODO: other format types should be added if necessary
+    time_formats = { 'date-time' : "%Y-%m-%dT%H:%M:%SZ",
+                     'date'      : "%Y-%m-%d",
+                     'time'      : "%H:%M:%S" }
+    for fmt in time_formats:
+        if format_name == fmt:
+            try:
+                time.strptime(value, time_formats[fmt])
+                return True
+            except (ValueError, TypeError):
+                break
+    return False
+
+class ModuleCCSessionError(Exception):
+    pass
+
+class DataNotFoundError(Exception):
+    pass
+
+class ConfigData:
+    def __init__(self, specification):
+        self.specification = specification
+
+    def get_value(self, identifier):
+        """Returns a tuple where the first item is the value at the
+        given identifier and the second item is always False,
+        regardless of whether the value is an unset default. Raises a
+        DataNotFoundError if the identifier is not found in the
+        specification file.
+        *** NOTE ***
+        There are some differences from the original method. This
+        method never handles local settings like the original
+        method does. But these different behaviors aren't a big issue
+        for a mock-up method of stats_httpd, because stats_httpd
+        calls this method only at startup."""
+        for config_map in self.get_module_spec().get_config_spec():
+            if config_map['item_name'] == identifier:
+                if 'item_default' in config_map:
+                    return config_map['item_default'], False
+        raise DataNotFoundError("item_name %s is not found in the specfile" % identifier)
+
+    def get_module_spec(self):
+        return self.specification
+
+class ModuleCCSession(ConfigData):
+    def __init__(self, spec_file_name, config_handler, command_handler, cc_session = None):
+        module_spec = module_spec_from_file(spec_file_name)
+        ConfigData.__init__(self, module_spec)
+        self._module_name = module_spec.get_module_name()
+        self.set_config_handler(config_handler)
+        self.set_command_handler(command_handler)
+        if not cc_session:
+            self._session = Session(verbose=True)
+        else:
+            self._session = cc_session
+
+    def start(self):
+        pass
+
+    def close(self):
self._session.close() + + def check_command(self, nonblock=True): + msg, env = self._session.group_recvmsg(nonblock) + if not msg or 'result' in msg: + return + cmd, arg = parse_command(msg) + answer = None + if cmd == COMMAND_CONFIG_UPDATE and self._config_handler: + answer = self._config_handler(arg) + elif env['group'] == self._module_name and self._command_handler: + answer = self._command_handler(cmd, arg) + if answer: + self._session.group_reply(env, answer) + + def set_config_handler(self, config_handler): + self._config_handler = config_handler + # should we run this right now since we've changed the handler? + + def set_command_handler(self, command_handler): + self._command_handler = command_handler + + def get_module_spec(self): + return self.specification + + def get_socket(self): + return self._session._socket + diff --git a/src/bin/stats/tests/isc/log/Makefile.am b/src/bin/stats/tests/isc/log/Makefile.am new file mode 100644 index 0000000000..457b9de1c2 --- /dev/null +++ b/src/bin/stats/tests/isc/log/Makefile.am @@ -0,0 +1,7 @@ +EXTRA_DIST = __init__.py +CLEANFILES = __init__.pyc + +CLEANDIRS = __pycache__ + +clean-local: + rm -rf $(CLEANDIRS) diff --git a/src/bin/stats/tests/isc/log/__init__.py b/src/bin/stats/tests/isc/log/__init__.py new file mode 100644 index 0000000000..641cf790c1 --- /dev/null +++ b/src/bin/stats/tests/isc/log/__init__.py @@ -0,0 +1,33 @@ +# Copyright (C) 2011 Internet Systems Consortium. +# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +# This file is not installed. The log.so is installed into the right place. +# It is only to find it in the .libs directory when we run as a test or +# from the build directory. +# But as nobody gives us the builddir explicitly (and we can't use generation +# from .in file, as it would put us into the builddir and we wouldn't be found) +# we guess from current directory. Any idea for something better? This should +# be enough for the tests, but would it work for B10_FROM_SOURCE as well? +# Should we look there? Or define something in bind10_config? + +import os +import sys + +for base in sys.path[:]: + loglibdir = os.path.join(base, 'isc/log/.libs') + if os.path.exists(loglibdir): + sys.path.insert(0, loglibdir) + +from log import * diff --git a/src/bin/stats/tests/isc/util/Makefile.am b/src/bin/stats/tests/isc/util/Makefile.am new file mode 100644 index 0000000000..9c74354ca3 --- /dev/null +++ b/src/bin/stats/tests/isc/util/Makefile.am @@ -0,0 +1,7 @@ +EXTRA_DIST = __init__.py process.py +CLEANFILES = __init__.pyc process.pyc + +CLEANDIRS = __pycache__ + +clean-local: + rm -rf $(CLEANDIRS) diff --git a/src/bin/stats/tests/isc/util/__init__.py b/src/bin/stats/tests/isc/util/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/bin/stats/tests/isc/util/process.py b/src/bin/stats/tests/isc/util/process.py new file mode 100644 index 0000000000..0f764c1872 --- /dev/null +++ b/src/bin/stats/tests/isc/util/process.py @@ -0,0 +1,21 @@ +# Copyright (C) 2010 Internet Systems Consortium. 
+# +# Permission to use, copy, modify, and distribute this software for any +# purpose with or without fee is hereby granted, provided that the above +# copyright notice and this permission notice appear in all copies. +# +# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM +# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, +# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING +# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, +# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION +# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +""" +A dummy function of isc.util.process.rename() +""" + +def rename(name=None): + pass diff --git a/src/bin/stats/tests/test_utils.py b/src/bin/stats/tests/test_utils.py deleted file mode 100644 index e79db48951..0000000000 --- a/src/bin/stats/tests/test_utils.py +++ /dev/null @@ -1,291 +0,0 @@ -""" -Utilities and mock modules for unittests of statistics modules - -""" -import os -import io -import time -import sys -import threading -import tempfile - -import msgq -import isc.config.cfgmgr -import stats -import stats_httpd - -# TODO: consider appropriate timeout seconds -TIMEOUT_SEC = 0.05 - -def send_command(command_name, module_name, params=None, session=None, nonblock=False, timeout=TIMEOUT_SEC): - if not session: - cc_session = isc.cc.Session() - else: - cc_session = session - orig_timeout = cc_session.get_timeout() - cc_session.set_timeout(timeout * 1000) - command = isc.config.ccsession.create_command(command_name, params) - seq = cc_session.group_sendmsg(command, module_name) - try: - (answer, env) = cc_session.group_recvmsg(nonblock, seq) - if answer: - return isc.config.ccsession.parse_answer(answer) - except isc.cc.SessionTimeout: - pass - finally: - if not session: - cc_session.close() - else: - 
cc_session.set_timeout(orig_timeout) - -def send_shutdown(module_name): - return send_command("shutdown", module_name) - -class ThreadingServerManager: - def __init__(self, server_class): - self.server_class = server_class - self.server_class_name = server_class.__name__ - self.server = self.server_class() - self.server._thread = threading.Thread( - name=self.server_class_name, target=self.server.run) - self.server._thread.daemon = True - - def run(self): - self.server._thread.start() - self.server._started.wait() - self.server._started.clear() - # waiting for the server's being ready for listening - time.sleep(TIMEOUT_SEC) - - def shutdown(self): - self.server.shutdown() - self.server._thread.join(TIMEOUT_SEC) - -class MockMsgq: - def __init__(self): - self._started = threading.Event() - self.msgq = msgq.MsgQ(None) - result = self.msgq.setup() - if result: - sys.exit("Error on Msgq startup: %s" % result) - - def run(self): - self._started.set() - try: - self.msgq.run() - except Exception: - pass - finally: - self.shutdown() - - def shutdown(self): - self.msgq.shutdown() - -class MockCfgmgr: - def __init__(self): - self._started = threading.Event() - self.cfgmgr = isc.config.cfgmgr.ConfigManager( - os.environ['CONFIG_TESTDATA_PATH'], "b10-config.db") - self.cfgmgr.read_config() - - def run(self): - self._started.set() - try: - self.cfgmgr.run() - finally: - self.shutdown() - - def shutdown(self): - self.cfgmgr.running = False - -class MockBoss: - spec_str = """\ -{ - "module_spec": { - "module_name": "Boss", - "module_description": "Mock Master process", - "config_data": [], - "commands": [ - { - "command_name": "sendstats", - "command_description": "Send data to a statistics module at once", - "command_args": [] - } - ], - "statistics": [ - { - "item_name": "boot_time", - "item_type": "string", - "item_optional": false, - "item_default": "1970-01-01T00:00:00Z", - "item_title": "Boot time", - "item_description": "A date time when bind10 process starts initially", - 
"item_format": "date-time" - } - ] - } -} -""" - _BASETIME = (2011, 6, 22, 8, 14, 8, 2, 173, 0) - - def __init__(self): - self._started = threading.Event() - self.running = False - self.spec_file = io.StringIO(self.spec_str) - # create ModuleCCSession object - self.mccs = isc.config.ModuleCCSession( - self.spec_file, - self.config_handler, - self.command_handler) - self.spec_file.close() - self.cc_session = self.mccs._session - self.got_command_name = '' - - def run(self): - self.mccs.start() - self.running = True - self._started.set() - while self.running: - self.mccs.check_command(False) - - def shutdown(self): - self.running = False - - def config_handler(self, new_config): - return isc.config.create_answer(0) - - def command_handler(self, command, *args, **kwargs): - self.got_command_name = command - if command == 'sendstats': - params = { "owner": "Boss", - "data": { - 'boot_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', self._BASETIME) - } - } - return send_command("set", "Stats", params=params, session=self.cc_session) - return isc.config.create_answer(1, "Unknown Command") - -class MockAuth: - spec_str = """\ -{ - "module_spec": { - "module_name": "Auth", - "module_description": "Mock Authoritative service", - "config_data": [], - "commands": [ - { - "command_name": "sendstats", - "command_description": "Send data to a statistics module at once", - "command_args": [] - } - ], - "statistics": [ - { - "item_name": "queries.tcp", - "item_type": "integer", - "item_optional": false, - "item_default": 0, - "item_title": "Queries TCP", - "item_description": "A number of total query counts which all auth servers receive over TCP since they started initially" - }, - { - "item_name": "queries.udp", - "item_type": "integer", - "item_optional": false, - "item_default": 0, - "item_title": "Queries UDP", - "item_description": "A number of total query counts which all auth servers receive over UDP since they started initially" - } - ] - } -} -""" - def __init__(self): - 
self._started = threading.Event() - self.running = False - self.spec_file = io.StringIO(self.spec_str) - # create ModuleCCSession object - self.mccs = isc.config.ModuleCCSession( - self.spec_file, - self.config_handler, - self.command_handler) - self.spec_file.close() - self.cc_session = self.mccs._session - self.got_command_name = '' - self.queries_tcp = 3 - self.queries_udp = 2 - - def run(self): - self.mccs.start() - self.running = True - self._started.set() - while self.running: - self.mccs.check_command(False) - - def shutdown(self): - self.running = False - - def config_handler(self, new_config): - return isc.config.create_answer(0) - - def command_handler(self, command, *args, **kwargs): - self.got_command_name = command - if command == 'sendstats': - params = { "owner": "Auth", - "data": { 'queries.tcp': self.queries_tcp, - 'queries.udp': self.queries_udp } } - return send_command("set", "Stats", params=params, session=self.cc_session) - return isc.config.create_answer(1, "Unknown Command") - -class MyStats(stats.Stats): - def __init__(self): - self._started = threading.Event() - stats.Stats.__init__(self) - - def run(self): - self._started.set() - stats.Stats.start(self) - - def shutdown(self): - send_shutdown("Stats") - -class MyStatsHttpd(stats_httpd.StatsHttpd): - def __init__(self): - self._started = threading.Event() - stats_httpd.StatsHttpd.__init__(self) - - def run(self): - self._started.set() - stats_httpd.StatsHttpd.start(self) - - def shutdown(self): - send_shutdown("StatsHttpd") - -class BaseModules: - def __init__(self): - self.class_name = BaseModules.__name__ - - # Change value of BIND10_MSGQ_SOCKET_FILE in environment variables - os.environ['BIND10_MSGQ_SOCKET_FILE'] = tempfile.mktemp(prefix='unix_socket.') - # MockMsgq - self.msgq = ThreadingServerManager(MockMsgq) - self.msgq.run() - # MockCfgmgr - self.cfgmgr = ThreadingServerManager(MockCfgmgr) - self.cfgmgr.run() - # MockBoss - self.boss = ThreadingServerManager(MockBoss) - 
self.boss.run() - # MockAuth - self.auth = ThreadingServerManager(MockAuth) - self.auth.run() - - def shutdown(self): - # MockAuth - self.auth.shutdown() - # MockBoss - self.boss.shutdown() - # MockCfgmgr - self.cfgmgr.shutdown() - # MockMsgq - self.msgq.shutdown() diff --git a/src/bin/stats/tests/testdata/Makefile.am b/src/bin/stats/tests/testdata/Makefile.am new file mode 100644 index 0000000000..1b8df6d736 --- /dev/null +++ b/src/bin/stats/tests/testdata/Makefile.am @@ -0,0 +1 @@ +EXTRA_DIST = stats_test.spec diff --git a/src/bin/stats/tests/testdata/stats_test.spec b/src/bin/stats/tests/testdata/stats_test.spec new file mode 100644 index 0000000000..8136756440 --- /dev/null +++ b/src/bin/stats/tests/testdata/stats_test.spec @@ -0,0 +1,19 @@ +{ + "module_spec": { + "module_name": "Stats", + "module_description": "Stats daemon", + "config_data": [], + "commands": [ + { + "command_name": "status", + "command_description": "identify whether stats module is alive or not", + "command_args": [] + }, + { + "command_name": "the_dummy", + "command_description": "this is for testing", + "command_args": [] + } + ] + } +} diff --git a/src/bin/tests/Makefile.am b/src/bin/tests/Makefile.am index 0dc5021302..56ff68b0c7 100644 --- a/src/bin/tests/Makefile.am +++ b/src/bin/tests/Makefile.am @@ -14,7 +14,7 @@ endif # test using command-line arguments, so use check-local target instead of TESTS check-local: if ENABLE_PYTHON_COVERAGE - touch $(abs_top_srcdir)/.coverage + touch $(abs_top_srcdir)/.coverage rm -f .coverage ${LN_S} $(abs_top_srcdir)/.coverage .coverage endif diff --git a/tests/system/bindctl/tests.sh b/tests/system/bindctl/tests.sh index 49ef0f17b0..6923c4167c 100755 --- a/tests/system/bindctl/tests.sh +++ b/tests/system/bindctl/tests.sh @@ -24,10 +24,6 @@ SYSTEMTESTTOP=.. 
status=0 n=0 -# TODO: consider consistency with statistics definition in auth.spec -auth_queries_tcp="\" -auth_queries_udp="\" - echo "I:Checking b10-auth is working by default ($n)" $DIG +norec @10.53.0.1 -p 53210 ns.example.com. A >dig.out.$n || status=1 # perform a simple check on the output (digcomp would be too much for this) @@ -44,8 +40,8 @@ echo 'Stats show --csv-file-dir=$BINDCTL_CSV_DIR > bindctl.out.$n || status=1 # the server should have received 1 UDP and 1 TCP queries (TCP query was # sent from the server startup script) -grep $auth_queries_tcp".*\<1\>" bindctl.out.$n > /dev/null || status=1 -grep $auth_queries_udp".*\<1\>" bindctl.out.$n > /dev/null || status=1 +grep "\"auth.queries.tcp\": 1," bindctl.out.$n > /dev/null || status=1 +grep "\"auth.queries.udp\": 1," bindctl.out.$n > /dev/null || status=1 if [ $status != 0 ]; then echo "I:failed"; fi n=`expr $n + 1` @@ -77,8 +73,8 @@ echo 'Stats show ' | $RUN_BINDCTL \ --csv-file-dir=$BINDCTL_CSV_DIR > bindctl.out.$n || status=1 # The statistics counters should have been reset while stop/start. -grep $auth_queries_tcp".*\<0\>" bindctl.out.$n > /dev/null || status=1 -grep $auth_queries_udp".*\<1\>" bindctl.out.$n > /dev/null || status=1 +grep "\"auth.queries.tcp\": 0," bindctl.out.$n > /dev/null || status=1 +grep "\"auth.queries.udp\": 1," bindctl.out.$n > /dev/null || status=1 if [ $status != 0 ]; then echo "I:failed"; fi n=`expr $n + 1` @@ -101,8 +97,8 @@ echo 'Stats show ' | $RUN_BINDCTL \ --csv-file-dir=$BINDCTL_CSV_DIR > bindctl.out.$n || status=1 # The statistics counters shouldn't be reset due to hot-swapping datasource. 
-grep $auth_queries_tcp".*\<0\>" bindctl.out.$n > /dev/null || status=1 -grep $auth_queries_udp".*\<2\>" bindctl.out.$n > /dev/null || status=1 +grep "\"auth.queries.tcp\": 0," bindctl.out.$n > /dev/null || status=1 +grep "\"auth.queries.udp\": 2," bindctl.out.$n > /dev/null || status=1 if [ $status != 0 ]; then echo "I:failed"; fi n=`expr $n + 1` From 3f5a0900a568436b011fc14b628b71bb130ae5f7 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Tue, 16 Aug 2011 13:11:46 +0200 Subject: [PATCH 158/175] [1063] Split long test --- src/lib/datasrc/tests/database_unittest.cc | 320 +++++++++++---------- 1 file changed, 167 insertions(+), 153 deletions(-) diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index f4b5d0948c..5526bd4105 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -336,6 +336,21 @@ public: EXPECT_EQ(42, finder->zone_id()); EXPECT_EQ(current_database_, &finder->database()); } + + shared_ptr getFinder() { + DataSourceClient::FindResult zone( + client_->findZone(Name("example.org"))); + EXPECT_EQ(result::SUCCESS, zone.code); + shared_ptr finder( + dynamic_pointer_cast(zone.zone_finder)); + EXPECT_EQ(42, finder->zone_id()); + EXPECT_FALSE(current_database_->searchRunning()); + + return (finder); + } + + std::vector expected_rdatas_; + std::vector expected_sig_rdatas_; }; TEST_F(DatabaseClientTest, zoneNotFound) { @@ -388,24 +403,24 @@ doFindTest(shared_ptr finder, const isc::dns::RRType& expected_type, const isc::dns::RRTTL expected_ttl, ZoneFinder::Result expected_result, - const std::vector& expected_rdatas, - const std::vector& expected_sig_rdatas, + const std::vector& expected_rdatas_, + const std::vector& expected_sig_rdatas_, const isc::dns::Name& expected_name = isc::dns::Name::ROOT_NAME()) { SCOPED_TRACE("doFindTest " + name.toText() + " " + type.toText()); ZoneFinder::FindResult result = finder->find(name, type, NULL, 
ZoneFinder::FIND_DEFAULT); ASSERT_EQ(expected_result, result.code) << name << " " << type; - if (expected_rdatas.size() > 0) { + if (expected_rdatas_.size() > 0) { checkRRset(result.rrset, expected_name != Name(".") ? expected_name : name, finder->getClass(), expected_type, expected_ttl, - expected_rdatas); + expected_rdatas_); - if (expected_sig_rdatas.size() > 0) { + if (expected_sig_rdatas_.size() > 0) { checkRRset(result.rrset->getRRsig(), expected_name != Name(".") ? expected_name : name, finder->getClass(), isc::dns::RRType::RRSIG(), expected_ttl, - expected_sig_rdatas); + expected_sig_rdatas_); } else { EXPECT_EQ(isc::dns::RRsetPtr(), result.rrset->getRRsig()); } @@ -416,227 +431,220 @@ doFindTest(shared_ptr finder, } // end anonymous namespace TEST_F(DatabaseClientTest, find) { - DataSourceClient::FindResult zone(client_->findZone(Name("example.org"))); - ASSERT_EQ(result::SUCCESS, zone.code); - shared_ptr finder( - dynamic_pointer_cast(zone.zone_finder)); - EXPECT_EQ(42, finder->zone_id()); - EXPECT_FALSE(current_database_->searchRunning()); - std::vector expected_rdatas; - std::vector expected_sig_rdatas; + shared_ptr finder(getFinder()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); doFindTest(finder, isc::dns::Name("www.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); - expected_rdatas.push_back("192.0.2.2"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_rdatas_.push_back("192.0.2.2"); doFindTest(finder, isc::dns::Name("www2.example.org."), 
isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("2001:db8::1"); - expected_rdatas.push_back("2001:db8::2"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("2001:db8::1"); + expected_rdatas_.push_back("2001:db8::2"); doFindTest(finder, isc::dns::Name("www.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); doFindTest(finder, isc::dns::Name("www.example.org."), isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("www.example.org."); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("www.example.org."); doFindTest(finder, isc::dns::Name("cname.example.org."), isc::dns::RRType::A(), isc::dns::RRType::CNAME(), isc::dns::RRTTL(3600), ZoneFinder::CNAME, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("www.example.org."); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("www.example.org."); doFindTest(finder, isc::dns::Name("cname.example.org."), 
isc::dns::RRType::CNAME(), isc::dns::RRType::CNAME(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); doFindTest(finder, isc::dns::Name("doesnotexist.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), ZoneFinder::NXDOMAIN, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); - expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); - expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("2001:db8::1"); - expected_rdatas.push_back("2001:db8::2"); - expected_sig_rdatas.push_back("AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("2001:db8::1"); + expected_rdatas_.push_back("2001:db8::2"); + expected_sig_rdatas_.push_back("AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); doFindTest(finder, isc::dns::Name("signed1.example.org."), isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("www.example.org."); - expected_sig_rdatas.push_back("CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("www.example.org."); + expected_sig_rdatas_.push_back("CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signedcname1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::CNAME(), isc::dns::RRTTL(3600), ZoneFinder::CNAME, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); - expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); - expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12346 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signed2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("2001:db8::2"); - expected_rdatas.push_back("2001:db8::1"); - expected_sig_rdatas.push_back("AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("2001:db8::2"); + expected_rdatas_.push_back("2001:db8::1"); + expected_sig_rdatas_.push_back("AAAA 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signed2.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); doFindTest(finder, isc::dns::Name("signed2.example.org."), isc::dns::RRType::TXT(), isc::dns::RRType::TXT(), isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("www.example.org."); - expected_sig_rdatas.push_back("CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("www.example.org."); + expected_sig_rdatas_.push_back("CNAME 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("signedcname2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::CNAME(), isc::dns::RRTTL(3600), ZoneFinder::CNAME, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); - expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("acnamesig1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); - expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("acnamesig2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); - expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("acnamesig3.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); - expected_rdatas.push_back("192.0.2.2"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_rdatas_.push_back("192.0.2.2"); doFindTest(finder, isc::dns::Name("ttldiff1.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(360), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); - expected_rdatas.push_back("192.0.2.2"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_rdatas_.push_back("192.0.2.2"); doFindTest(finder, isc::dns::Name("ttldiff2.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(360), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); @@ -712,112 +720,118 @@ TEST_F(DatabaseClientTest, find) { // This RRSIG has the wrong sigtype field, which should be // an error if we decide to keep using that field // Right now the field is ignored, so it does not error - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); - expected_sig_rdatas.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. 
FAKEFAKEFAKE"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.push_back("A 5 3 3600 20000101000000 20000201000000 12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("badsigtype.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, - expected_rdatas, expected_sig_rdatas); + expected_rdatas_, expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); +} + +TEST_F(DatabaseClientTest, findDelegation) { + shared_ptr finder(getFinder()); // The apex should not be considered delegation point and we can access // data - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); doFindTest(finder, isc::dns::Name("example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), - isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas, - expected_sig_rdatas); + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas_, + expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); - expected_rdatas.push_back("ns.example.com."); - expected_sig_rdatas.push_back("NS 5 3 3600 20000101000000 20000201000000 " + expected_rdatas_.clear(); + expected_rdatas_.push_back("ns.example.com."); + expected_sig_rdatas_.push_back("NS 5 3 3600 20000101000000 20000201000000 " "12345 example.org. 
FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("example.org."), isc::dns::RRType::NS(), isc::dns::RRType::NS(), - isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas, - expected_sig_rdatas); + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas_, + expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); // Check when we ask for something below delegation point, we get the NS // (Both when the RRset there exists and doesn't) - expected_rdatas.clear(); - expected_sig_rdatas.clear(); - expected_rdatas.push_back("ns.example.com."); - expected_rdatas.push_back("ns.delegation.example.org."); - expected_sig_rdatas.push_back("NS 5 3 3600 20000101000000 20000201000000 " + expected_rdatas_.clear(); + expected_sig_rdatas_.clear(); + expected_rdatas_.push_back("ns.example.com."); + expected_rdatas_.push_back("ns.delegation.example.org."); + expected_sig_rdatas_.push_back("NS 5 3 3600 20000101000000 20000201000000 " "12345 example.org. FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("ns.delegation.example.org."), isc::dns::RRType::A(), isc::dns::RRType::NS(), - isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas, - expected_sig_rdatas, isc::dns::Name("delegation.example.org.")); + isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_, + expected_sig_rdatas_, + isc::dns::Name("delegation.example.org.")); EXPECT_FALSE(current_database_->searchRunning()); doFindTest(finder, isc::dns::Name("ns.delegation.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::NS(), - isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas, - expected_sig_rdatas, isc::dns::Name("delegation.example.org.")); + isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_, + expected_sig_rdatas_, + isc::dns::Name("delegation.example.org.")); EXPECT_FALSE(current_database_->searchRunning()); // Even when we check directly at the delegation point, we should get // the NS doFindTest(finder, isc::dns::Name("delegation.example.org."), 
isc::dns::RRType::AAAA(), isc::dns::RRType::NS(), - isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas, - expected_sig_rdatas); + isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_, + expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); // And when we ask directly for the NS, we should still get delegation doFindTest(finder, isc::dns::Name("delegation.example.org."), isc::dns::RRType::NS(), isc::dns::RRType::NS(), - isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas, - expected_sig_rdatas); + isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_, + expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); // Now test delegation. If it is below the delegation point, we should get // the DNAME (the one with data under DNAME is invalid zone, but we test // the behaviour anyway just to make sure) - expected_rdatas.clear(); - expected_rdatas.push_back("dname.example.com."); - expected_sig_rdatas.clear(); - expected_sig_rdatas.push_back("DNAME 5 3 3600 20000101000000 " + expected_rdatas_.clear(); + expected_rdatas_.push_back("dname.example.com."); + expected_sig_rdatas_.clear(); + expected_sig_rdatas_.push_back("DNAME 5 3 3600 20000101000000 " "20000201000000 12345 example.org. 
" "FAKEFAKEFAKE"); doFindTest(finder, isc::dns::Name("below.dname.example.org."), isc::dns::RRType::A(), isc::dns::RRType::DNAME(), - isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas, - expected_sig_rdatas, isc::dns::Name("dname.example.org.")); + isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas_, + expected_sig_rdatas_, isc::dns::Name("dname.example.org.")); EXPECT_FALSE(current_database_->searchRunning()); doFindTest(finder, isc::dns::Name("below.dname.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::DNAME(), - isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas, - expected_sig_rdatas, isc::dns::Name("dname.example.org.")); + isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas_, + expected_sig_rdatas_, isc::dns::Name("dname.example.org.")); EXPECT_FALSE(current_database_->searchRunning()); // Asking direcly for DNAME should give SUCCESS doFindTest(finder, isc::dns::Name("dname.example.org."), isc::dns::RRType::DNAME(), isc::dns::RRType::DNAME(), - isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas, - expected_sig_rdatas); + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas_, + expected_sig_rdatas_); // But we don't delegate at DNAME point - expected_rdatas.clear(); - expected_rdatas.push_back("192.0.2.1"); - expected_sig_rdatas.clear(); + expected_rdatas_.clear(); + expected_rdatas_.push_back("192.0.2.1"); + expected_sig_rdatas_.clear(); doFindTest(finder, isc::dns::Name("dname.example.org."), isc::dns::RRType::A(), isc::dns::RRType::A(), - isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas, - expected_sig_rdatas); + isc::dns::RRTTL(3600), ZoneFinder::SUCCESS, expected_rdatas_, + expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); - expected_rdatas.clear(); + expected_rdatas_.clear(); doFindTest(finder, isc::dns::Name("dname.example.org."), isc::dns::RRType::AAAA(), isc::dns::RRType::AAAA(), - isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, expected_rdatas, - 
expected_sig_rdatas); + isc::dns::RRTTL(3600), ZoneFinder::NXRRSET, expected_rdatas_, + expected_sig_rdatas_); EXPECT_FALSE(current_database_->searchRunning()); // This is broken dname, it contains two targets From b3bcd825cfb9c19a62a7db4d12717e85aca0b1e8 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Tue, 16 Aug 2011 13:17:29 +0200 Subject: [PATCH 159/175] [1063] Few more tests --- src/lib/datasrc/tests/database_unittest.cc | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/src/lib/datasrc/tests/database_unittest.cc b/src/lib/datasrc/tests/database_unittest.cc index 5526bd4105..9efb1dd421 100644 --- a/src/lib/datasrc/tests/database_unittest.cc +++ b/src/lib/datasrc/tests/database_unittest.cc @@ -272,6 +272,8 @@ private: addCurName("delegation.example.org."); addRecord("A", "3600", "", "192.0.2.1"); addCurName("ns.delegation.example.org."); + addRecord("A", "3600", "", "192.0.2.1"); + addCurName("deep.below.delegation.example.org."); addRecord("A", "3600", "", "192.0.2.1"); addRecord("DNAME", "3600", "", "dname.example.com."); @@ -775,6 +777,11 @@ TEST_F(DatabaseClientTest, findDelegation) { isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_, expected_sig_rdatas_, isc::dns::Name("delegation.example.org.")); + doFindTest(finder, isc::dns::Name("deep.below.delegation.example.org."), + isc::dns::RRType::AAAA(), isc::dns::RRType::NS(), + isc::dns::RRTTL(3600), ZoneFinder::DELEGATION, expected_rdatas_, + expected_sig_rdatas_, + isc::dns::Name("delegation.example.org.")); EXPECT_FALSE(current_database_->searchRunning()); // Even when we check directly at the delegation point, we should get @@ -811,6 +818,11 @@ TEST_F(DatabaseClientTest, findDelegation) { isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas_, expected_sig_rdatas_, isc::dns::Name("dname.example.org.")); EXPECT_FALSE(current_database_->searchRunning()); + doFindTest(finder, isc::dns::Name("really.deep.below.dname.example.org."), + isc::dns::RRType::AAAA(), 
isc::dns::RRType::DNAME(), + isc::dns::RRTTL(3600), ZoneFinder::DNAME, expected_rdatas_, + expected_sig_rdatas_, isc::dns::Name("dname.example.org.")); + EXPECT_FALSE(current_database_->searchRunning()); // Asking direcly for DNAME should give SUCCESS doFindTest(finder, isc::dns::Name("dname.example.org."), From 4cbf309be8a302afe3bc041da11c24b593464157 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Tue, 16 Aug 2011 13:29:59 +0200 Subject: [PATCH 160/175] [1063] Little bit of logging --- src/lib/datasrc/database.cc | 10 ++++++++++ src/lib/datasrc/datasrc_messages.mes | 14 ++++++++++++++ 2 files changed, 24 insertions(+) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index 287602ab1f..b9d7330663 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -314,8 +314,14 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, // (it can be only NS or DNAME here) result_rrset = found.second; if (result_rrset->getType() == isc::dns::RRType::NS()) { + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_DELEGATION). + arg(superdomain); result_status = DELEGATION; } else { + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_DNAME). + arg(superdomain); result_status = DNAME; } // Don't search more @@ -331,7 +337,11 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, records_found = found.first; result_rrset = found.second; if (result_rrset && name != origin && + result_rrset->getType() == isc::dns::RRType::NS()) { + LOG_DEBUG(logger, DBG_TRACE_DETAILED, + DATASRC_DATABASE_FOUND_DELEGATION_EXACT). 
+ arg(name); result_status = DELEGATION; } else if (result_rrset && type != isc::dns::RRType::CNAME() && result_rrset->getType() == isc::dns::RRType::CNAME()) { diff --git a/src/lib/datasrc/datasrc_messages.mes b/src/lib/datasrc/datasrc_messages.mes index 6af4fe6678..a080a6a92c 100644 --- a/src/lib/datasrc/datasrc_messages.mes +++ b/src/lib/datasrc/datasrc_messages.mes @@ -90,6 +90,20 @@ most likely points to a logic error in the code, and can be considered a bug. The current search is aborted. Specific information about the exception is printed in this error message. +% DATASRC_DATABASE_FOUND_DELEGATION Found delegation at %1 +When searching for a domain, the program met a delegation to a different zone +at the given domain name. It will return that one instead. + +% DATASRC_DATABASE_FOUND_DELEGATION_EXACT Found delegation at %1 (exact match) +The program found the domain requested, but it is a delegation point to a +different zone, therefore it is not authoritative for this domain name. +It will return the NS record instead. + +% DATASRC_DATABASE_FOUND_DNAME Found DNAME at %1 +When searching for a domain, the program met a DNAME redirection to a different +place in the domain space at the given domain name. It will return that one +instead. + % DATASRC_DATABASE_FOUND_NXDOMAIN search in datasource %1 resulted in NXDOMAIN for %2/%3/%4 The data returned by the database backend did not contain any data for the given domain name, class and type. 
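The find() logic that patch 160 instruments can be summarized as: walk the superdomains between the zone origin and the queried name, stopping at the first NS (delegation) or DNAME encountered; an NS at the queried name itself, other than at the origin, is also a delegation. A rough Python sketch of that walk, under the assumption that `records` is a plain name-to-RRset map (the real finder fetches records through the DatabaseAccessor, and query type, CNAME and RRSIG handling are omitted):

```python
def classify(records, origin, name):
    """Sketch of the delegation/DNAME walk logged by patch 160.

    `records` maps an absolute name (no trailing dot) to a dict of
    RRtype -> rdata list; this is a stand-in for the database backend.
    """
    labels = name.split(".")
    depth_origin = len(origin.split("."))
    # Superdomains strictly between the origin and the queried name,
    # nearest the origin first.
    for depth in range(depth_origin + 1, len(labels)):
        superdomain = ".".join(labels[len(labels) - depth:])
        rrsets = records.get(superdomain, {})
        if "NS" in rrsets:
            return ("DELEGATION", superdomain)  # DATASRC_DATABASE_FOUND_DELEGATION
        if "DNAME" in rrsets:
            return ("DNAME", superdomain)       # DATASRC_DATABASE_FOUND_DNAME
    rrsets = records.get(name, {})
    if "NS" in rrsets and name != origin:
        return ("DELEGATION", name)             # ..._FOUND_DELEGATION_EXACT
    if not rrsets:
        return ("NXDOMAIN", None)               # ..._FOUND_NXDOMAIN
    return ("SUCCESS", name)

records = {
    "example.org": {"NS": ["ns.example.com."], "A": ["192.0.2.1"]},
    "delegation.example.org": {"NS": ["ns.example.com."]},
    "dname.example.org": {"DNAME": ["dname.example.com."], "A": ["192.0.2.1"]},
}
# Matches the unit tests above: anything below the cut point delegates...
print(classify(records, "example.org", "deep.below.delegation.example.org"))
# ...but the apex is never treated as a delegation point.
print(classify(records, "example.org", "example.org"))
```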
From b06a3e2ba1febb9e34458c5106f8d1629a191d5f Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Tue, 16 Aug 2011 13:45:52 +0200 Subject: [PATCH 161/175] [1064] Documentation update --- src/lib/datasrc/database.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/lib/datasrc/database.h b/src/lib/datasrc/database.h index eaeecc57e8..95782ef3bb 100644 --- a/src/lib/datasrc/database.h +++ b/src/lib/datasrc/database.h @@ -268,7 +268,8 @@ public: * \param name The name to find * \param type The RRType to find * \param target Unused at this moment - * \param options Unused at this moment + * \param options Options about how to search. + * See ZoneFinder::FindOptions. */ virtual FindResult find(const isc::dns::Name& name, const isc::dns::RRType& type, From e074df43e95dc002374de30503ba44e203b04788 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 16 Aug 2011 07:15:49 -0500 Subject: [PATCH 162/175] [master] add revision for entry 278 --- ChangeLog | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ChangeLog b/ChangeLog index 56bf8e97d7..83cce337cf 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,6 +1,6 @@ 278. [doc] jelte Add logging configuration documentation to the guide. - (Trac #1011, git TODO) + (Trac #1011, git 2cc500af0929c1f268aeb6f8480bc428af70f4c4) 277. [func] jerry Implement the SRV rrtype according to RFC2782. From c74d3b7f393f3934bae22fc9d3a4a49e2211aadb Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 16 Aug 2011 07:33:45 -0500 Subject: [PATCH 163/175] [jreed-docs-2] fix typo --- doc/guide/bind10-guide.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index 59914cfd89..ef4a18612b 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -547,7 +547,7 @@ Debian and Ubuntu: --prefix - Define the the installation location (the + Define the installation location (the default is /usr/local/). 
From 810c79d6d9b8efbc12ec8e1ad727cf002f2dedc6 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 16 Aug 2011 07:35:13 -0500 Subject: [PATCH 164/175] [jreed-docs-2] reformat some long lines (no content change) --- doc/guide/bind10-guide.xml | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/doc/guide/bind10-guide.xml b/doc/guide/bind10-guide.xml index ef4a18612b..87c6ac1bde 100644 --- a/doc/guide/bind10-guide.xml +++ b/doc/guide/bind10-guide.xml @@ -1599,14 +1599,15 @@ then change those defaults with config set Resolver/forward_addresses[0]/address error 111 opening TCP socket to 127.0.0.1(53) - A brief description of the cause of the problem. Within this text, - information relating to the condition that caused the message to - be logged will be included. In this example, error number 111 - (an operating system-specific error number) was encountered when - trying to open a TCP connection to port 53 on the local system - (address 127.0.0.1). The next step would be to find out the reason - for the failure by consulting your system's documentation to - identify what error number 111 means. + A brief description of the cause of the problem. + Within this text, information relating to the condition + that caused the message to be logged will be included. + In this example, error number 111 (an operating + system-specific error number) was encountered when trying + to open a TCP connection to port 53 on the local system + (address 127.0.0.1). The next step would be to find + out the reason for the failure by consulting your system's + documentation to identify what error number 111 means. 
From 5253640054d48f7816aa00c803f5bc593c0c12c1 Mon Sep 17 00:00:00 2001 From: chenzhengzhang Date: Tue, 16 Aug 2011 21:15:26 +0800 Subject: [PATCH 165/175] [master] merge #1114: Implement AFSDB rrtype --- ChangeLog | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/ChangeLog b/ChangeLog index 83cce337cf..24134fd76c 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,9 +1,13 @@ +279. [func] jerry + libdns++: Implement the AFSDB rrtype according to RFC1183. + (Trac #1114, git TODO) + 278. [doc] jelte Add logging configuration documentation to the guide. (Trac #1011, git 2cc500af0929c1f268aeb6f8480bc428af70f4c4) 277. [func] jerry - Implement the SRV rrtype according to RFC2782. + libdns++: Implement the SRV rrtype according to RFC2782. (Trac #1128, git 5fd94aa027828c50e63ae1073d9d6708e0a9c223) 276. [func] stephen From 6ad78d124740f1ea18f6f93721ec6f152364e878 Mon Sep 17 00:00:00 2001 From: chenzhengzhang Date: Tue, 16 Aug 2011 21:18:01 +0800 Subject: [PATCH 166/175] [master] update ChangeLog for #1114 --- ChangeLog | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ChangeLog b/ChangeLog index 24134fd76c..b58cb7ac2b 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,6 +1,6 @@ 279. [func] jerry libdns++: Implement the AFSDB rrtype according to RFC1183. - (Trac #1114, git TODO) + (Trac #1114, git ce052cd92cd128ea3db5a8f154bd151956c2920c) 278. [doc] jelte Add logging configuration documentation to the guide. From 691c232b2655673ac352beafc0bfba4bc966f8f8 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 16 Aug 2011 08:20:02 -0500 Subject: [PATCH 167/175] [master] clarify reset and remove even though these may be removed from stats soon. 
Also add some TODO comments --- src/bin/stats/b10-stats.xml | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/src/bin/stats/b10-stats.xml b/src/bin/stats/b10-stats.xml index 13e568df63..445ac4393c 100644 --- a/src/bin/stats/b10-stats.xml +++ b/src/bin/stats/b10-stats.xml @@ -115,9 +115,11 @@ + remove removes the named statistics name and data. - reset + + reset will reset all statistics data to + default values except for constant names. + This may re-add previously removed statistics names. set + @@ -161,6 +168,8 @@ when starts collecting data An optional item name may be specified to receive individual output. + + shutdown will shutdown the b10-stats process. From 9df1f04f8b1f7091ab32dcd56fb6e47e3e96d5a7 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 16 Aug 2011 08:22:18 -0500 Subject: [PATCH 168/175] [master] moved the STATISTICS section to after the configuration section and renamed it --- src/bin/stats/b10-stats.xml | 87 ++++++++++++++++++------------------- 1 file changed, 43 insertions(+), 44 deletions(-) diff --git a/src/bin/stats/b10-stats.xml b/src/bin/stats/b10-stats.xml index 445ac4393c..f2b6c03de0 100644 --- a/src/bin/stats/b10-stats.xml +++ b/src/bin/stats/b10-stats.xml @@ -87,50 +87,6 @@ - - DEFAULT STATISTICS - - - The b10-stats daemon contains - built-in statistics: - - - - - - report_time - - The latest report date and time in - ISO 8601 format. - - - - stats.timestamp - The current date and time represented in - seconds since UNIX epoch (1970-01-01T0 0:00:00Z) with - precision (delimited with a period) up to - one hundred thousandth of second. - - - - - - - - - - CONFIGURATION AND COMMANDS @@ -183,6 +139,49 @@ when starts collecting data + + STATISTICS DATA + + + The b10-stats daemon contains + built-in statistics: + + + + + + report_time + + The latest report date and time in + ISO 8601 format. 
+ + + + stats.timestamp + The current date and time represented in + seconds since UNIX epoch (1970-01-01T0 0:00:00Z) with + precision (delimited with a period) up to + one hundred thousandth of second. + + + + + + + + + FILES From 0fe4f0151ae7a994aaf305e7985d4ba9f992e482 Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Tue, 16 Aug 2011 15:28:52 +0200 Subject: [PATCH 169/175] [1063] Name of DB in log messages --- src/lib/datasrc/database.cc | 6 +++--- src/lib/datasrc/datasrc_messages.mes | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/src/lib/datasrc/database.cc b/src/lib/datasrc/database.cc index b9d7330663..166a1d2fb7 100644 --- a/src/lib/datasrc/database.cc +++ b/src/lib/datasrc/database.cc @@ -316,12 +316,12 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, if (result_rrset->getType() == isc::dns::RRType::NS()) { LOG_DEBUG(logger, DBG_TRACE_DETAILED, DATASRC_DATABASE_FOUND_DELEGATION). - arg(superdomain); + arg(database_->getDBName()).arg(superdomain); result_status = DELEGATION; } else { LOG_DEBUG(logger, DBG_TRACE_DETAILED, DATASRC_DATABASE_FOUND_DNAME). - arg(superdomain); + arg(database_->getDBName()).arg(superdomain); result_status = DNAME; } // Don't search more @@ -341,7 +341,7 @@ DatabaseClient::Finder::find(const isc::dns::Name& name, result_rrset->getType() == isc::dns::RRType::NS()) { LOG_DEBUG(logger, DBG_TRACE_DETAILED, DATASRC_DATABASE_FOUND_DELEGATION_EXACT). - arg(name); + arg(database_->getDBName()).arg(name); result_status = DELEGATION; } else if (result_rrset && type != isc::dns::RRType::CNAME() && result_rrset->getType() == isc::dns::RRType::CNAME()) { diff --git a/src/lib/datasrc/datasrc_messages.mes b/src/lib/datasrc/datasrc_messages.mes index a080a6a92c..190adbe3ac 100644 --- a/src/lib/datasrc/datasrc_messages.mes +++ b/src/lib/datasrc/datasrc_messages.mes @@ -90,16 +90,16 @@ most likely points to a logic error in the code, and can be considered a bug. The current search is aborted. 
Specific information about the exception is printed in this error message. -% DATASRC_DATABASE_FOUND_DELEGATION Found delegation at %1 +% DATASRC_DATABASE_FOUND_DELEGATION Found delegation at %2 in %1 When searching for a domain, the program met a delegation to a different zone at the given domain name. It will return that one instead. -% DATASRC_DATABASE_FOUND_DELEGATION_EXACT Found delegation at %1 (exact match) +% DATASRC_DATABASE_FOUND_DELEGATION_EXACT Found delegation at %2 (exact match) in %1 The program found the domain requested, but it is a delegation point to a different zone, therefore it is not authoritative for this domain name. It will return the NS record instead. -% DATASRC_DATABASE_FOUND_DNAME Found DNAME at %1 +% DATASRC_DATABASE_FOUND_DNAME Found DNAME at %2 in %1 When searching for a domain, the program met a DNAME redirection to a different place in the domain space at the given domain name. It will return that one instead. From 98e74ad62b23ce33f66e3841431511136bc1c2f8 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 16 Aug 2011 08:40:30 -0500 Subject: [PATCH 170/175] [master] document other b10-stats statistics --- src/bin/stats/b10-stats.xml | 55 +++++++++++++++++++++++++------------ 1 file changed, 38 insertions(+), 17 deletions(-) diff --git a/src/bin/stats/b10-stats.xml b/src/bin/stats/b10-stats.xml index f2b6c03de0..1164711a8e 100644 --- a/src/bin/stats/b10-stats.xml +++ b/src/bin/stats/b10-stats.xml @@ -143,8 +143,7 @@ STATISTICS DATA - The b10-stats daemon contains - built-in statistics: + The b10-stats daemon contains these statistics: @@ -156,6 +155,38 @@ ISO 8601 format. + + stats.boot_time + The date and time when this daemon was + started in ISO 8601 format. + This is a constant which can't be reset except by restarting + b10-stats. + + + + + stats.last_update_time + The date and time (in ISO 8601 format) + when this daemon last received data from another component. 
+ + + + + stats.lname + This is the name used for the + b10-msgq command-control channel. + (This is a constant which can't be reset except by restarting + b10-stats.) + + + + + stats.start_time + This is the date and time (in ISO 8601 format) + when this daemon started collecting data. + + + stats.timestamp The current date and time represented in @@ -164,23 +195,13 @@ one hundred thousandth of second. - - - - + + See other manual pages for explanations for their statistics + that are kept track by b10-stats. + + From 09e8c50958a1fca313c2be427c2991c39798f90f Mon Sep 17 00:00:00 2001 From: Michal 'vorner' Vaner Date: Tue, 16 Aug 2011 16:07:10 +0200 Subject: [PATCH 171/175] Fix lexical cast (missing boost::) Reviewed on jabber. --- src/lib/dns/rdata/generic/afsdb_18.cc | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/src/lib/dns/rdata/generic/afsdb_18.cc b/src/lib/dns/rdata/generic/afsdb_18.cc index 0aca23f133..dd7fa5f861 100644 --- a/src/lib/dns/rdata/generic/afsdb_18.cc +++ b/src/lib/dns/rdata/generic/afsdb_18.cc @@ -23,6 +23,8 @@ #include #include +#include + using namespace std; using namespace isc::util::str; @@ -109,7 +111,7 @@ AFSDB::operator=(const AFSDB& source) { /// \return A \c string object that represents the \c AFSDB object. string AFSDB::toText() const { - return (lexical_cast(subtype_) + " " + server_.toText()); + return (boost::lexical_cast(subtype_) + " " + server_.toText()); } /// \brief Render the \c AFSDB in the wire format without name compression. From 1e702fae4c9adbd7134a739dee28c868a15f0b3e Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 16 Aug 2011 11:03:20 -0500 Subject: [PATCH 172/175] [master] fix typo in message file expansion ordering the RRsets and class were reversed. This was the only example in all .mes files that had the numbers out of order. Okayed via jabber. 
This fixes: 2011-08-15 14:37:22.991 DEBUG [b10-resolver.cache] CACHE_RRSET_INIT initializing RRset cache for IN RRsets of class 10000 to: 2011-08-16 11:01:50.899 DEBUG [b10-resolver.cache] CACHE_RRSET_INIT initializing RRset cache for 10000 RRsets of class IN One thing to note: If this was a production server, we would need to consider deprecating this message ID and creating a new message ID as use of the logged results would be broken, because the problem is in the logged brief explanation and not the full description. I will send an email about this. --- src/lib/cache/cache_messages.mes | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/lib/cache/cache_messages.mes b/src/lib/cache/cache_messages.mes index 2a68cc23bf..7f593ec6e6 100644 --- a/src/lib/cache/cache_messages.mes +++ b/src/lib/cache/cache_messages.mes @@ -124,7 +124,7 @@ the message will not be cached. Debug message. The requested data was found in the RRset cache. However, it is expired, so the cache removed it and is going to pretend nothing was found. -% CACHE_RRSET_INIT initializing RRset cache for %2 RRsets of class %1 +% CACHE_RRSET_INIT initializing RRset cache for %1 RRsets of class %2 Debug message. The RRset cache to hold at most this many RRsets for the given class is being created. From 7cdda20613f7ed7b18e7fe210ae0f6a87054dbf3 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 16 Aug 2011 11:50:17 -0500 Subject: [PATCH 173/175] [master] update verbose explanation, document query_acl, add some history The query_acl now has some beginning docs here, but needs more. 
--- src/bin/resolver/b10-resolver.xml | 28 ++++++++++++++++++++++++---- 1 file changed, 24 insertions(+), 4 deletions(-) diff --git a/src/bin/resolver/b10-resolver.xml b/src/bin/resolver/b10-resolver.xml index bdf4f8ad25..efe045a5f3 100644 --- a/src/bin/resolver/b10-resolver.xml +++ b/src/bin/resolver/b10-resolver.xml @@ -20,7 +20,7 @@ - February 17, 2011 + August 16, 2011 @@ -99,11 +99,14 @@ + + - Enabled verbose mode. This enables diagnostic messages to - STDERR. + Enable verbose mode. + This sets logging to the maximum debugging level. @@ -146,6 +149,22 @@ once that is merged you can for instance do 'config add Resolver/forward_address + + + + + + + query_acl is a list of query access control + rules. The list items are the action string + and the from or key strings. + The possible actions are ACCEPT, REJECT and DROP. + The from is a remote (source) IPv4 or IPv6 + address or special keyword. + The key is a TSIG key name. + The default configuration accepts queries from 127.0.0.1 and ::1. + + retries is the number of times to retry (resend query) after a query timeout @@ -234,7 +253,8 @@ once that is merged you can for instance do 'config add Resolver/forward_address The b10-resolver daemon was first coded in September 2010. The initial implementation only provided forwarding. Iteration was introduced in January 2011. - + Caching was implemented in February 2011. + Access control was introduced in June 2011. From 6a55aa002c8f3b701dbb8291cd9a8e21534c6974 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 16 Aug 2011 12:12:48 -0500 Subject: [PATCH 174/175] [master] add the statistics date While here also sort the configurations (no content change for that). Regenerate the nroff file. 
--- src/bin/auth/b10-auth.8 | 47 +++++++++++++++++++++++++++------------ src/bin/auth/b10-auth.xml | 18 +++++++-------- 2 files changed, 42 insertions(+), 23 deletions(-) diff --git a/src/bin/auth/b10-auth.8 b/src/bin/auth/b10-auth.8 index 0356683b11..aedadeefb0 100644 --- a/src/bin/auth/b10-auth.8 +++ b/src/bin/auth/b10-auth.8 @@ -2,12 +2,12 @@ .\" Title: b10-auth .\" Author: [FIXME: author] [see http://docbook.sf.net/el/author] .\" Generator: DocBook XSL Stylesheets v1.75.2 -.\" Date: March 8, 2011 +.\" Date: August 11, 2011 .\" Manual: BIND10 .\" Source: BIND10 .\" Language: English .\" -.TH "B10\-AUTH" "8" "March 8, 2011" "BIND10" "BIND10" +.TH "B10\-AUTH" "8" "August 11, 2011" "BIND10" "BIND10" .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- @@ -70,18 +70,6 @@ defines the path to the SQLite3 zone file when using the sqlite datasource\&. Th /usr/local/var/bind10\-devel/zone\&.sqlite3\&. .PP -\fIlisten_on\fR -is a list of addresses and ports for -\fBb10\-auth\fR -to listen on\&. The list items are the -\fIaddress\fR -string and -\fIport\fR -number\&. By default, -\fBb10\-auth\fR -listens on port 53 on the IPv6 (::) and IPv4 (0\&.0\&.0\&.0) wildcard addresses\&. -.PP - \fIdatasources\fR configures data sources\&. The list items include: \fItype\fR @@ -114,6 +102,18 @@ In this development version, currently this is only used for the memory data sou .RE .PP +\fIlisten_on\fR +is a list of addresses and ports for +\fBb10\-auth\fR +to listen on\&. The list items are the +\fIaddress\fR +string and +\fIport\fR +number\&. By default, +\fBb10\-auth\fR +listens on port 53 on the IPv6 (::) and IPv4 (0\&.0\&.0\&.0) wildcard addresses\&. +.PP + \fIstatistics\-interval\fR is the timer interval in seconds for \fBb10\-auth\fR @@ -164,6 +164,25 @@ immediately\&. \fBshutdown\fR exits \fBb10\-auth\fR\&. 
(Note that the BIND 10 boss process will restart this service\&.) +.SH "STATISTICS DATA" +.PP +The statistics data collected by the +\fBb10\-stats\fR +daemon include: +.PP +auth\&.queries\&.tcp +.RS 4 +Total count of queries received by the +\fBb10\-auth\fR +server over TCP since startup\&. +.RE +.PP +auth\&.queries\&.udp +.RS 4 +Total count of queries received by the +\fBb10\-auth\fR +server over UDP since startup\&. +.RE .SH "FILES" .PP diff --git a/src/bin/auth/b10-auth.xml b/src/bin/auth/b10-auth.xml index a05be586b0..636f437993 100644 --- a/src/bin/auth/b10-auth.xml +++ b/src/bin/auth/b10-auth.xml @@ -131,15 +131,6 @@ /usr/local/var/bind10-devel/zone.sqlite3. - - listen_on is a list of addresses and ports for - b10-auth to listen on. - The list items are the address string - and port number. - By default, b10-auth listens on port 53 - on the IPv6 (::) and IPv4 (0.0.0.0) wildcard addresses. - - datasources configures data sources. The list items include: @@ -164,6 +155,15 @@ + + listen_on is a list of addresses and ports for + b10-auth to listen on. + The list items are the address string + and port number. + By default, b10-auth listens on port 53 + on the IPv6 (::) and IPv4 (0.0.0.0) wildcard addresses. + + statistics-interval is the timer interval in seconds for b10-auth to share its From 485e0ba7f7fe11e4d28e3eec2be835157521a6e9 Mon Sep 17 00:00:00 2001 From: "Jeremy C. Reed" Date: Tue, 16 Aug 2011 12:35:37 -0500 Subject: [PATCH 175/175] [master] mention class and its default --- src/bin/xfrin/b10-xfrin.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/bin/xfrin/b10-xfrin.xml b/src/bin/xfrin/b10-xfrin.xml index 71fcf931ca..a8fe425d37 100644 --- a/src/bin/xfrin/b10-xfrin.xml +++ b/src/bin/xfrin/b10-xfrin.xml @@ -103,7 +103,7 @@ in separate zonemgr process. b10-xfrin daemon. 
The list items are: name (the zone name), - + class (defaults to IN), master_addr (the zone master to transfer from), master_port (defaults to 53), and tsig_key (optional TSIG key to use).
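To illustrate the zone list this patch documents, a zone could be added from bindctl roughly as follows, relying on the defaults just described (class IN, master_port 53). The zone name, master address, and the Xfrin/zones config path are illustrative assumptions, not taken from this patch:

```
> config add Xfrin/zones
> config set Xfrin/zones[0]/name "example.com."
> config set Xfrin/zones[0]/master_addr "192.0.2.1"
> config commit
```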